Jan 26 11:17:29 crc systemd[1]: Starting Kubernetes Kubelet...
Jan 26 11:17:29 crc restorecon[4687]: Relabeled /var/lib/kubelet/config.json from system_u:object_r:unlabeled_t:s0 to system_u:object_r:container_var_lib_t:s0
Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/device-plugins not reset as customized by admin to system_u:object_r:container_file_t:s0
Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/device-plugins/kubelet.sock not reset as customized by admin to system_u:object_r:container_file_t:s0
Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/volumes/kubernetes.io~configmap/nginx-conf/..2025_02_23_05_40_35.4114275528/nginx.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/22e96971 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/21c98286 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/0f1869e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/46889d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/5b6a5969 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/6c7921f5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4804f443 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/2a46b283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/a6b5573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4f88ee5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/5a4eee4b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/cd87c521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/38602af4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/1483b002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/0346718b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/d3ed4ada not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/3bb473a5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/8cd075a9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/00ab4760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/54a21c09 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/70478888 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/43802770 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/955a0edc not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/bca2d009 not reset as customized by admin to system_u:object_r:container_file_t:s0:c140,c1009
Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/b295f9bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/bc46ea27 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5731fc1b not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5e1b2a3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/943f0936 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/3f764ee4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/8695e3f9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/aed7aa86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/c64d7448 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/0ba16bd2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/207a939f not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/54aa8cdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/1f5fa595 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/bf9c8153 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/47fba4ea not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/7ae55ce9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7906a268 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/ce43fa69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7fc7ea3a not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/d8c38b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/9ef015fb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/b9db6a41 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/b1733d79 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/afccd338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/9df0a185 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/18938cf8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/7ab4eb23 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/56930be6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_35.630010865 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/0d8e3722 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/d22b2e76 not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/e036759f not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/2734c483 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/57878fe7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/3f3c2e58 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/375bec3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/7bc41e08 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/48c7a72d not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/4b66701f not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/a5a1c202 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_40.1388695756 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/26f3df5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/6d8fb21d not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/50e94777 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208473b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/ec9e08ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3b787c39 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208eaed5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/93aa3a2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3c697968 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/ba950ec9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/cb5cdb37 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/f2df9827 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/fedaa673 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/9ca2df95 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/b2d7460e not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2207853c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/241c1c29 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2d910eaf not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/c6c0f2e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/399edc97 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8049f7cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/0cec5484 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/312446d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c406,c828
Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8e56a35d not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 26 11:17:29 crc restorecon[4687]:
/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/2d30ddb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/eca8053d not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/c3a25c9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c168,c522 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/b9609c22 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/e8b0eca9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/b36a9c3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711 Jan 26 11:17:29 crc restorecon[4687]: 
/var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/38af7b07 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/ae821620 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/baa23338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/2c534809 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c661,c999 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/59b29eae not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/c91a8e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/4d87494a not reset as customized by admin to system_u:object_r:container_file_t:s0:c442,c857 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/1e33ca63 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/8dea7be2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d0b04a99 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d84f01e7 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c12,c18 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/4109059b not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/a7258a3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/05bdf2b6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/f3261b51 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/315d045e not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/5fdcf278 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/d053f757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/c2850dc7 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fcfb0b2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c7ac9b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:29 crc 
restorecon[4687]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fa0c0d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c609b6ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/2be6c296 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/89a32653 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/4eb9afeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/13af6efa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/b03f9724 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/e3d105cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 26 11:17:29 crc restorecon[4687]: 
/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/3aed4d83 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/0765fa6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/2cefc627 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c2,c18 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/3dcc6345 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/365af391 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 26 11:17:29 crc restorecon[4687]: 
/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b1130c0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/236a5913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b9432e26 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/5ddb0e3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/986dc4fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/8a23ff9a not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c9,c12 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/9728ae68 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/665f31d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 26 11:17:29 crc restorecon[4687]: 
/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 26 11:17:29 crc 
restorecon[4687]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/136c9b42 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/98a1575b not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/cac69136 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/5deb77a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/2ae53400 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 26 
11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/e46f2326 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/dc688d3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/3497c3cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/177eb008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c2,c13 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/af5a2afa not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/d780cb1f not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/49b0f374 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 26 11:17:29 crc restorecon[4687]: 
/var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/26fbb125 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/cf14125a not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/b7f86972 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c5,c11 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/e51d739c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/88ba6a69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/669a9acf not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/5cd51231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/75349ec7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/15c26839 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/45023dcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/2bb66a50 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/64d03bdd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 26 11:17:29 crc 
restorecon[4687]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/ab8e7ca0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/bb9be25f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/9a0b61d3 not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/d471b9d2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/8cb76b8e not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/11a00840 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/ec355a92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/992f735e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797/config.yaml not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d59cdbbc not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/72133ff0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/c56c834c not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d13724c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/0a498258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 26 11:17:29 crc restorecon[4687]: 
/var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa471982 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fc900d92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa7d68da not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/4bacf9b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/424021b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/fc2e31a3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/f51eefac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/c8997f2f not reset as customized 
by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/7481f599 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/fdafea19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 26 11:17:29 crc restorecon[4687]: 
/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/d0e1c571 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/ee398915 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/682bb6b8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a3e67855 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a989f289 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/915431bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/7796fdab not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/dcdb5f19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/a3aaa88c not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/5508e3e6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/160585de not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/e99f8da3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/8bc85570 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/a5861c91 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/84db1135 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/9e1a6043 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/c1aba1c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/d55ccd6d not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 26 11:17:29 crc restorecon[4687]: 
/var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/971cc9f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/8f2e3dcf not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/ceb35e9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/1c192745 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/5209e501 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/f83de4df not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/e7b978ac not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/c64304a1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/5384386b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/etc-hosts not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c268,c620 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/cce3e3ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/8fb75465 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/740f573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/32fd1134 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/0a861bd3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/80363026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/bfa952a8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c129,c158 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..2025_02_23_05_33_31.333075221 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 26 11:17:29 crc 
restorecon[4687]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/793bf43d not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/7db1bb6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/4f6a0368 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/c12c7d86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/36c4a773 not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/4c1e98ae not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/a4c8115c not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/setup/7db1802e not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 26 11:17:29 crc restorecon[4687]: 
/var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver/a008a7ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-syncer/2c836bac not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-regeneration-controller/0ce62299 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-insecure-readyz/945d2457 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-check-endpoints/7d5c1dd8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:29 crc restorecon[4687]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:29 crc restorecon[4687]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:29 crc restorecon[4687]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/index.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:29 crc restorecon[4687]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:29 crc restorecon[4687]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:29 crc restorecon[4687]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:29 crc restorecon[4687]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:29 crc restorecon[4687]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:29 crc restorecon[4687]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:29 crc restorecon[4687]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:29 crc restorecon[4687]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/bundle-v1.15.0.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/channel.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/package.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:29 crc restorecon[4687]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/bc8d0691 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/6b76097a not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/34d1af30 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/312ba61c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/645d5dd1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/16e825f0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/4cf51fc9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/2a23d348 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/075dbd49 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/dd585ddd not reset as customized by admin to system_u:object_r:container_file_t:s0:c377,c642
Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/17ebd0ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c343
Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/005579f4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_23_11.1287037894 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/bf5f3b9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263
Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/af276eb7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701
Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/ea28e322 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/692e6683 not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263
Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/871746a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701
Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/4eb2e958 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/ca9b62da not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/0edd6fce not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/tls-ca-bundle.pem not reset as customized by admin to
system_u:object_r:container_file_t:s0:c14,c22 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/containers/controller-manager/89b4555f not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/655fcd71 not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/0d43c002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/e68efd17 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/9acf9b65 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/5ae3ff11 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/1e59206a not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/27af16d1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c304,c1017 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/7918e729 not reset as customized by admin to system_u:object_r:container_file_t:s0:c853,c893 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/5d976d0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c585,c981 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c5,c25 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c5,c25 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/d7f55cbb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/f0812073 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/1a56cbeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/7fdd437e not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/cdfb5652 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c336,c787 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 
Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 26 
11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 26 11:17:29 crc restorecon[4687]: 
/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/fix-audit-permissions/fb93119e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver/f1e8fc0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver-check-endpoints/218511f3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 26 11:17:29 crc restorecon[4687]: 
/var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server/serving-certs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/ca8af7b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/72cc8a75 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/6e8a3760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 26 11:17:29 crc 
restorecon[4687]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4c3455c0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/2278acb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4b453e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/3ec09bda not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 11:17:29 crc restorecon[4687]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2/cacerts.bin not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java/cacerts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl/ca-bundle.trust.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 11:17:29 crc 
restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/email-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/objsign-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2ae6433e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fde84897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75680d2e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/openshift-service-serving-signer_1740288168.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/facfc4fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f5a969c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CFCA_EV_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9ef4a08a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ingress-operator_1740288202.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2f332aed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/248c8271.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d10a21f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ACCVRAIZ1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a94d09e5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c9a4d3b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40193066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd8c0d63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b936d1c6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CA_Disig_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4fd49c6c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM_SERVIDORES_SEGUROS.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b81b93f0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f9a69fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b30d5fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ANF_Secure_Server_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b433981b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93851c9e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9282e51c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7dd1bc4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Actalis_Authentication_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/930ac5d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f47b495.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e113c810.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5931b5bc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Commercial.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2b349938.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e48193cf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/302904dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a716d4ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Networking.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93bc0acc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/86212b19.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b727005e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbc54cab.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f51bb24c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c28a8a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9c8dfbd4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ccc52f49.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cb1c3204.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ce5e74ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd08c599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6d41d539.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb5fa911.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e35234b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8cb5ee0f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a7c655d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f8fc53da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/de6d66f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d41b5e2a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/41a3f684.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1df5a75f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_2011.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e36a6752.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b872f2b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9576d26b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/228f89db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_ECC_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb717492.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d21b73c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b1b94ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/595e996b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_RSA_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b46e03d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/128f4b91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_3_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81f2d2b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Autoridad_de_Certificacion_Firmaprofesional_CIF_A62634068.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3bde41ac.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d16a5865.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_EC-384_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0179095f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ffa7f1eb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9482e63a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4dae3dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e359ba6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7e067d03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/95aff9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7746a63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Baltimore_CyberTrust_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/653b494a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3ad48a91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_2_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/54657681.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/82223c44.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8de2f56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d9dafe4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d96b65e2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee64a828.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40547a79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5a3f0ff8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a780d93.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/34d996fb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/eed8c118.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/89c02a45.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b1159c4c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d6325660.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4c339cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8312c4c1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_E1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8508e720.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5fdd185d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48bec511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/69105f4f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b9bc432.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/32888f65.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b03dec0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/219d9499.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5acf816d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbf06781.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc99f41e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AAA_Certificate_Services.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/985c1f52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8794b4e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_BR_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7c037b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 11:17:29 crc restorecon[4687]:
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ef954a4e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_EV_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2add47b6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/90c5a3c8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0f3e76e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/53a1b57a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 11:17:29 crc restorecon[4687]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_EV_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5ad8a5d6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/68dd7389.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d04f354.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 11:17:29 crc restorecon[4687]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d6437c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/062cdee6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bd43e1dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7f3d5d1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c491639e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 11:17:29 crc restorecon[4687]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3513523f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/399e7759.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/feffd413.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d18e9066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/607986c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c90bc37d.0 not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1b0f7e5c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e08bfd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dd8e9d41.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed39abd0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a3418fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bc3f2570.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 11:17:29 crc restorecon[4687]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_High_Assurance_EV_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/244b5494.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81b9768f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4be590e0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_ECC_P384_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9846683b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 11:17:29 crc restorecon[4687]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/252252d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e8e7201.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_RSA4096_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d52c538d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c44cc0c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 11:17:29 crc restorecon[4687]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Trusted_Root_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75d1b2ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a2c66da8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ecccd8db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust.net_Certification_Authority__2048_.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/aee5f10d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 11:17:29 crc restorecon[4687]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e7271e8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0e59380.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4c3982f2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b99d060.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf64f35b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0a775a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/002c0b4f.0 not reset 
as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cc450945.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_EC1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/106f3e4d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b3fb433b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4042bcee.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 11:17:29 crc 
restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/02265526.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/455f1b52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0d69c7e1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9f727ac7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5e98733a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0cd152c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 11:17:29 crc restorecon[4687]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc4d6a89.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6187b673.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/FIRMAPROFESIONAL_CA_ROOT-A_WEB.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ba8887ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/068570d1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f081611a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48a195d8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GDCA_TrustAUTH_R5_ROOT.pem 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f6fa695.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab59055e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b92fd57f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GLOBALTRUST_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fa5da96b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ec40989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7719f463.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 11:17:29 crc restorecon[4687]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1001acf7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f013ecaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/626dceaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c559d742.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1d3472b9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9479c8c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a81e292b.0 not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4bfab552.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e071171e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/57bcb2da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_ECC_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 
Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab5346f4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5046c355.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_RSA_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/865fbdf9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da0cfd1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/85cde254.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_ECC_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 11:17:29 crc restorecon[4687]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbb3f32b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureSign_RootCA11.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5860aaa6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/31188b5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HiPKI_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c7f1359b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 11:17:29 crc restorecon[4687]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f15c80c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hongkong_Post_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/09789157.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/18856ac4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e09d511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Commercial_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 11:17:29 crc restorecon[4687]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cf701eeb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d06393bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Public_Sector_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/10531352.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Izenpe.com.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureTrust_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0ed035a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 11:17:29 crc restorecon[4687]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsec_e-Szigno_Root_CA_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8160b96c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8651083.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2c63f966.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_ECC_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d89cda1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 11:17:29 crc restorecon[4687]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/01419da9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_RSA_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7a5b843.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_RSA_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf53fb88.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9591a472.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3afde786.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 11:17:29 crc restorecon[4687]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Gold_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NAVER_Global_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3fb36b73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d39b0a2c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a89d74c2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd58d51e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7db1890.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 11:17:29 crc restorecon[4687]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NetLock_Arany__Class_Gold__F__tan__s__tv__ny.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/988a38cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/60afe812.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f39fc864.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5443e9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GB_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e73d606e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 11:17:29 crc restorecon[4687]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dfc0fe80.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b66938e9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e1eab7c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GC_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/773e07ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c899c73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d59297b8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ddcda989.0 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_1_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/749e9e03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/52b525c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7e8dc79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a819ef2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 11:17:29 crc restorecon[4687]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/08063a00.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b483515.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/064e0aa9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1f58a078.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6f7454b3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7fa05551.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3.pem not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76faf6c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9339512a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f387163d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee37c333.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e18bfb83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e442e424.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 11:17:29 crc restorecon[4687]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fe8a2cd8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/23f4c490.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5cd81ad7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0c70a8d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7892ad52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SZAFIR_ROOT_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 11:17:29 crc restorecon[4687]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4f316efb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_RSA_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/06dc52d5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/583d0756.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0bf05006.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 11:17:29 crc restorecon[4687]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/88950faa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9046744a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c860d51.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_RSA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6fa5da56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/33ee480d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Secure_Global_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 11:17:29 crc restorecon[4687]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/63a2c897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_ECC_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bdacca6f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ff34af3f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbff3a01.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_ECC_RootCA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_C1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 11:17:29 crc restorecon[4687]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/406c9bb1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_C3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Services_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Silver_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/99e1b953.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 11:17:29 crc restorecon[4687]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/14bc7599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TUBITAK_Kamu_SM_SSL_Kok_Sertifikasi_-_Surum_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a3adc42.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 11:17:29 crc restorecon[4687]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f459871d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_ECC_Root_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_RSA_Root_2023.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TeliaSonera_Root_CA_v1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telia_Root_CA_v2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 11:17:29 crc restorecon[4687]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f103249.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f058632f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-certificates.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9bf03295.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/98aaf404.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 11:17:29 crc restorecon[4687]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1cef98f5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/073bfcc5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2923b3f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f249de83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/edcbddb5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 11:17:29 crc restorecon[4687]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P256_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b5697b0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ae85e5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b74d2bd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P384_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d887a5bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 11:17:29 crc restorecon[4687]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9aef356c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TunTrust_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd64f3fc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e13665f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Extended_Validation_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f5dc4f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da7377f6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 11:17:29 crc restorecon[4687]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Global_G2_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c01eb047.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/304d27c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed858448.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f30dd6ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/04f60c28.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 11:17:29 crc restorecon[4687]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_ECC_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fc5a8f99.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/35105088.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee532fd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/XRamp_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/706f604c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 11:17:29 crc restorecon[4687]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76579174.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d86cdd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/882de061.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f618aec.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a9d40e02.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e-Szigno_Root_CA_2017.pem 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e868b802.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/83e9984f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ePKI_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca6e4ad9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d6523ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4b718d9b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/869fbf79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 11:17:29 crc restorecon[4687]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/containers/registry/f8d22bdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/6e8bbfac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/54dd7996 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/a4f1bb05 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/207129da not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/c1df39e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/15b8f1cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 26 11:17:29 crc restorecon[4687]: 
/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 26 11:17:29 crc restorecon[4687]: 
/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/77bd6913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/2382c1b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/704ce128 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/70d16fe0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/bfb95535 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/57a8e8e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 26 11:17:29 crc restorecon[4687]: 
/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/1b9d3e5e not reset as customized by admin to system_u:object_r:container_file_t:s0:c107,c917 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/fddb173c not reset as customized by admin to system_u:object_r:container_file_t:s0:c202,c983 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/95d3c6c4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c219,c404 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/bfb5fff5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/2aef40aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/c0391cad not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/1119e69d not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/660608b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/8220bd53 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/85f99d5c not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Jan 26 11:17:29 crc restorecon[4687]: 
/var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/4b0225f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/9c2a3394 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/e820b243 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/1ca52ea0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/e6988e45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 26 11:17:29 crc restorecon[4687]: 
/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/6655f00b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/98bc3986 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/08e3458a not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/2a191cb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/6c4eeefb not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/f61a549c not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c4,c17 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/24891863 not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/fbdfd89c not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/9b63b3bc not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/8acde6d6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/node-driver-registrar/59ecbba3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/csi-provisioner/685d4be3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c2,c23 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 26 11:17:29 crc restorecon[4687]: 
/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/containers/route-controller-manager/feaea55e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:29 crc restorecon[4687]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 
26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:29 crc restorecon[4687]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:29 crc restorecon[4687]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:29 crc restorecon[4687]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:29 crc restorecon[4687]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:29 crc restorecon[4687]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 11:17:29 crc restorecon[4687]:
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:29 crc restorecon[4687]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:29 crc restorecon[4687]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:29 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:30 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:30 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:30 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:30 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:30 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:30 
crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:30 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:30 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:30 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:30 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:30 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:30 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:30 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:30 crc 
restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:30 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/63709497 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:30 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/d966b7fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:30 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/f5773757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:30 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/81c9edb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:30 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/57bf57ee not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:30 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/86f5e6aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:30 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/0aabe31d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:30 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/d2af85c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:30 crc restorecon[4687]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/09d157d9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:30 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:30 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:30 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:30 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:30 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:30 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:30 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:30 crc restorecon[4687]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:30 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:30 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:30 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:30 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:30 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:30 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:30 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 
Jan 26 11:17:30 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:30 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:30 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:30 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:30 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:30 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:30 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:30 crc restorecon[4687]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:30 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:30 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:30 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:30 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:30 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:30 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:30 crc restorecon[4687]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:30 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:30 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:30 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:30 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:30 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:30 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:30 crc restorecon[4687]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:30 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:30 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:30 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:30 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:30 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:30 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:30 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller/catalog.json not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:30 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:30 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:30 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:30 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:30 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:30 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:30 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:30 crc restorecon[4687]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:30 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:30 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:30 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:30 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:30 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:30 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:30 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:30 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:30 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:30 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:30 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:30 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:30 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:30 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:30 crc restorecon[4687]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:30 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:30 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:30 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:30 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:30 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:30 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:30 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:30 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:30 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:30 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:30 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:30 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:30 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:30 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:30 crc restorecon[4687]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:30 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:30 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:30 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:30 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:30 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:30 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:30 crc restorecon[4687]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:30 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:30 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:30 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:30 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:30 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:30 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:30 crc restorecon[4687]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:30 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:30 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:30 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:30 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:30 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:30 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:30 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:30 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:30 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:30 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:30 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:30 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:30 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:30 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:30 crc restorecon[4687]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:30 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:30 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:30 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:30 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:30 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:30 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:30 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:30 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:30 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:30 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:30 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:30 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:30 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:30 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:30 crc restorecon[4687]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:30 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:30 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:30 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:30 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:30 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:30 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:30 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:30 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:30 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:30 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:30 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:30 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:30 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:30 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:30 crc restorecon[4687]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:30 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:30 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:30 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:30 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:30 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:30 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:30 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:30 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:30 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:30 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:30 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:30 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:30 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:30 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:30 crc restorecon[4687]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:30 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:30 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:30 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:30 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:30 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:30 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:30 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:30 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:30 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:30 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:30 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:30 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:30 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:30 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:30 crc restorecon[4687]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 11:17:30 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 11:17:30 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 11:17:30 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 11:17:30 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 11:17:30 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 11:17:30 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 11:17:30 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 11:17:30 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 11:17:30 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 11:17:30 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 11:17:30 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 11:17:30 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 11:17:30 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 11:17:30 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 11:17:30 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 11:17:30 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 11:17:30 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 11:17:30 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 11:17:30 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 11:17:30 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 11:17:30 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 11:17:30 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 11:17:30 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 11:17:30 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 11:17:30 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 11:17:30 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 11:17:30 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 11:17:30 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 11:17:30 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 11:17:30 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 11:17:30 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 11:17:30 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 11:17:30 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 11:17:30 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 11:17:30 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 11:17:30 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 11:17:30 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 11:17:30 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 11:17:30 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 11:17:30 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 11:17:30 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 11:17:30 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 11:17:30 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 11:17:30 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 11:17:30 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 11:17:30 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 11:17:30 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 11:17:30 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 11:17:30 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 11:17:30 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 11:17:30 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 11:17:30 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 11:17:30 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 11:17:30 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 11:17:30 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 11:17:30 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 11:17:30 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 11:17:30 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 11:17:30 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 11:17:30 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 11:17:30 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 11:17:30 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 11:17:30 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 11:17:30 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 11:17:30 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 11:17:30 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 11:17:30 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 11:17:30 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 11:17:30 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 11:17:30 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 11:17:30 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 11:17:30 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 11:17:30 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 11:17:30 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 11:17:30 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 11:17:30 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 11:17:30 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 11:17:30 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 11:17:30 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 11:17:30 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 11:17:30 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 11:17:30 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 11:17:30 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 11:17:30 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 11:17:30 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 11:17:30 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 11:17:30 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 11:17:30 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 11:17:30 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 11:17:30 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 11:17:30 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 11:17:30 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 11:17:30 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 11:17:30 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 11:17:30 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 11:17:30 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 11:17:30 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 11:17:30 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 11:17:30 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 11:17:30 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 11:17:30 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 11:17:30 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 11:17:30 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 11:17:30 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 11:17:30 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 11:17:30 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 11:17:30 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 11:17:30 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 11:17:30 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 11:17:30 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 11:17:30 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 11:17:30 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 11:17:30 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 11:17:30 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 11:17:30 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 11:17:30 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 11:17:30 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 11:17:30 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 11:17:30 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 11:17:30 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 11:17:30 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 11:17:30 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 11:17:30 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 11:17:30 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 11:17:30 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 11:17:30 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 11:17:30 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 11:17:30 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 11:17:30 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 11:17:30 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 11:17:30 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 11:17:30 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 11:17:30 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 11:17:30 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 11:17:30 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 11:17:30 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 11:17:30 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 11:17:30 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 11:17:30 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 11:17:30 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 11:17:30 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 11:17:30 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 11:17:30 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 11:17:30 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 11:17:30 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 11:17:30 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 11:17:30 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 11:17:30 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 11:17:30 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 11:17:30 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 11:17:30 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator/catalog.json
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:30 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:30 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:30 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:30 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:30 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:30 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:30 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:30 crc restorecon[4687]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:30 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:30 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:30 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:30 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:30 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:30 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:30 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:30 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:30 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:30 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:30 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:30 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:30 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:30 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:30 crc restorecon[4687]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:30 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:30 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:30 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:30 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:30 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:30 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:30 crc restorecon[4687]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:30 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:30 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:30 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:30 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:30 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:30 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:30 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:30 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:30 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:30 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:30 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:30 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:30 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:30 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:30 crc restorecon[4687]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:30 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:30 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:30 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:30 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:30 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:30 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:30 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:30 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:30 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:30 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:30 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:30 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:30 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:30 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:30 crc restorecon[4687]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:30 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:30 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:30 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:30 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:30 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:30 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:30 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:30 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:30 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:30 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:30 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:30 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:30 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:30 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:30 crc restorecon[4687]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:30 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:30 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:30 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:30 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:30 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:30 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:30 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:30 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:30 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:30 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:30 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:30 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:30 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:30 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:30 crc restorecon[4687]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:30 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:30 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:30 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:30 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:30 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:30 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:30 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:30 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:30 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:30 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:30 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:30 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:30 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:30 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:30 crc restorecon[4687]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:30 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:30 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:30 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:30 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:30 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:30 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:30 crc restorecon[4687]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:30 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:30 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:30 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:30 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:30 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:30 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:30 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:30 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:30 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:30 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:30 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:30 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:30 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:30 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:30 crc restorecon[4687]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:30 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:30 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:30 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:30 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:30 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:30 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:30 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:30 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:30 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:30 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:30 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:30 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:30 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:30 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:30 crc restorecon[4687]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:30 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:30 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:30 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:30 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:30 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:30 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:30 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:30 crc 
restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:30 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:30 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:30 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:30 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:30 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:30 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:30 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:30 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:30 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:30 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:30 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:30 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:30 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:30 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:30 crc restorecon[4687]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:30 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:30 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:30 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:30 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:30 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:30 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:30 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:30 crc 
restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:30 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:30 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:30 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:30 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:30 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:30 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:30 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator/catalog.json not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:30 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:30 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:30 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:30 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:30 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:30 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:30 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:30 crc restorecon[4687]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:30 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:30 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:30 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:30 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:30 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:30 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:30 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:30 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:30 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:30 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:30 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:30 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c0fe7256 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:30 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c30319e4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:30 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/e6b1dd45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:30 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/2bb643f0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:30 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/920de426 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:30 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/70fa1e87 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:30 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/a1c12a2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:30 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/9442e6c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:30 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/5b45ec72 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:30 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:30 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:30 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:30 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:30 crc restorecon[4687]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:30 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:30 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:30 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:30 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:30 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:30 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:30 crc restorecon[4687]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:30 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:30 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:30 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:30 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:30 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:30 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:30 crc restorecon[4687]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:30 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:30 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:30 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:30 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:30 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:30 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:30 crc restorecon[4687]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:30 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:30 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:30 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:30 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:30 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:30 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:30 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:30 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:30 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:30 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:30 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:30 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:30 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:30 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:30 crc restorecon[4687]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:30 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:30 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:30 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:30 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:30 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:30 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:30 crc restorecon[4687]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:30 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:30 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:30 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:30 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:30 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:30 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:30 crc restorecon[4687]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:30 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:30 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:30 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:30 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:30 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:30 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:30 crc restorecon[4687]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:30 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:30 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:30 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:30 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:30 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:30 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:30 crc restorecon[4687]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:30 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:30 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:30 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:30 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:30 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:30 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:30 crc restorecon[4687]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:30 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:30 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:30 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:30 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:30 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:30 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:30 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:30 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:30 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:30 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:30 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:30 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:30 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:30 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:30 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:30 crc restorecon[4687]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/3c9f3a59 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:30 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/1091c11b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:30 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/9a6821c6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:30 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/ec0c35e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:30 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/517f37e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:30 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/6214fe78 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:30 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/ba189c8b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:30 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/351e4f31 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:30 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/c0f219ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 11:17:30 crc restorecon[4687]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/etc-hosts 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 26 11:17:30 crc restorecon[4687]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/8069f607 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Jan 26 11:17:30 crc restorecon[4687]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/559c3d82 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Jan 26 11:17:30 crc restorecon[4687]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/605ad488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 26 11:17:30 crc restorecon[4687]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/148df488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Jan 26 11:17:30 crc restorecon[4687]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/3bf6dcb4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Jan 26 11:17:30 crc restorecon[4687]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/022a2feb not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 26 11:17:30 crc restorecon[4687]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/938c3924 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Jan 26 11:17:30 crc restorecon[4687]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/729fe23e not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Jan 26 11:17:30 crc restorecon[4687]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/1fd5cbd4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c247,c522 Jan 26 11:17:30 crc restorecon[4687]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/a96697e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Jan 26 11:17:30 crc restorecon[4687]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/e155ddca not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Jan 26 11:17:30 crc restorecon[4687]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/10dd0e0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 26 11:17:30 crc restorecon[4687]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 26 11:17:30 crc restorecon[4687]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 26 11:17:30 crc restorecon[4687]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 26 11:17:30 crc restorecon[4687]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 26 11:17:30 crc restorecon[4687]: 
/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 26 11:17:30 crc restorecon[4687]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 26 11:17:30 crc restorecon[4687]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 26 11:17:30 crc restorecon[4687]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 26 11:17:30 crc restorecon[4687]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 26 11:17:30 crc restorecon[4687]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 26 11:17:30 crc restorecon[4687]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 26 11:17:30 crc restorecon[4687]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 26 11:17:30 crc 
restorecon[4687]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 26 11:17:30 crc restorecon[4687]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 26 11:17:30 crc restorecon[4687]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 26 11:17:30 crc restorecon[4687]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 26 11:17:30 crc restorecon[4687]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 26 11:17:30 crc restorecon[4687]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 26 11:17:30 crc restorecon[4687]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 26 11:17:30 crc restorecon[4687]: 
/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 26 11:17:30 crc restorecon[4687]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 26 11:17:30 crc restorecon[4687]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/6f2c8392 not reset as customized by admin to system_u:object_r:container_file_t:s0:c267,c588 Jan 26 11:17:30 crc restorecon[4687]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/bd241ad9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 26 11:17:30 crc restorecon[4687]: /var/lib/kubelet/plugins not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 26 11:17:30 crc restorecon[4687]: /var/lib/kubelet/plugins/csi-hostpath not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 26 11:17:30 crc restorecon[4687]: /var/lib/kubelet/plugins/csi-hostpath/csi.sock not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 26 11:17:30 crc restorecon[4687]: /var/lib/kubelet/plugins/kubernetes.io not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 26 11:17:30 crc restorecon[4687]: /var/lib/kubelet/plugins/kubernetes.io/csi not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 26 11:17:30 crc restorecon[4687]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 26 11:17:30 crc restorecon[4687]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983 not reset as customized by admin to 
system_u:object_r:container_file_t:s0 Jan 26 11:17:30 crc restorecon[4687]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 26 11:17:30 crc restorecon[4687]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/vol_data.json not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 26 11:17:30 crc restorecon[4687]: /var/lib/kubelet/plugins_registry not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 26 11:17:30 crc restorecon[4687]: Relabeled /var/usrlocal/bin/kubenswrapper from system_u:object_r:bin_t:s0 to system_u:object_r:kubelet_exec_t:s0 Jan 26 11:17:30 crc kubenswrapper[4867]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 26 11:17:30 crc kubenswrapper[4867]: Flag --minimum-container-ttl-duration has been deprecated, Use --eviction-hard or --eviction-soft instead. Will be removed in a future version. Jan 26 11:17:30 crc kubenswrapper[4867]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 26 11:17:30 crc kubenswrapper[4867]: Flag --register-with-taints has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Jan 26 11:17:30 crc kubenswrapper[4867]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jan 26 11:17:30 crc kubenswrapper[4867]: Flag --system-reserved has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 26 11:17:30 crc kubenswrapper[4867]: I0126 11:17:30.415741 4867 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 26 11:17:30 crc kubenswrapper[4867]: W0126 11:17:30.418871 4867 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Jan 26 11:17:30 crc kubenswrapper[4867]: W0126 11:17:30.418899 4867 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Jan 26 11:17:30 crc kubenswrapper[4867]: W0126 11:17:30.418904 4867 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Jan 26 11:17:30 crc kubenswrapper[4867]: W0126 11:17:30.418909 4867 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Jan 26 11:17:30 crc kubenswrapper[4867]: W0126 11:17:30.418915 4867 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Jan 26 11:17:30 crc kubenswrapper[4867]: W0126 11:17:30.418920 4867 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Jan 26 11:17:30 crc kubenswrapper[4867]: W0126 11:17:30.418924 4867 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Jan 26 11:17:30 crc kubenswrapper[4867]: W0126 11:17:30.418929 4867 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Jan 26 11:17:30 crc kubenswrapper[4867]: W0126 11:17:30.418935 4867 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Jan 26 11:17:30 crc kubenswrapper[4867]: W0126 11:17:30.418939 4867 
feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Jan 26 11:17:30 crc kubenswrapper[4867]: W0126 11:17:30.418945 4867 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Jan 26 11:17:30 crc kubenswrapper[4867]: W0126 11:17:30.418949 4867 feature_gate.go:330] unrecognized feature gate: Example Jan 26 11:17:30 crc kubenswrapper[4867]: W0126 11:17:30.418955 4867 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Jan 26 11:17:30 crc kubenswrapper[4867]: W0126 11:17:30.418960 4867 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Jan 26 11:17:30 crc kubenswrapper[4867]: W0126 11:17:30.418964 4867 feature_gate.go:330] unrecognized feature gate: PlatformOperators Jan 26 11:17:30 crc kubenswrapper[4867]: W0126 11:17:30.418968 4867 feature_gate.go:330] unrecognized feature gate: NewOLM Jan 26 11:17:30 crc kubenswrapper[4867]: W0126 11:17:30.418971 4867 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Jan 26 11:17:30 crc kubenswrapper[4867]: W0126 11:17:30.418975 4867 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Jan 26 11:17:30 crc kubenswrapper[4867]: W0126 11:17:30.418980 4867 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Jan 26 11:17:30 crc kubenswrapper[4867]: W0126 11:17:30.418985 4867 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Jan 26 11:17:30 crc kubenswrapper[4867]: W0126 11:17:30.418989 4867 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Jan 26 11:17:30 crc kubenswrapper[4867]: W0126 11:17:30.418993 4867 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Jan 26 11:17:30 crc kubenswrapper[4867]: W0126 11:17:30.418997 4867 feature_gate.go:330] unrecognized feature gate: SignatureStores Jan 26 11:17:30 crc kubenswrapper[4867]: W0126 11:17:30.419001 4867 feature_gate.go:330] unrecognized feature gate: 
VSphereMultiVCenters Jan 26 11:17:30 crc kubenswrapper[4867]: W0126 11:17:30.419005 4867 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Jan 26 11:17:30 crc kubenswrapper[4867]: W0126 11:17:30.419011 4867 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Jan 26 11:17:30 crc kubenswrapper[4867]: W0126 11:17:30.419018 4867 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Jan 26 11:17:30 crc kubenswrapper[4867]: W0126 11:17:30.419024 4867 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Jan 26 11:17:30 crc kubenswrapper[4867]: W0126 11:17:30.419029 4867 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Jan 26 11:17:30 crc kubenswrapper[4867]: W0126 11:17:30.419035 4867 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Jan 26 11:17:30 crc kubenswrapper[4867]: W0126 11:17:30.419040 4867 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Jan 26 11:17:30 crc kubenswrapper[4867]: W0126 11:17:30.419045 4867 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Jan 26 11:17:30 crc kubenswrapper[4867]: W0126 11:17:30.419049 4867 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Jan 26 11:17:30 crc kubenswrapper[4867]: W0126 11:17:30.419052 4867 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Jan 26 11:17:30 crc kubenswrapper[4867]: W0126 11:17:30.419057 4867 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Jan 26 11:17:30 crc kubenswrapper[4867]: W0126 11:17:30.419061 4867 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Jan 26 11:17:30 crc kubenswrapper[4867]: W0126 11:17:30.419066 4867 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Jan 26 11:17:30 crc kubenswrapper[4867]: W0126 11:17:30.419069 4867 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Jan 
26 11:17:30 crc kubenswrapper[4867]: W0126 11:17:30.419073 4867 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Jan 26 11:17:30 crc kubenswrapper[4867]: W0126 11:17:30.419077 4867 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Jan 26 11:17:30 crc kubenswrapper[4867]: W0126 11:17:30.419081 4867 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Jan 26 11:17:30 crc kubenswrapper[4867]: W0126 11:17:30.419086 4867 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Jan 26 11:17:30 crc kubenswrapper[4867]: W0126 11:17:30.419090 4867 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Jan 26 11:17:30 crc kubenswrapper[4867]: W0126 11:17:30.419095 4867 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Jan 26 11:17:30 crc kubenswrapper[4867]: W0126 11:17:30.419100 4867 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. 
Jan 26 11:17:30 crc kubenswrapper[4867]: W0126 11:17:30.419105 4867 feature_gate.go:330] unrecognized feature gate: GatewayAPI Jan 26 11:17:30 crc kubenswrapper[4867]: W0126 11:17:30.419109 4867 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Jan 26 11:17:30 crc kubenswrapper[4867]: W0126 11:17:30.419113 4867 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Jan 26 11:17:30 crc kubenswrapper[4867]: W0126 11:17:30.419116 4867 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Jan 26 11:17:30 crc kubenswrapper[4867]: W0126 11:17:30.419120 4867 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Jan 26 11:17:30 crc kubenswrapper[4867]: W0126 11:17:30.419124 4867 feature_gate.go:330] unrecognized feature gate: InsightsConfig Jan 26 11:17:30 crc kubenswrapper[4867]: W0126 11:17:30.419129 4867 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Jan 26 11:17:30 crc kubenswrapper[4867]: W0126 11:17:30.419132 4867 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Jan 26 11:17:30 crc kubenswrapper[4867]: W0126 11:17:30.419137 4867 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Jan 26 11:17:30 crc kubenswrapper[4867]: W0126 11:17:30.419141 4867 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Jan 26 11:17:30 crc kubenswrapper[4867]: W0126 11:17:30.419145 4867 feature_gate.go:330] unrecognized feature gate: OVNObservability Jan 26 11:17:30 crc kubenswrapper[4867]: W0126 11:17:30.419148 4867 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Jan 26 11:17:30 crc kubenswrapper[4867]: W0126 11:17:30.419152 4867 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Jan 26 11:17:30 crc kubenswrapper[4867]: W0126 11:17:30.419156 4867 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Jan 26 11:17:30 crc kubenswrapper[4867]: W0126 11:17:30.419159 4867 feature_gate.go:330] 
unrecognized feature gate: EtcdBackendQuota Jan 26 11:17:30 crc kubenswrapper[4867]: W0126 11:17:30.419163 4867 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Jan 26 11:17:30 crc kubenswrapper[4867]: W0126 11:17:30.419169 4867 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Jan 26 11:17:30 crc kubenswrapper[4867]: W0126 11:17:30.419173 4867 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Jan 26 11:17:30 crc kubenswrapper[4867]: W0126 11:17:30.419176 4867 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Jan 26 11:17:30 crc kubenswrapper[4867]: W0126 11:17:30.419180 4867 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Jan 26 11:17:30 crc kubenswrapper[4867]: W0126 11:17:30.419183 4867 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Jan 26 11:17:30 crc kubenswrapper[4867]: W0126 11:17:30.419187 4867 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Jan 26 11:17:30 crc kubenswrapper[4867]: W0126 11:17:30.419191 4867 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Jan 26 11:17:30 crc kubenswrapper[4867]: W0126 11:17:30.419195 4867 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Jan 26 11:17:30 crc kubenswrapper[4867]: W0126 11:17:30.419198 4867 feature_gate.go:330] unrecognized feature gate: PinnedImages Jan 26 11:17:30 crc kubenswrapper[4867]: W0126 11:17:30.419201 4867 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Jan 26 11:17:30 crc kubenswrapper[4867]: I0126 11:17:30.419463 4867 flags.go:64] FLAG: --address="0.0.0.0" Jan 26 11:17:30 crc kubenswrapper[4867]: I0126 11:17:30.419478 4867 flags.go:64] FLAG: --allowed-unsafe-sysctls="[]" Jan 26 11:17:30 crc kubenswrapper[4867]: I0126 11:17:30.419487 4867 flags.go:64] FLAG: --anonymous-auth="true" Jan 26 11:17:30 crc kubenswrapper[4867]: 
I0126 11:17:30.419494 4867 flags.go:64] FLAG: --application-metrics-count-limit="100" Jan 26 11:17:30 crc kubenswrapper[4867]: I0126 11:17:30.419504 4867 flags.go:64] FLAG: --authentication-token-webhook="false" Jan 26 11:17:30 crc kubenswrapper[4867]: I0126 11:17:30.419510 4867 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="2m0s" Jan 26 11:17:30 crc kubenswrapper[4867]: I0126 11:17:30.419517 4867 flags.go:64] FLAG: --authorization-mode="AlwaysAllow" Jan 26 11:17:30 crc kubenswrapper[4867]: I0126 11:17:30.419523 4867 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s" Jan 26 11:17:30 crc kubenswrapper[4867]: I0126 11:17:30.419528 4867 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="30s" Jan 26 11:17:30 crc kubenswrapper[4867]: I0126 11:17:30.419533 4867 flags.go:64] FLAG: --boot-id-file="/proc/sys/kernel/random/boot_id" Jan 26 11:17:30 crc kubenswrapper[4867]: I0126 11:17:30.419539 4867 flags.go:64] FLAG: --bootstrap-kubeconfig="/etc/kubernetes/kubeconfig" Jan 26 11:17:30 crc kubenswrapper[4867]: I0126 11:17:30.419543 4867 flags.go:64] FLAG: --cert-dir="/var/lib/kubelet/pki" Jan 26 11:17:30 crc kubenswrapper[4867]: I0126 11:17:30.419548 4867 flags.go:64] FLAG: --cgroup-driver="cgroupfs" Jan 26 11:17:30 crc kubenswrapper[4867]: I0126 11:17:30.419552 4867 flags.go:64] FLAG: --cgroup-root="" Jan 26 11:17:30 crc kubenswrapper[4867]: I0126 11:17:30.419556 4867 flags.go:64] FLAG: --cgroups-per-qos="true" Jan 26 11:17:30 crc kubenswrapper[4867]: I0126 11:17:30.419561 4867 flags.go:64] FLAG: --client-ca-file="" Jan 26 11:17:30 crc kubenswrapper[4867]: I0126 11:17:30.419565 4867 flags.go:64] FLAG: --cloud-config="" Jan 26 11:17:30 crc kubenswrapper[4867]: I0126 11:17:30.419570 4867 flags.go:64] FLAG: --cloud-provider="" Jan 26 11:17:30 crc kubenswrapper[4867]: I0126 11:17:30.419574 4867 flags.go:64] FLAG: --cluster-dns="[]" Jan 26 11:17:30 crc kubenswrapper[4867]: I0126 11:17:30.419579 4867 flags.go:64] FLAG: 
--cluster-domain="" Jan 26 11:17:30 crc kubenswrapper[4867]: I0126 11:17:30.419583 4867 flags.go:64] FLAG: --config="/etc/kubernetes/kubelet.conf" Jan 26 11:17:30 crc kubenswrapper[4867]: I0126 11:17:30.419588 4867 flags.go:64] FLAG: --config-dir="" Jan 26 11:17:30 crc kubenswrapper[4867]: I0126 11:17:30.419592 4867 flags.go:64] FLAG: --container-hints="/etc/cadvisor/container_hints.json" Jan 26 11:17:30 crc kubenswrapper[4867]: I0126 11:17:30.419598 4867 flags.go:64] FLAG: --container-log-max-files="5" Jan 26 11:17:30 crc kubenswrapper[4867]: I0126 11:17:30.419604 4867 flags.go:64] FLAG: --container-log-max-size="10Mi" Jan 26 11:17:30 crc kubenswrapper[4867]: I0126 11:17:30.419608 4867 flags.go:64] FLAG: --container-runtime-endpoint="/var/run/crio/crio.sock" Jan 26 11:17:30 crc kubenswrapper[4867]: I0126 11:17:30.419613 4867 flags.go:64] FLAG: --containerd="/run/containerd/containerd.sock" Jan 26 11:17:30 crc kubenswrapper[4867]: I0126 11:17:30.419617 4867 flags.go:64] FLAG: --containerd-namespace="k8s.io" Jan 26 11:17:30 crc kubenswrapper[4867]: I0126 11:17:30.419622 4867 flags.go:64] FLAG: --contention-profiling="false" Jan 26 11:17:30 crc kubenswrapper[4867]: I0126 11:17:30.419626 4867 flags.go:64] FLAG: --cpu-cfs-quota="true" Jan 26 11:17:30 crc kubenswrapper[4867]: I0126 11:17:30.419631 4867 flags.go:64] FLAG: --cpu-cfs-quota-period="100ms" Jan 26 11:17:30 crc kubenswrapper[4867]: I0126 11:17:30.419636 4867 flags.go:64] FLAG: --cpu-manager-policy="none" Jan 26 11:17:30 crc kubenswrapper[4867]: I0126 11:17:30.419640 4867 flags.go:64] FLAG: --cpu-manager-policy-options="" Jan 26 11:17:30 crc kubenswrapper[4867]: I0126 11:17:30.419646 4867 flags.go:64] FLAG: --cpu-manager-reconcile-period="10s" Jan 26 11:17:30 crc kubenswrapper[4867]: I0126 11:17:30.419650 4867 flags.go:64] FLAG: --enable-controller-attach-detach="true" Jan 26 11:17:30 crc kubenswrapper[4867]: I0126 11:17:30.419655 4867 flags.go:64] FLAG: --enable-debugging-handlers="true" Jan 26 11:17:30 crc 
kubenswrapper[4867]: I0126 11:17:30.419664 4867 flags.go:64] FLAG: --enable-load-reader="false" Jan 26 11:17:30 crc kubenswrapper[4867]: I0126 11:17:30.419670 4867 flags.go:64] FLAG: --enable-server="true" Jan 26 11:17:30 crc kubenswrapper[4867]: I0126 11:17:30.419675 4867 flags.go:64] FLAG: --enforce-node-allocatable="[pods]" Jan 26 11:17:30 crc kubenswrapper[4867]: I0126 11:17:30.419682 4867 flags.go:64] FLAG: --event-burst="100" Jan 26 11:17:30 crc kubenswrapper[4867]: I0126 11:17:30.419688 4867 flags.go:64] FLAG: --event-qps="50" Jan 26 11:17:30 crc kubenswrapper[4867]: I0126 11:17:30.419694 4867 flags.go:64] FLAG: --event-storage-age-limit="default=0" Jan 26 11:17:30 crc kubenswrapper[4867]: I0126 11:17:30.419698 4867 flags.go:64] FLAG: --event-storage-event-limit="default=0" Jan 26 11:17:30 crc kubenswrapper[4867]: I0126 11:17:30.419703 4867 flags.go:64] FLAG: --eviction-hard="" Jan 26 11:17:30 crc kubenswrapper[4867]: I0126 11:17:30.419709 4867 flags.go:64] FLAG: --eviction-max-pod-grace-period="0" Jan 26 11:17:30 crc kubenswrapper[4867]: I0126 11:17:30.419713 4867 flags.go:64] FLAG: --eviction-minimum-reclaim="" Jan 26 11:17:30 crc kubenswrapper[4867]: I0126 11:17:30.419717 4867 flags.go:64] FLAG: --eviction-pressure-transition-period="5m0s" Jan 26 11:17:30 crc kubenswrapper[4867]: I0126 11:17:30.419722 4867 flags.go:64] FLAG: --eviction-soft="" Jan 26 11:17:30 crc kubenswrapper[4867]: I0126 11:17:30.419727 4867 flags.go:64] FLAG: --eviction-soft-grace-period="" Jan 26 11:17:30 crc kubenswrapper[4867]: I0126 11:17:30.419731 4867 flags.go:64] FLAG: --exit-on-lock-contention="false" Jan 26 11:17:30 crc kubenswrapper[4867]: I0126 11:17:30.419735 4867 flags.go:64] FLAG: --experimental-allocatable-ignore-eviction="false" Jan 26 11:17:30 crc kubenswrapper[4867]: I0126 11:17:30.419739 4867 flags.go:64] FLAG: --experimental-mounter-path="" Jan 26 11:17:30 crc kubenswrapper[4867]: I0126 11:17:30.419744 4867 flags.go:64] FLAG: --fail-cgroupv1="false" Jan 26 11:17:30 
crc kubenswrapper[4867]: I0126 11:17:30.419748 4867 flags.go:64] FLAG: --fail-swap-on="true" Jan 26 11:17:30 crc kubenswrapper[4867]: I0126 11:17:30.419753 4867 flags.go:64] FLAG: --feature-gates="" Jan 26 11:17:30 crc kubenswrapper[4867]: I0126 11:17:30.419759 4867 flags.go:64] FLAG: --file-check-frequency="20s" Jan 26 11:17:30 crc kubenswrapper[4867]: I0126 11:17:30.419763 4867 flags.go:64] FLAG: --global-housekeeping-interval="1m0s" Jan 26 11:17:30 crc kubenswrapper[4867]: I0126 11:17:30.419768 4867 flags.go:64] FLAG: --hairpin-mode="promiscuous-bridge" Jan 26 11:17:30 crc kubenswrapper[4867]: I0126 11:17:30.419772 4867 flags.go:64] FLAG: --healthz-bind-address="127.0.0.1" Jan 26 11:17:30 crc kubenswrapper[4867]: I0126 11:17:30.419776 4867 flags.go:64] FLAG: --healthz-port="10248" Jan 26 11:17:30 crc kubenswrapper[4867]: I0126 11:17:30.419780 4867 flags.go:64] FLAG: --help="false" Jan 26 11:17:30 crc kubenswrapper[4867]: I0126 11:17:30.419785 4867 flags.go:64] FLAG: --hostname-override="" Jan 26 11:17:30 crc kubenswrapper[4867]: I0126 11:17:30.419789 4867 flags.go:64] FLAG: --housekeeping-interval="10s" Jan 26 11:17:30 crc kubenswrapper[4867]: I0126 11:17:30.419793 4867 flags.go:64] FLAG: --http-check-frequency="20s" Jan 26 11:17:30 crc kubenswrapper[4867]: I0126 11:17:30.419797 4867 flags.go:64] FLAG: --image-credential-provider-bin-dir="" Jan 26 11:17:30 crc kubenswrapper[4867]: I0126 11:17:30.419802 4867 flags.go:64] FLAG: --image-credential-provider-config="" Jan 26 11:17:30 crc kubenswrapper[4867]: I0126 11:17:30.419805 4867 flags.go:64] FLAG: --image-gc-high-threshold="85" Jan 26 11:17:30 crc kubenswrapper[4867]: I0126 11:17:30.419810 4867 flags.go:64] FLAG: --image-gc-low-threshold="80" Jan 26 11:17:30 crc kubenswrapper[4867]: I0126 11:17:30.419816 4867 flags.go:64] FLAG: --image-service-endpoint="" Jan 26 11:17:30 crc kubenswrapper[4867]: I0126 11:17:30.419821 4867 flags.go:64] FLAG: --kernel-memcg-notification="false" Jan 26 11:17:30 crc 
kubenswrapper[4867]: I0126 11:17:30.419826 4867 flags.go:64] FLAG: --kube-api-burst="100" Jan 26 11:17:30 crc kubenswrapper[4867]: I0126 11:17:30.419830 4867 flags.go:64] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf" Jan 26 11:17:30 crc kubenswrapper[4867]: I0126 11:17:30.419835 4867 flags.go:64] FLAG: --kube-api-qps="50" Jan 26 11:17:30 crc kubenswrapper[4867]: I0126 11:17:30.419840 4867 flags.go:64] FLAG: --kube-reserved="" Jan 26 11:17:30 crc kubenswrapper[4867]: I0126 11:17:30.419845 4867 flags.go:64] FLAG: --kube-reserved-cgroup="" Jan 26 11:17:30 crc kubenswrapper[4867]: I0126 11:17:30.419849 4867 flags.go:64] FLAG: --kubeconfig="/var/lib/kubelet/kubeconfig" Jan 26 11:17:30 crc kubenswrapper[4867]: I0126 11:17:30.419853 4867 flags.go:64] FLAG: --kubelet-cgroups="" Jan 26 11:17:30 crc kubenswrapper[4867]: I0126 11:17:30.419857 4867 flags.go:64] FLAG: --local-storage-capacity-isolation="true" Jan 26 11:17:30 crc kubenswrapper[4867]: I0126 11:17:30.419862 4867 flags.go:64] FLAG: --lock-file="" Jan 26 11:17:30 crc kubenswrapper[4867]: I0126 11:17:30.419866 4867 flags.go:64] FLAG: --log-cadvisor-usage="false" Jan 26 11:17:30 crc kubenswrapper[4867]: I0126 11:17:30.419870 4867 flags.go:64] FLAG: --log-flush-frequency="5s" Jan 26 11:17:30 crc kubenswrapper[4867]: I0126 11:17:30.419874 4867 flags.go:64] FLAG: --log-json-info-buffer-size="0" Jan 26 11:17:30 crc kubenswrapper[4867]: I0126 11:17:30.419881 4867 flags.go:64] FLAG: --log-json-split-stream="false" Jan 26 11:17:30 crc kubenswrapper[4867]: I0126 11:17:30.419885 4867 flags.go:64] FLAG: --log-text-info-buffer-size="0" Jan 26 11:17:30 crc kubenswrapper[4867]: I0126 11:17:30.419890 4867 flags.go:64] FLAG: --log-text-split-stream="false" Jan 26 11:17:30 crc kubenswrapper[4867]: I0126 11:17:30.419894 4867 flags.go:64] FLAG: --logging-format="text" Jan 26 11:17:30 crc kubenswrapper[4867]: I0126 11:17:30.419898 4867 flags.go:64] FLAG: --machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id" 
Jan 26 11:17:30 crc kubenswrapper[4867]: I0126 11:17:30.419904 4867 flags.go:64] FLAG: --make-iptables-util-chains="true" Jan 26 11:17:30 crc kubenswrapper[4867]: I0126 11:17:30.419917 4867 flags.go:64] FLAG: --manifest-url="" Jan 26 11:17:30 crc kubenswrapper[4867]: I0126 11:17:30.419922 4867 flags.go:64] FLAG: --manifest-url-header="" Jan 26 11:17:30 crc kubenswrapper[4867]: I0126 11:17:30.419928 4867 flags.go:64] FLAG: --max-housekeeping-interval="15s" Jan 26 11:17:30 crc kubenswrapper[4867]: I0126 11:17:30.419934 4867 flags.go:64] FLAG: --max-open-files="1000000" Jan 26 11:17:30 crc kubenswrapper[4867]: I0126 11:17:30.419941 4867 flags.go:64] FLAG: --max-pods="110" Jan 26 11:17:30 crc kubenswrapper[4867]: I0126 11:17:30.419947 4867 flags.go:64] FLAG: --maximum-dead-containers="-1" Jan 26 11:17:30 crc kubenswrapper[4867]: I0126 11:17:30.419952 4867 flags.go:64] FLAG: --maximum-dead-containers-per-container="1" Jan 26 11:17:30 crc kubenswrapper[4867]: I0126 11:17:30.419971 4867 flags.go:64] FLAG: --memory-manager-policy="None" Jan 26 11:17:30 crc kubenswrapper[4867]: I0126 11:17:30.419975 4867 flags.go:64] FLAG: --minimum-container-ttl-duration="6m0s" Jan 26 11:17:30 crc kubenswrapper[4867]: I0126 11:17:30.419980 4867 flags.go:64] FLAG: --minimum-image-ttl-duration="2m0s" Jan 26 11:17:30 crc kubenswrapper[4867]: I0126 11:17:30.419985 4867 flags.go:64] FLAG: --node-ip="192.168.126.11" Jan 26 11:17:30 crc kubenswrapper[4867]: I0126 11:17:30.419991 4867 flags.go:64] FLAG: --node-labels="node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.openshift.io/os_id=rhcos" Jan 26 11:17:30 crc kubenswrapper[4867]: I0126 11:17:30.420007 4867 flags.go:64] FLAG: --node-status-max-images="50" Jan 26 11:17:30 crc kubenswrapper[4867]: I0126 11:17:30.420014 4867 flags.go:64] FLAG: --node-status-update-frequency="10s" Jan 26 11:17:30 crc kubenswrapper[4867]: I0126 11:17:30.420019 4867 flags.go:64] FLAG: --oom-score-adj="-999" Jan 26 11:17:30 crc 
kubenswrapper[4867]: I0126 11:17:30.420025 4867 flags.go:64] FLAG: --pod-cidr="" Jan 26 11:17:30 crc kubenswrapper[4867]: I0126 11:17:30.420030 4867 flags.go:64] FLAG: --pod-infra-container-image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:33549946e22a9ffa738fd94b1345f90921bc8f92fa6137784cb33c77ad806f9d" Jan 26 11:17:30 crc kubenswrapper[4867]: I0126 11:17:30.420039 4867 flags.go:64] FLAG: --pod-manifest-path="" Jan 26 11:17:30 crc kubenswrapper[4867]: I0126 11:17:30.420044 4867 flags.go:64] FLAG: --pod-max-pids="-1" Jan 26 11:17:30 crc kubenswrapper[4867]: I0126 11:17:30.420050 4867 flags.go:64] FLAG: --pods-per-core="0" Jan 26 11:17:30 crc kubenswrapper[4867]: I0126 11:17:30.420057 4867 flags.go:64] FLAG: --port="10250" Jan 26 11:17:30 crc kubenswrapper[4867]: I0126 11:17:30.420065 4867 flags.go:64] FLAG: --protect-kernel-defaults="false" Jan 26 11:17:30 crc kubenswrapper[4867]: I0126 11:17:30.420072 4867 flags.go:64] FLAG: --provider-id="" Jan 26 11:17:30 crc kubenswrapper[4867]: I0126 11:17:30.420078 4867 flags.go:64] FLAG: --qos-reserved="" Jan 26 11:17:30 crc kubenswrapper[4867]: I0126 11:17:30.420084 4867 flags.go:64] FLAG: --read-only-port="10255" Jan 26 11:17:30 crc kubenswrapper[4867]: I0126 11:17:30.420089 4867 flags.go:64] FLAG: --register-node="true" Jan 26 11:17:30 crc kubenswrapper[4867]: I0126 11:17:30.420095 4867 flags.go:64] FLAG: --register-schedulable="true" Jan 26 11:17:30 crc kubenswrapper[4867]: I0126 11:17:30.420100 4867 flags.go:64] FLAG: --register-with-taints="node-role.kubernetes.io/master=:NoSchedule" Jan 26 11:17:30 crc kubenswrapper[4867]: I0126 11:17:30.420109 4867 flags.go:64] FLAG: --registry-burst="10" Jan 26 11:17:30 crc kubenswrapper[4867]: I0126 11:17:30.420113 4867 flags.go:64] FLAG: --registry-qps="5" Jan 26 11:17:30 crc kubenswrapper[4867]: I0126 11:17:30.420117 4867 flags.go:64] FLAG: --reserved-cpus="" Jan 26 11:17:30 crc kubenswrapper[4867]: I0126 11:17:30.420122 4867 flags.go:64] FLAG: --reserved-memory="" Jan 
26 11:17:30 crc kubenswrapper[4867]: I0126 11:17:30.420128 4867 flags.go:64] FLAG: --resolv-conf="/etc/resolv.conf" Jan 26 11:17:30 crc kubenswrapper[4867]: I0126 11:17:30.420132 4867 flags.go:64] FLAG: --root-dir="/var/lib/kubelet" Jan 26 11:17:30 crc kubenswrapper[4867]: I0126 11:17:30.420136 4867 flags.go:64] FLAG: --rotate-certificates="false" Jan 26 11:17:30 crc kubenswrapper[4867]: I0126 11:17:30.420141 4867 flags.go:64] FLAG: --rotate-server-certificates="false" Jan 26 11:17:30 crc kubenswrapper[4867]: I0126 11:17:30.420146 4867 flags.go:64] FLAG: --runonce="false" Jan 26 11:17:30 crc kubenswrapper[4867]: I0126 11:17:30.420150 4867 flags.go:64] FLAG: --runtime-cgroups="/system.slice/crio.service" Jan 26 11:17:30 crc kubenswrapper[4867]: I0126 11:17:30.420154 4867 flags.go:64] FLAG: --runtime-request-timeout="2m0s" Jan 26 11:17:30 crc kubenswrapper[4867]: I0126 11:17:30.420159 4867 flags.go:64] FLAG: --seccomp-default="false" Jan 26 11:17:30 crc kubenswrapper[4867]: I0126 11:17:30.420163 4867 flags.go:64] FLAG: --serialize-image-pulls="true" Jan 26 11:17:30 crc kubenswrapper[4867]: I0126 11:17:30.420167 4867 flags.go:64] FLAG: --storage-driver-buffer-duration="1m0s" Jan 26 11:17:30 crc kubenswrapper[4867]: I0126 11:17:30.420172 4867 flags.go:64] FLAG: --storage-driver-db="cadvisor" Jan 26 11:17:30 crc kubenswrapper[4867]: I0126 11:17:30.420177 4867 flags.go:64] FLAG: --storage-driver-host="localhost:8086" Jan 26 11:17:30 crc kubenswrapper[4867]: I0126 11:17:30.420181 4867 flags.go:64] FLAG: --storage-driver-password="root" Jan 26 11:17:30 crc kubenswrapper[4867]: I0126 11:17:30.420186 4867 flags.go:64] FLAG: --storage-driver-secure="false" Jan 26 11:17:30 crc kubenswrapper[4867]: I0126 11:17:30.420191 4867 flags.go:64] FLAG: --storage-driver-table="stats" Jan 26 11:17:30 crc kubenswrapper[4867]: I0126 11:17:30.420196 4867 flags.go:64] FLAG: --storage-driver-user="root" Jan 26 11:17:30 crc kubenswrapper[4867]: I0126 11:17:30.420200 4867 flags.go:64] FLAG: 
--streaming-connection-idle-timeout="4h0m0s" Jan 26 11:17:30 crc kubenswrapper[4867]: I0126 11:17:30.420205 4867 flags.go:64] FLAG: --sync-frequency="1m0s" Jan 26 11:17:30 crc kubenswrapper[4867]: I0126 11:17:30.420209 4867 flags.go:64] FLAG: --system-cgroups="" Jan 26 11:17:30 crc kubenswrapper[4867]: I0126 11:17:30.420214 4867 flags.go:64] FLAG: --system-reserved="cpu=200m,ephemeral-storage=350Mi,memory=350Mi" Jan 26 11:17:30 crc kubenswrapper[4867]: I0126 11:17:30.420238 4867 flags.go:64] FLAG: --system-reserved-cgroup="" Jan 26 11:17:30 crc kubenswrapper[4867]: I0126 11:17:30.420242 4867 flags.go:64] FLAG: --tls-cert-file="" Jan 26 11:17:30 crc kubenswrapper[4867]: I0126 11:17:30.420246 4867 flags.go:64] FLAG: --tls-cipher-suites="[]" Jan 26 11:17:30 crc kubenswrapper[4867]: I0126 11:17:30.420251 4867 flags.go:64] FLAG: --tls-min-version="" Jan 26 11:17:30 crc kubenswrapper[4867]: I0126 11:17:30.420257 4867 flags.go:64] FLAG: --tls-private-key-file="" Jan 26 11:17:30 crc kubenswrapper[4867]: I0126 11:17:30.420261 4867 flags.go:64] FLAG: --topology-manager-policy="none" Jan 26 11:17:30 crc kubenswrapper[4867]: I0126 11:17:30.420265 4867 flags.go:64] FLAG: --topology-manager-policy-options="" Jan 26 11:17:30 crc kubenswrapper[4867]: I0126 11:17:30.420270 4867 flags.go:64] FLAG: --topology-manager-scope="container" Jan 26 11:17:30 crc kubenswrapper[4867]: I0126 11:17:30.420274 4867 flags.go:64] FLAG: --v="2" Jan 26 11:17:30 crc kubenswrapper[4867]: I0126 11:17:30.420280 4867 flags.go:64] FLAG: --version="false" Jan 26 11:17:30 crc kubenswrapper[4867]: I0126 11:17:30.420286 4867 flags.go:64] FLAG: --vmodule="" Jan 26 11:17:30 crc kubenswrapper[4867]: I0126 11:17:30.420292 4867 flags.go:64] FLAG: --volume-plugin-dir="/etc/kubernetes/kubelet-plugins/volume/exec" Jan 26 11:17:30 crc kubenswrapper[4867]: I0126 11:17:30.420297 4867 flags.go:64] FLAG: --volume-stats-agg-period="1m0s" Jan 26 11:17:30 crc kubenswrapper[4867]: W0126 11:17:30.420446 4867 feature_gate.go:330] 
unrecognized feature gate: SignatureStores Jan 26 11:17:30 crc kubenswrapper[4867]: W0126 11:17:30.420451 4867 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Jan 26 11:17:30 crc kubenswrapper[4867]: W0126 11:17:30.420456 4867 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Jan 26 11:17:30 crc kubenswrapper[4867]: W0126 11:17:30.420460 4867 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Jan 26 11:17:30 crc kubenswrapper[4867]: W0126 11:17:30.420464 4867 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Jan 26 11:17:30 crc kubenswrapper[4867]: W0126 11:17:30.420468 4867 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Jan 26 11:17:30 crc kubenswrapper[4867]: W0126 11:17:30.420472 4867 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Jan 26 11:17:30 crc kubenswrapper[4867]: W0126 11:17:30.420477 4867 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Jan 26 11:17:30 crc kubenswrapper[4867]: W0126 11:17:30.420482 4867 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Jan 26 11:17:30 crc kubenswrapper[4867]: W0126 11:17:30.420486 4867 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Jan 26 11:17:30 crc kubenswrapper[4867]: W0126 11:17:30.420490 4867 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Jan 26 11:17:30 crc kubenswrapper[4867]: W0126 11:17:30.420495 4867 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Jan 26 11:17:30 crc kubenswrapper[4867]: W0126 11:17:30.420508 4867 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. 
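The `FLAG:` entries above record every kubelet command-line flag together with its effective value. A minimal sketch for flattening them into a `name="value"` list, assuming the journal text has been saved to a file (here hypothetically named `kubelet.log`, e.g. via `journalctl -u kubelet > kubelet.log`):

```shell
# Extract each 'FLAG: --name="value"' token from the saved kubelet journal
# and strip the 'FLAG: --' prefix, leaving one name="value" pair per line.
grep -o 'FLAG: --[a-z-]*="[^"]*"' kubelet.log \
  | sed 's/^FLAG: --//'
```

Because the journal wraps entries arbitrarily, matching the `FLAG: --name="value"` token directly is more robust than splitting on line boundaries.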
Jan 26 11:17:30 crc kubenswrapper[4867]: W0126 11:17:30.420513 4867 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Jan 26 11:17:30 crc kubenswrapper[4867]: W0126 11:17:30.420516 4867 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Jan 26 11:17:30 crc kubenswrapper[4867]: W0126 11:17:30.420521 4867 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Jan 26 11:17:30 crc kubenswrapper[4867]: W0126 11:17:30.420525 4867 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Jan 26 11:17:30 crc kubenswrapper[4867]: W0126 11:17:30.420529 4867 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Jan 26 11:17:30 crc kubenswrapper[4867]: W0126 11:17:30.420533 4867 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Jan 26 11:17:30 crc kubenswrapper[4867]: W0126 11:17:30.420537 4867 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Jan 26 11:17:30 crc kubenswrapper[4867]: W0126 11:17:30.420542 4867 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Jan 26 11:17:30 crc kubenswrapper[4867]: W0126 11:17:30.420545 4867 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Jan 26 11:17:30 crc kubenswrapper[4867]: W0126 11:17:30.420549 4867 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Jan 26 11:17:30 crc kubenswrapper[4867]: W0126 11:17:30.420552 4867 feature_gate.go:330] unrecognized feature gate: NewOLM Jan 26 11:17:30 crc kubenswrapper[4867]: W0126 11:17:30.420556 4867 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Jan 26 11:17:30 crc kubenswrapper[4867]: W0126 11:17:30.420560 4867 feature_gate.go:330] unrecognized feature gate: Example Jan 26 11:17:30 crc kubenswrapper[4867]: W0126 11:17:30.420564 4867 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Jan 26 11:17:30 crc kubenswrapper[4867]: W0126 11:17:30.420568 4867 feature_gate.go:330] unrecognized 
feature gate: EtcdBackendQuota Jan 26 11:17:30 crc kubenswrapper[4867]: W0126 11:17:30.420572 4867 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Jan 26 11:17:30 crc kubenswrapper[4867]: W0126 11:17:30.420576 4867 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Jan 26 11:17:30 crc kubenswrapper[4867]: W0126 11:17:30.420580 4867 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Jan 26 11:17:30 crc kubenswrapper[4867]: W0126 11:17:30.420583 4867 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Jan 26 11:17:30 crc kubenswrapper[4867]: W0126 11:17:30.420588 4867 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Jan 26 11:17:30 crc kubenswrapper[4867]: W0126 11:17:30.420592 4867 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Jan 26 11:17:30 crc kubenswrapper[4867]: W0126 11:17:30.420596 4867 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Jan 26 11:17:30 crc kubenswrapper[4867]: W0126 11:17:30.420600 4867 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Jan 26 11:17:30 crc kubenswrapper[4867]: W0126 11:17:30.420604 4867 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Jan 26 11:17:30 crc kubenswrapper[4867]: W0126 11:17:30.420608 4867 feature_gate.go:330] unrecognized feature gate: OVNObservability Jan 26 11:17:30 crc kubenswrapper[4867]: W0126 11:17:30.420612 4867 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Jan 26 11:17:30 crc kubenswrapper[4867]: W0126 11:17:30.420617 4867 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Jan 26 11:17:30 crc kubenswrapper[4867]: W0126 11:17:30.420621 4867 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Jan 26 11:17:30 crc kubenswrapper[4867]: W0126 11:17:30.420625 4867 feature_gate.go:330] unrecognized feature gate: 
AutomatedEtcdBackup Jan 26 11:17:30 crc kubenswrapper[4867]: W0126 11:17:30.420629 4867 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Jan 26 11:17:30 crc kubenswrapper[4867]: W0126 11:17:30.420633 4867 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Jan 26 11:17:30 crc kubenswrapper[4867]: W0126 11:17:30.420647 4867 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Jan 26 11:17:30 crc kubenswrapper[4867]: W0126 11:17:30.420651 4867 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Jan 26 11:17:30 crc kubenswrapper[4867]: W0126 11:17:30.420655 4867 feature_gate.go:330] unrecognized feature gate: GatewayAPI Jan 26 11:17:30 crc kubenswrapper[4867]: W0126 11:17:30.420659 4867 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Jan 26 11:17:30 crc kubenswrapper[4867]: W0126 11:17:30.420663 4867 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Jan 26 11:17:30 crc kubenswrapper[4867]: W0126 11:17:30.420668 4867 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. 
Jan 26 11:17:30 crc kubenswrapper[4867]: W0126 11:17:30.420673 4867 feature_gate.go:330] unrecognized feature gate: InsightsConfig Jan 26 11:17:30 crc kubenswrapper[4867]: W0126 11:17:30.420677 4867 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Jan 26 11:17:30 crc kubenswrapper[4867]: W0126 11:17:30.420682 4867 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Jan 26 11:17:30 crc kubenswrapper[4867]: W0126 11:17:30.420687 4867 feature_gate.go:330] unrecognized feature gate: PlatformOperators Jan 26 11:17:30 crc kubenswrapper[4867]: W0126 11:17:30.420693 4867 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Jan 26 11:17:30 crc kubenswrapper[4867]: W0126 11:17:30.420697 4867 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Jan 26 11:17:30 crc kubenswrapper[4867]: W0126 11:17:30.420703 4867 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Jan 26 11:17:30 crc kubenswrapper[4867]: W0126 11:17:30.420707 4867 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Jan 26 11:17:30 crc kubenswrapper[4867]: W0126 11:17:30.420713 4867 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
Jan 26 11:17:30 crc kubenswrapper[4867]: W0126 11:17:30.420718 4867 feature_gate.go:330] unrecognized feature gate: PinnedImages Jan 26 11:17:30 crc kubenswrapper[4867]: W0126 11:17:30.420722 4867 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Jan 26 11:17:30 crc kubenswrapper[4867]: W0126 11:17:30.420726 4867 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Jan 26 11:17:30 crc kubenswrapper[4867]: W0126 11:17:30.420730 4867 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Jan 26 11:17:30 crc kubenswrapper[4867]: W0126 11:17:30.420734 4867 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Jan 26 11:17:30 crc kubenswrapper[4867]: W0126 11:17:30.420738 4867 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Jan 26 11:17:30 crc kubenswrapper[4867]: W0126 11:17:30.420742 4867 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Jan 26 11:17:30 crc kubenswrapper[4867]: W0126 11:17:30.420746 4867 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Jan 26 11:17:30 crc kubenswrapper[4867]: W0126 11:17:30.420750 4867 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Jan 26 11:17:30 crc kubenswrapper[4867]: W0126 11:17:30.420754 4867 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Jan 26 11:17:30 crc kubenswrapper[4867]: W0126 11:17:30.420757 4867 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Jan 26 11:17:30 crc kubenswrapper[4867]: W0126 11:17:30.420761 4867 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Jan 26 11:17:30 crc kubenswrapper[4867]: I0126 11:17:30.420897 4867 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false 
RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Jan 26 11:17:30 crc kubenswrapper[4867]: I0126 11:17:30.429852 4867 server.go:491] "Kubelet version" kubeletVersion="v1.31.5" Jan 26 11:17:30 crc kubenswrapper[4867]: I0126 11:17:30.429895 4867 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 26 11:17:30 crc kubenswrapper[4867]: W0126 11:17:30.429967 4867 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Jan 26 11:17:30 crc kubenswrapper[4867]: W0126 11:17:30.429976 4867 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Jan 26 11:17:30 crc kubenswrapper[4867]: W0126 11:17:30.429980 4867 feature_gate.go:330] unrecognized feature gate: PinnedImages Jan 26 11:17:30 crc kubenswrapper[4867]: W0126 11:17:30.429984 4867 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Jan 26 11:17:30 crc kubenswrapper[4867]: W0126 11:17:30.429989 4867 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Jan 26 11:17:30 crc kubenswrapper[4867]: W0126 11:17:30.429993 4867 feature_gate.go:330] unrecognized feature gate: InsightsConfig Jan 26 11:17:30 crc kubenswrapper[4867]: W0126 11:17:30.429996 4867 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Jan 26 11:17:30 crc kubenswrapper[4867]: W0126 11:17:30.430000 4867 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Jan 26 11:17:30 crc kubenswrapper[4867]: W0126 11:17:30.430004 4867 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Jan 26 11:17:30 crc kubenswrapper[4867]: W0126 11:17:30.430008 4867 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Jan 26 11:17:30 crc kubenswrapper[4867]: W0126 11:17:30.430013 4867 
feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Jan 26 11:17:30 crc kubenswrapper[4867]: W0126 11:17:30.430017 4867 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Jan 26 11:17:30 crc kubenswrapper[4867]: W0126 11:17:30.430023 4867 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Jan 26 11:17:30 crc kubenswrapper[4867]: W0126 11:17:30.430030 4867 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Jan 26 11:17:30 crc kubenswrapper[4867]: W0126 11:17:30.430036 4867 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Jan 26 11:17:30 crc kubenswrapper[4867]: W0126 11:17:30.430042 4867 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Jan 26 11:17:30 crc kubenswrapper[4867]: W0126 11:17:30.430047 4867 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Jan 26 11:17:30 crc kubenswrapper[4867]: W0126 11:17:30.430051 4867 feature_gate.go:330] unrecognized feature gate: PlatformOperators Jan 26 11:17:30 crc kubenswrapper[4867]: W0126 11:17:30.430057 4867 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Jan 26 11:17:30 crc kubenswrapper[4867]: W0126 11:17:30.430061 4867 feature_gate.go:330] unrecognized feature gate: GatewayAPI Jan 26 11:17:30 crc kubenswrapper[4867]: W0126 11:17:30.430065 4867 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Jan 26 11:17:30 crc kubenswrapper[4867]: W0126 11:17:30.430068 4867 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Jan 26 11:17:30 crc kubenswrapper[4867]: W0126 11:17:30.430073 4867 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
Jan 26 11:17:30 crc kubenswrapper[4867]: W0126 11:17:30.430077 4867 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Jan 26 11:17:30 crc kubenswrapper[4867]: W0126 11:17:30.430081 4867 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Jan 26 11:17:30 crc kubenswrapper[4867]: W0126 11:17:30.430085 4867 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Jan 26 11:17:30 crc kubenswrapper[4867]: W0126 11:17:30.430088 4867 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Jan 26 11:17:30 crc kubenswrapper[4867]: W0126 11:17:30.430092 4867 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Jan 26 11:17:30 crc kubenswrapper[4867]: W0126 11:17:30.430095 4867 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Jan 26 11:17:30 crc kubenswrapper[4867]: W0126 11:17:30.430099 4867 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Jan 26 11:17:30 crc kubenswrapper[4867]: W0126 11:17:30.430102 4867 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Jan 26 11:17:30 crc kubenswrapper[4867]: W0126 11:17:30.430106 4867 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Jan 26 11:17:30 crc kubenswrapper[4867]: W0126 11:17:30.430109 4867 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Jan 26 11:17:30 crc kubenswrapper[4867]: W0126 11:17:30.430112 4867 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Jan 26 11:17:30 crc kubenswrapper[4867]: W0126 11:17:30.430116 4867 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Jan 26 11:17:30 crc kubenswrapper[4867]: W0126 11:17:30.430119 4867 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Jan 26 11:17:30 crc kubenswrapper[4867]: W0126 11:17:30.430124 4867 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. 
Jan 26 11:17:30 crc kubenswrapper[4867]: W0126 11:17:30.430129 4867 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Jan 26 11:17:30 crc kubenswrapper[4867]: W0126 11:17:30.430133 4867 feature_gate.go:330] unrecognized feature gate: NewOLM Jan 26 11:17:30 crc kubenswrapper[4867]: W0126 11:17:30.430137 4867 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Jan 26 11:17:30 crc kubenswrapper[4867]: W0126 11:17:30.430141 4867 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Jan 26 11:17:30 crc kubenswrapper[4867]: W0126 11:17:30.430145 4867 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Jan 26 11:17:30 crc kubenswrapper[4867]: W0126 11:17:30.430148 4867 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Jan 26 11:17:30 crc kubenswrapper[4867]: W0126 11:17:30.430152 4867 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Jan 26 11:17:30 crc kubenswrapper[4867]: W0126 11:17:30.430155 4867 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Jan 26 11:17:30 crc kubenswrapper[4867]: W0126 11:17:30.430159 4867 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Jan 26 11:17:30 crc kubenswrapper[4867]: W0126 11:17:30.430162 4867 feature_gate.go:330] unrecognized feature gate: Example Jan 26 11:17:30 crc kubenswrapper[4867]: W0126 11:17:30.430165 4867 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Jan 26 11:17:30 crc kubenswrapper[4867]: W0126 11:17:30.430172 4867 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Jan 26 11:17:30 crc kubenswrapper[4867]: W0126 11:17:30.430175 4867 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Jan 26 11:17:30 crc kubenswrapper[4867]: W0126 11:17:30.430178 4867 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Jan 26 11:17:30 crc kubenswrapper[4867]: W0126 11:17:30.430182 4867 feature_gate.go:330] 
unrecognized feature gate: VolumeGroupSnapshot Jan 26 11:17:30 crc kubenswrapper[4867]: W0126 11:17:30.430185 4867 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Jan 26 11:17:30 crc kubenswrapper[4867]: W0126 11:17:30.430188 4867 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Jan 26 11:17:30 crc kubenswrapper[4867]: W0126 11:17:30.430192 4867 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Jan 26 11:17:30 crc kubenswrapper[4867]: W0126 11:17:30.430196 4867 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Jan 26 11:17:30 crc kubenswrapper[4867]: W0126 11:17:30.430200 4867 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Jan 26 11:17:30 crc kubenswrapper[4867]: W0126 11:17:30.430203 4867 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Jan 26 11:17:30 crc kubenswrapper[4867]: W0126 11:17:30.430207 4867 feature_gate.go:330] unrecognized feature gate: SignatureStores Jan 26 11:17:30 crc kubenswrapper[4867]: W0126 11:17:30.430210 4867 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Jan 26 11:17:30 crc kubenswrapper[4867]: W0126 11:17:30.430214 4867 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Jan 26 11:17:30 crc kubenswrapper[4867]: W0126 11:17:30.430232 4867 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Jan 26 11:17:30 crc kubenswrapper[4867]: W0126 11:17:30.430236 4867 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Jan 26 11:17:30 crc kubenswrapper[4867]: W0126 11:17:30.430239 4867 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Jan 26 11:17:30 crc kubenswrapper[4867]: W0126 11:17:30.430242 4867 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Jan 26 11:17:30 crc kubenswrapper[4867]: W0126 11:17:30.430246 4867 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Jan 26 11:17:30 crc 
kubenswrapper[4867]: W0126 11:17:30.430249 4867 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Jan 26 11:17:30 crc kubenswrapper[4867]: W0126 11:17:30.430254 4867 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Jan 26 11:17:30 crc kubenswrapper[4867]: W0126 11:17:30.430258 4867 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Jan 26 11:17:30 crc kubenswrapper[4867]: W0126 11:17:30.430262 4867 feature_gate.go:330] unrecognized feature gate: OVNObservability Jan 26 11:17:30 crc kubenswrapper[4867]: W0126 11:17:30.430266 4867 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Jan 26 11:17:30 crc kubenswrapper[4867]: I0126 11:17:30.430273 4867 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Jan 26 11:17:30 crc kubenswrapper[4867]: W0126 11:17:30.431210 4867 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Jan 26 11:17:30 crc kubenswrapper[4867]: W0126 11:17:30.431249 4867 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Jan 26 11:17:30 crc kubenswrapper[4867]: W0126 11:17:30.431254 4867 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Jan 26 11:17:30 crc kubenswrapper[4867]: W0126 11:17:30.431259 4867 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Jan 26 11:17:30 crc kubenswrapper[4867]: W0126 11:17:30.431267 4867 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. 
It will be removed in a future release. Jan 26 11:17:30 crc kubenswrapper[4867]: W0126 11:17:30.431274 4867 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Jan 26 11:17:30 crc kubenswrapper[4867]: W0126 11:17:30.431279 4867 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Jan 26 11:17:30 crc kubenswrapper[4867]: W0126 11:17:30.431285 4867 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Jan 26 11:17:30 crc kubenswrapper[4867]: W0126 11:17:30.431290 4867 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Jan 26 11:17:30 crc kubenswrapper[4867]: W0126 11:17:30.431294 4867 feature_gate.go:330] unrecognized feature gate: InsightsConfig Jan 26 11:17:30 crc kubenswrapper[4867]: W0126 11:17:30.431305 4867 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Jan 26 11:17:30 crc kubenswrapper[4867]: W0126 11:17:30.431313 4867 feature_gate.go:330] unrecognized feature gate: Example Jan 26 11:17:30 crc kubenswrapper[4867]: W0126 11:17:30.431317 4867 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Jan 26 11:17:30 crc kubenswrapper[4867]: W0126 11:17:30.431322 4867 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Jan 26 11:17:30 crc kubenswrapper[4867]: W0126 11:17:30.431326 4867 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Jan 26 11:17:30 crc kubenswrapper[4867]: W0126 11:17:30.431330 4867 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Jan 26 11:17:30 crc kubenswrapper[4867]: W0126 11:17:30.431335 4867 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Jan 26 11:17:30 crc kubenswrapper[4867]: W0126 11:17:30.431339 4867 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Jan 26 11:17:30 crc kubenswrapper[4867]: W0126 11:17:30.431347 4867 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Jan 
26 11:17:30 crc kubenswrapper[4867]: W0126 11:17:30.431351 4867 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Jan 26 11:17:30 crc kubenswrapper[4867]: W0126 11:17:30.431356 4867 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Jan 26 11:17:30 crc kubenswrapper[4867]: W0126 11:17:30.431360 4867 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Jan 26 11:17:30 crc kubenswrapper[4867]: W0126 11:17:30.431367 4867 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Jan 26 11:17:30 crc kubenswrapper[4867]: W0126 11:17:30.431378 4867 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Jan 26 11:17:30 crc kubenswrapper[4867]: W0126 11:17:30.431383 4867 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Jan 26 11:17:30 crc kubenswrapper[4867]: W0126 11:17:30.431389 4867 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Jan 26 11:17:30 crc kubenswrapper[4867]: W0126 11:17:30.431394 4867 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Jan 26 11:17:30 crc kubenswrapper[4867]: W0126 11:17:30.431399 4867 feature_gate.go:330] unrecognized feature gate: SignatureStores Jan 26 11:17:30 crc kubenswrapper[4867]: W0126 11:17:30.431404 4867 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Jan 26 11:17:30 crc kubenswrapper[4867]: W0126 11:17:30.431409 4867 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Jan 26 11:17:30 crc kubenswrapper[4867]: W0126 11:17:30.431413 4867 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Jan 26 11:17:30 crc kubenswrapper[4867]: W0126 11:17:30.431419 4867 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Jan 26 11:17:30 crc kubenswrapper[4867]: W0126 11:17:30.431423 4867 feature_gate.go:330] unrecognized feature gate: OVNObservability Jan 26 11:17:30 crc kubenswrapper[4867]: W0126 
11:17:30.431428 4867 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Jan 26 11:17:30 crc kubenswrapper[4867]: W0126 11:17:30.431432 4867 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Jan 26 11:17:30 crc kubenswrapper[4867]: W0126 11:17:30.431437 4867 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Jan 26 11:17:30 crc kubenswrapper[4867]: W0126 11:17:30.431447 4867 feature_gate.go:330] unrecognized feature gate: NewOLM Jan 26 11:17:30 crc kubenswrapper[4867]: W0126 11:17:30.431451 4867 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Jan 26 11:17:30 crc kubenswrapper[4867]: W0126 11:17:30.431455 4867 feature_gate.go:330] unrecognized feature gate: PlatformOperators Jan 26 11:17:30 crc kubenswrapper[4867]: W0126 11:17:30.431460 4867 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Jan 26 11:17:30 crc kubenswrapper[4867]: W0126 11:17:30.431464 4867 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Jan 26 11:17:30 crc kubenswrapper[4867]: W0126 11:17:30.431470 4867 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Jan 26 11:17:30 crc kubenswrapper[4867]: W0126 11:17:30.431474 4867 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Jan 26 11:17:30 crc kubenswrapper[4867]: W0126 11:17:30.431478 4867 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Jan 26 11:17:30 crc kubenswrapper[4867]: W0126 11:17:30.431483 4867 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Jan 26 11:17:30 crc kubenswrapper[4867]: W0126 11:17:30.431487 4867 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Jan 26 11:17:30 crc kubenswrapper[4867]: W0126 11:17:30.431494 4867 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
Jan 26 11:17:30 crc kubenswrapper[4867]: W0126 11:17:30.431499 4867 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Jan 26 11:17:30 crc kubenswrapper[4867]: W0126 11:17:30.431507 4867 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Jan 26 11:17:30 crc kubenswrapper[4867]: W0126 11:17:30.431512 4867 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Jan 26 11:17:30 crc kubenswrapper[4867]: W0126 11:17:30.431517 4867 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Jan 26 11:17:30 crc kubenswrapper[4867]: W0126 11:17:30.431522 4867 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Jan 26 11:17:30 crc kubenswrapper[4867]: W0126 11:17:30.431526 4867 feature_gate.go:330] unrecognized feature gate: GatewayAPI Jan 26 11:17:30 crc kubenswrapper[4867]: W0126 11:17:30.431531 4867 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Jan 26 11:17:30 crc kubenswrapper[4867]: W0126 11:17:30.431537 4867 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Jan 26 11:17:30 crc kubenswrapper[4867]: W0126 11:17:30.431542 4867 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Jan 26 11:17:30 crc kubenswrapper[4867]: W0126 11:17:30.431546 4867 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Jan 26 11:17:30 crc kubenswrapper[4867]: W0126 11:17:30.431553 4867 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. 
Jan 26 11:17:30 crc kubenswrapper[4867]: W0126 11:17:30.431558 4867 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Jan 26 11:17:30 crc kubenswrapper[4867]: W0126 11:17:30.431562 4867 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Jan 26 11:17:30 crc kubenswrapper[4867]: W0126 11:17:30.431571 4867 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Jan 26 11:17:30 crc kubenswrapper[4867]: W0126 11:17:30.431576 4867 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Jan 26 11:17:30 crc kubenswrapper[4867]: W0126 11:17:30.431580 4867 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Jan 26 11:17:30 crc kubenswrapper[4867]: W0126 11:17:30.431585 4867 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Jan 26 11:17:30 crc kubenswrapper[4867]: W0126 11:17:30.431590 4867 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Jan 26 11:17:30 crc kubenswrapper[4867]: W0126 11:17:30.431595 4867 feature_gate.go:330] unrecognized feature gate: PinnedImages Jan 26 11:17:30 crc kubenswrapper[4867]: W0126 11:17:30.431600 4867 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Jan 26 11:17:30 crc kubenswrapper[4867]: W0126 11:17:30.431605 4867 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Jan 26 11:17:30 crc kubenswrapper[4867]: W0126 11:17:30.431609 4867 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Jan 26 11:17:30 crc kubenswrapper[4867]: W0126 11:17:30.431613 4867 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Jan 26 11:17:30 crc kubenswrapper[4867]: W0126 11:17:30.431618 4867 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Jan 26 11:17:30 crc kubenswrapper[4867]: I0126 11:17:30.431625 4867 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true 
DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Jan 26 11:17:30 crc kubenswrapper[4867]: I0126 11:17:30.431946 4867 server.go:940] "Client rotation is on, will bootstrap in background" Jan 26 11:17:30 crc kubenswrapper[4867]: I0126 11:17:30.434817 4867 bootstrap.go:85] "Current kubeconfig file contents are still valid, no bootstrap necessary" Jan 26 11:17:30 crc kubenswrapper[4867]: I0126 11:17:30.434921 4867 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Jan 26 11:17:30 crc kubenswrapper[4867]: I0126 11:17:30.435423 4867 server.go:997] "Starting client certificate rotation" Jan 26 11:17:30 crc kubenswrapper[4867]: I0126 11:17:30.435449 4867 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate rotation is enabled Jan 26 11:17:30 crc kubenswrapper[4867]: I0126 11:17:30.435619 4867 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2026-02-24 05:52:08 +0000 UTC, rotation deadline is 2025-12-30 22:11:41.027549123 +0000 UTC Jan 26 11:17:30 crc kubenswrapper[4867]: I0126 11:17:30.435722 4867 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Jan 26 11:17:30 crc kubenswrapper[4867]: I0126 11:17:30.440495 4867 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Jan 26 11:17:30 crc kubenswrapper[4867]: I0126 11:17:30.442568 4867 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Jan 26 11:17:30 crc kubenswrapper[4867]: E0126 11:17:30.442606 4867 
certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.102.83.115:6443: connect: connection refused" logger="UnhandledError" Jan 26 11:17:30 crc kubenswrapper[4867]: I0126 11:17:30.454177 4867 log.go:25] "Validated CRI v1 runtime API" Jan 26 11:17:30 crc kubenswrapper[4867]: I0126 11:17:30.466898 4867 log.go:25] "Validated CRI v1 image API" Jan 26 11:17:30 crc kubenswrapper[4867]: I0126 11:17:30.468346 4867 server.go:1437] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jan 26 11:17:30 crc kubenswrapper[4867]: I0126 11:17:30.470688 4867 fs.go:133] Filesystem UUIDs: map[0b076daa-c26a-46d2-b3a6-72a8dbc6e257:/dev/vda4 2026-01-26-11-13-30-00:/dev/sr0 7B77-95E7:/dev/vda2 de0497b0-db1b-465a-b278-03db02455c71:/dev/vda3] Jan 26 11:17:30 crc kubenswrapper[4867]: I0126 11:17:30.471064 4867 fs.go:134] Filesystem partitions: map[/dev/shm:{mountpoint:/dev/shm major:0 minor:22 fsType:tmpfs blockSize:0} /dev/vda3:{mountpoint:/boot major:252 minor:3 fsType:ext4 blockSize:0} /dev/vda4:{mountpoint:/var major:252 minor:4 fsType:xfs blockSize:0} /run:{mountpoint:/run major:0 minor:24 fsType:tmpfs blockSize:0} /run/user/1000:{mountpoint:/run/user/1000 major:0 minor:42 fsType:tmpfs blockSize:0} /tmp:{mountpoint:/tmp major:0 minor:30 fsType:tmpfs blockSize:0} /var/lib/etcd:{mountpoint:/var/lib/etcd major:0 minor:43 fsType:tmpfs blockSize:0}] Jan 26 11:17:30 crc kubenswrapper[4867]: I0126 11:17:30.489002 4867 manager.go:217] Machine: {Timestamp:2026-01-26 11:17:30.487845579 +0000 UTC m=+0.186420509 CPUVendorID:AuthenticAMD NumCores:12 NumPhysicalCores:1 NumSockets:12 CpuFrequency:2799998 MemoryCapacity:33654116352 SwapCapacity:0 MemoryByType:map[] NVMInfo:{MemoryModeCapacity:0 
AppDirectModeCapacity:0 AvgPowerBudget:0} HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] MachineID:21801e6708c44f15b81395eb736a7cec SystemUUID:db81a289-a49c-46ba-99b1-fd2eecfd5410 BootID:b5a0e2ac-5e06-462a-99e0-d57b8e5cb754 Filesystems:[{Device:/dev/vda3 DeviceMajor:252 DeviceMinor:3 Capacity:366869504 Type:vfs Inodes:98304 HasInodes:true} {Device:/run/user/1000 DeviceMajor:0 DeviceMinor:42 Capacity:3365408768 Type:vfs Inodes:821633 HasInodes:true} {Device:/var/lib/etcd DeviceMajor:0 DeviceMinor:43 Capacity:1073741824 Type:vfs Inodes:4108168 HasInodes:true} {Device:/dev/shm DeviceMajor:0 DeviceMinor:22 Capacity:16827056128 Type:vfs Inodes:4108168 HasInodes:true} {Device:/run DeviceMajor:0 DeviceMinor:24 Capacity:6730825728 Type:vfs Inodes:819200 HasInodes:true} {Device:/dev/vda4 DeviceMajor:252 DeviceMinor:4 Capacity:85292941312 Type:vfs Inodes:41679680 HasInodes:true} {Device:/tmp DeviceMajor:0 DeviceMinor:30 Capacity:16827060224 Type:vfs Inodes:1048576 HasInodes:true}] DiskMap:map[252:0:{Name:vda Major:252 Minor:0 Size:214748364800 Scheduler:none}] NetworkDevices:[{Name:br-ex MacAddress:fa:16:3e:31:b3:6d Speed:0 Mtu:1500} {Name:br-int MacAddress:d6:39:55:2e:22:71 Speed:0 Mtu:1400} {Name:ens3 MacAddress:fa:16:3e:31:b3:6d Speed:-1 Mtu:1500} {Name:ens7 MacAddress:fa:16:3e:17:b9:ec Speed:-1 Mtu:1500} {Name:ens7.20 MacAddress:52:54:00:f3:f5:75 Speed:-1 Mtu:1496} {Name:ens7.21 MacAddress:52:54:00:4b:b7:9e Speed:-1 Mtu:1496} {Name:ens7.22 MacAddress:52:54:00:a7:86:7d Speed:-1 Mtu:1496} {Name:eth10 MacAddress:22:ee:e6:66:02:03 Speed:0 Mtu:1500} {Name:ovn-k8s-mp0 MacAddress:0a:58:0a:d9:00:02 Speed:0 Mtu:1400} {Name:ovs-system MacAddress:a2:3d:45:eb:dd:96 Speed:0 Mtu:1500}] Topology:[{Id:0 Memory:33654116352 HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] Cores:[{Id:0 Threads:[0] Caches:[{Id:0 Size:32768 Type:Data Level:1} {Id:0 Size:32768 Type:Instruction Level:1} {Id:0 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:0 
Size:16777216 Type:Unified Level:3}] SocketID:0 BookID: DrawerID:} {Id:0 Threads:[1] Caches:[{Id:1 Size:32768 Type:Data Level:1} {Id:1 Size:32768 Type:Instruction Level:1} {Id:1 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:1 Size:16777216 Type:Unified Level:3}] SocketID:1 BookID: DrawerID:} {Id:0 Threads:[10] Caches:[{Id:10 Size:32768 Type:Data Level:1} {Id:10 Size:32768 Type:Instruction Level:1} {Id:10 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:10 Size:16777216 Type:Unified Level:3}] SocketID:10 BookID: DrawerID:} {Id:0 Threads:[11] Caches:[{Id:11 Size:32768 Type:Data Level:1} {Id:11 Size:32768 Type:Instruction Level:1} {Id:11 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:11 Size:16777216 Type:Unified Level:3}] SocketID:11 BookID: DrawerID:} {Id:0 Threads:[2] Caches:[{Id:2 Size:32768 Type:Data Level:1} {Id:2 Size:32768 Type:Instruction Level:1} {Id:2 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:2 Size:16777216 Type:Unified Level:3}] SocketID:2 BookID: DrawerID:} {Id:0 Threads:[3] Caches:[{Id:3 Size:32768 Type:Data Level:1} {Id:3 Size:32768 Type:Instruction Level:1} {Id:3 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:3 Size:16777216 Type:Unified Level:3}] SocketID:3 BookID: DrawerID:} {Id:0 Threads:[4] Caches:[{Id:4 Size:32768 Type:Data Level:1} {Id:4 Size:32768 Type:Instruction Level:1} {Id:4 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:4 Size:16777216 Type:Unified Level:3}] SocketID:4 BookID: DrawerID:} {Id:0 Threads:[5] Caches:[{Id:5 Size:32768 Type:Data Level:1} {Id:5 Size:32768 Type:Instruction Level:1} {Id:5 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:5 Size:16777216 Type:Unified Level:3}] SocketID:5 BookID: DrawerID:} {Id:0 Threads:[6] Caches:[{Id:6 Size:32768 Type:Data Level:1} {Id:6 Size:32768 Type:Instruction Level:1} {Id:6 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:6 Size:16777216 Type:Unified Level:3}] SocketID:6 BookID: DrawerID:} {Id:0 Threads:[7] Caches:[{Id:7 Size:32768 Type:Data 
Level:1} {Id:7 Size:32768 Type:Instruction Level:1} {Id:7 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:7 Size:16777216 Type:Unified Level:3}] SocketID:7 BookID: DrawerID:} {Id:0 Threads:[8] Caches:[{Id:8 Size:32768 Type:Data Level:1} {Id:8 Size:32768 Type:Instruction Level:1} {Id:8 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:8 Size:16777216 Type:Unified Level:3}] SocketID:8 BookID: DrawerID:} {Id:0 Threads:[9] Caches:[{Id:9 Size:32768 Type:Data Level:1} {Id:9 Size:32768 Type:Instruction Level:1} {Id:9 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:9 Size:16777216 Type:Unified Level:3}] SocketID:9 BookID: DrawerID:}] Caches:[] Distances:[10]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None} Jan 26 11:17:30 crc kubenswrapper[4867]: I0126 11:17:30.489264 4867 manager_no_libpfm.go:29] cAdvisor is build without cgo and/or libpfm support. Perf event counters are not available. Jan 26 11:17:30 crc kubenswrapper[4867]: I0126 11:17:30.489511 4867 manager.go:233] Version: {KernelVersion:5.14.0-427.50.2.el9_4.x86_64 ContainerOsVersion:Red Hat Enterprise Linux CoreOS 418.94.202502100215-0 DockerVersion: DockerAPIVersion: CadvisorVersion: CadvisorRevision:} Jan 26 11:17:30 crc kubenswrapper[4867]: I0126 11:17:30.489901 4867 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Jan 26 11:17:30 crc kubenswrapper[4867]: I0126 11:17:30.490102 4867 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 26 11:17:30 crc kubenswrapper[4867]: I0126 11:17:30.490154 4867 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" 
nodeConfig={"NodeName":"crc","RuntimeCgroupsName":"/system.slice/crio.service","SystemCgroupsName":"/system.slice","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":true,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":{"cpu":"200m","ephemeral-storage":"350Mi","memory":"350Mi"},"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":4096,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 26 11:17:30 crc kubenswrapper[4867]: I0126 11:17:30.490404 4867 topology_manager.go:138] "Creating topology manager with none policy" Jan 26 11:17:30 crc kubenswrapper[4867]: I0126 11:17:30.490414 4867 container_manager_linux.go:303] "Creating device plugin manager" Jan 26 11:17:30 crc kubenswrapper[4867]: I0126 11:17:30.490629 4867 manager.go:142] "Creating Device Plugin manager" 
path="/var/lib/kubelet/device-plugins/kubelet.sock" Jan 26 11:17:30 crc kubenswrapper[4867]: I0126 11:17:30.490670 4867 server.go:66] "Creating device plugin registration server" version="v1beta1" socket="/var/lib/kubelet/device-plugins/kubelet.sock" Jan 26 11:17:30 crc kubenswrapper[4867]: I0126 11:17:30.490941 4867 state_mem.go:36] "Initialized new in-memory state store" Jan 26 11:17:30 crc kubenswrapper[4867]: I0126 11:17:30.491028 4867 server.go:1245] "Using root directory" path="/var/lib/kubelet" Jan 26 11:17:30 crc kubenswrapper[4867]: I0126 11:17:30.491827 4867 kubelet.go:418] "Attempting to sync node with API server" Jan 26 11:17:30 crc kubenswrapper[4867]: I0126 11:17:30.491856 4867 kubelet.go:313] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 26 11:17:30 crc kubenswrapper[4867]: I0126 11:17:30.491886 4867 file.go:69] "Watching path" path="/etc/kubernetes/manifests" Jan 26 11:17:30 crc kubenswrapper[4867]: I0126 11:17:30.491903 4867 kubelet.go:324] "Adding apiserver pod source" Jan 26 11:17:30 crc kubenswrapper[4867]: I0126 11:17:30.491917 4867 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 26 11:17:30 crc kubenswrapper[4867]: W0126 11:17:30.493417 4867 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.115:6443: connect: connection refused Jan 26 11:17:30 crc kubenswrapper[4867]: W0126 11:17:30.493422 4867 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.115:6443: connect: connection refused Jan 26 11:17:30 crc kubenswrapper[4867]: E0126 11:17:30.493709 4867 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch 
*v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.115:6443: connect: connection refused" logger="UnhandledError" Jan 26 11:17:30 crc kubenswrapper[4867]: E0126 11:17:30.493737 4867 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.115:6443: connect: connection refused" logger="UnhandledError" Jan 26 11:17:30 crc kubenswrapper[4867]: I0126 11:17:30.494321 4867 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="cri-o" version="1.31.5-4.rhaos4.18.gitdad78d5.el9" apiVersion="v1" Jan 26 11:17:30 crc kubenswrapper[4867]: I0126 11:17:30.494672 4867 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-server-current.pem". 
Jan 26 11:17:30 crc kubenswrapper[4867]: I0126 11:17:30.495269 4867 kubelet.go:854] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 26 11:17:30 crc kubenswrapper[4867]: I0126 11:17:30.495913 4867 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume" Jan 26 11:17:30 crc kubenswrapper[4867]: I0126 11:17:30.495950 4867 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/empty-dir" Jan 26 11:17:30 crc kubenswrapper[4867]: I0126 11:17:30.495993 4867 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/git-repo" Jan 26 11:17:30 crc kubenswrapper[4867]: I0126 11:17:30.496003 4867 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/host-path" Jan 26 11:17:30 crc kubenswrapper[4867]: I0126 11:17:30.496027 4867 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/nfs" Jan 26 11:17:30 crc kubenswrapper[4867]: I0126 11:17:30.496041 4867 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/secret" Jan 26 11:17:30 crc kubenswrapper[4867]: I0126 11:17:30.496054 4867 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/iscsi" Jan 26 11:17:30 crc kubenswrapper[4867]: I0126 11:17:30.496072 4867 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/downward-api" Jan 26 11:17:30 crc kubenswrapper[4867]: I0126 11:17:30.496084 4867 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/fc" Jan 26 11:17:30 crc kubenswrapper[4867]: I0126 11:17:30.496093 4867 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/configmap" Jan 26 11:17:30 crc kubenswrapper[4867]: I0126 11:17:30.496106 4867 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/projected" Jan 26 11:17:30 crc kubenswrapper[4867]: I0126 11:17:30.496115 4867 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/local-volume" Jan 26 11:17:30 crc kubenswrapper[4867]: I0126 11:17:30.496525 4867 plugins.go:603] "Loaded volume plugin" 
pluginName="kubernetes.io/csi" Jan 26 11:17:30 crc kubenswrapper[4867]: I0126 11:17:30.496972 4867 server.go:1280] "Started kubelet" Jan 26 11:17:30 crc kubenswrapper[4867]: I0126 11:17:30.497170 4867 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jan 26 11:17:30 crc kubenswrapper[4867]: I0126 11:17:30.497411 4867 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.115:6443: connect: connection refused Jan 26 11:17:30 crc kubenswrapper[4867]: I0126 11:17:30.497322 4867 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 26 11:17:30 crc kubenswrapper[4867]: I0126 11:17:30.498656 4867 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 26 11:17:30 crc systemd[1]: Started Kubernetes Kubelet. Jan 26 11:17:30 crc kubenswrapper[4867]: I0126 11:17:30.500128 4867 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate rotation is enabled Jan 26 11:17:30 crc kubenswrapper[4867]: I0126 11:17:30.500195 4867 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 26 11:17:30 crc kubenswrapper[4867]: I0126 11:17:30.500829 4867 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-01 02:21:24.034813265 +0000 UTC Jan 26 11:17:30 crc kubenswrapper[4867]: I0126 11:17:30.500953 4867 volume_manager.go:287] "The desired_state_of_world populator starts" Jan 26 11:17:30 crc kubenswrapper[4867]: I0126 11:17:30.500984 4867 volume_manager.go:289] "Starting Kubelet Volume Manager" Jan 26 11:17:30 crc kubenswrapper[4867]: I0126 11:17:30.501198 4867 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Jan 26 11:17:30 crc kubenswrapper[4867]: I0126 11:17:30.501819 4867 
server.go:460] "Adding debug handlers to kubelet server" Jan 26 11:17:30 crc kubenswrapper[4867]: E0126 11:17:30.501935 4867 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Jan 26 11:17:30 crc kubenswrapper[4867]: W0126 11:17:30.503140 4867 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.115:6443: connect: connection refused Jan 26 11:17:30 crc kubenswrapper[4867]: E0126 11:17:30.503205 4867 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.115:6443: connect: connection refused" logger="UnhandledError" Jan 26 11:17:30 crc kubenswrapper[4867]: I0126 11:17:30.505259 4867 factory.go:55] Registering systemd factory Jan 26 11:17:30 crc kubenswrapper[4867]: I0126 11:17:30.505316 4867 factory.go:221] Registration of the systemd container factory successfully Jan 26 11:17:30 crc kubenswrapper[4867]: E0126 11:17:30.505304 4867 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.115:6443: connect: connection refused" interval="200ms" Jan 26 11:17:30 crc kubenswrapper[4867]: E0126 11:17:30.503605 4867 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.102.83.115:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.188e43cccd1eb8ec default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-26 11:17:30.496915692 +0000 UTC m=+0.195490602,LastTimestamp:2026-01-26 11:17:30.496915692 +0000 UTC m=+0.195490602,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 26 11:17:30 crc kubenswrapper[4867]: I0126 11:17:30.516266 4867 factory.go:153] Registering CRI-O factory Jan 26 11:17:30 crc kubenswrapper[4867]: I0126 11:17:30.516313 4867 factory.go:221] Registration of the crio container factory successfully Jan 26 11:17:30 crc kubenswrapper[4867]: I0126 11:17:30.516383 4867 factory.go:219] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory Jan 26 11:17:30 crc kubenswrapper[4867]: I0126 11:17:30.516405 4867 factory.go:103] Registering Raw factory Jan 26 11:17:30 crc kubenswrapper[4867]: I0126 11:17:30.516420 4867 manager.go:1196] Started watching for new ooms in manager Jan 26 11:17:30 crc kubenswrapper[4867]: I0126 11:17:30.519555 4867 manager.go:319] Starting recovery of all containers Jan 26 11:17:30 crc kubenswrapper[4867]: I0126 11:17:30.520547 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" seLinuxMountContext="" Jan 26 11:17:30 crc kubenswrapper[4867]: I0126 11:17:30.520654 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" 
volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" seLinuxMountContext="" Jan 26 11:17:30 crc kubenswrapper[4867]: I0126 11:17:30.520680 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" seLinuxMountContext="" Jan 26 11:17:30 crc kubenswrapper[4867]: I0126 11:17:30.520699 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" seLinuxMountContext="" Jan 26 11:17:30 crc kubenswrapper[4867]: I0126 11:17:30.520717 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" seLinuxMountContext="" Jan 26 11:17:30 crc kubenswrapper[4867]: I0126 11:17:30.520735 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" seLinuxMountContext="" Jan 26 11:17:30 crc kubenswrapper[4867]: I0126 11:17:30.520774 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" seLinuxMountContext="" Jan 26 11:17:30 crc kubenswrapper[4867]: I0126 11:17:30.520791 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" 
seLinuxMountContext="" Jan 26 11:17:30 crc kubenswrapper[4867]: I0126 11:17:30.520812 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" seLinuxMountContext="" Jan 26 11:17:30 crc kubenswrapper[4867]: I0126 11:17:30.520826 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" seLinuxMountContext="" Jan 26 11:17:30 crc kubenswrapper[4867]: I0126 11:17:30.520841 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" seLinuxMountContext="" Jan 26 11:17:30 crc kubenswrapper[4867]: I0126 11:17:30.520855 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" seLinuxMountContext="" Jan 26 11:17:30 crc kubenswrapper[4867]: I0126 11:17:30.520870 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" seLinuxMountContext="" Jan 26 11:17:30 crc kubenswrapper[4867]: I0126 11:17:30.520889 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" seLinuxMountContext="" Jan 26 11:17:30 crc kubenswrapper[4867]: I0126 11:17:30.520933 4867 
reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert" seLinuxMountContext="" Jan 26 11:17:30 crc kubenswrapper[4867]: I0126 11:17:30.520949 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" seLinuxMountContext="" Jan 26 11:17:30 crc kubenswrapper[4867]: I0126 11:17:30.520967 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" seLinuxMountContext="" Jan 26 11:17:30 crc kubenswrapper[4867]: I0126 11:17:30.520979 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" seLinuxMountContext="" Jan 26 11:17:30 crc kubenswrapper[4867]: I0126 11:17:30.520991 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" seLinuxMountContext="" Jan 26 11:17:30 crc kubenswrapper[4867]: I0126 11:17:30.521004 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" seLinuxMountContext="" Jan 26 11:17:30 crc kubenswrapper[4867]: I0126 11:17:30.521021 4867 reconstruct.go:130] "Volume is marked as uncertain and added into 
the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" seLinuxMountContext="" Jan 26 11:17:30 crc kubenswrapper[4867]: I0126 11:17:30.521041 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" seLinuxMountContext="" Jan 26 11:17:30 crc kubenswrapper[4867]: I0126 11:17:30.521054 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" seLinuxMountContext="" Jan 26 11:17:30 crc kubenswrapper[4867]: I0126 11:17:30.521066 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" seLinuxMountContext="" Jan 26 11:17:30 crc kubenswrapper[4867]: I0126 11:17:30.521081 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" seLinuxMountContext="" Jan 26 11:17:30 crc kubenswrapper[4867]: I0126 11:17:30.521096 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" seLinuxMountContext="" Jan 26 11:17:30 crc kubenswrapper[4867]: I0126 11:17:30.521359 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" 
volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" seLinuxMountContext="" Jan 26 11:17:30 crc kubenswrapper[4867]: I0126 11:17:30.521410 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" seLinuxMountContext="" Jan 26 11:17:30 crc kubenswrapper[4867]: I0126 11:17:30.521426 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" seLinuxMountContext="" Jan 26 11:17:30 crc kubenswrapper[4867]: I0126 11:17:30.521440 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" seLinuxMountContext="" Jan 26 11:17:30 crc kubenswrapper[4867]: I0126 11:17:30.521454 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" seLinuxMountContext="" Jan 26 11:17:30 crc kubenswrapper[4867]: I0126 11:17:30.521468 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" seLinuxMountContext="" Jan 26 11:17:30 crc kubenswrapper[4867]: I0126 11:17:30.521483 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf" seLinuxMountContext="" Jan 26 
11:17:30 crc kubenswrapper[4867]: I0126 11:17:30.521529 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" seLinuxMountContext="" Jan 26 11:17:30 crc kubenswrapper[4867]: I0126 11:17:30.521543 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" seLinuxMountContext="" Jan 26 11:17:30 crc kubenswrapper[4867]: I0126 11:17:30.521557 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm" seLinuxMountContext="" Jan 26 11:17:30 crc kubenswrapper[4867]: I0126 11:17:30.521571 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" seLinuxMountContext="" Jan 26 11:17:30 crc kubenswrapper[4867]: I0126 11:17:30.521587 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" seLinuxMountContext="" Jan 26 11:17:30 crc kubenswrapper[4867]: I0126 11:17:30.521606 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" seLinuxMountContext="" Jan 26 11:17:30 crc kubenswrapper[4867]: I0126 11:17:30.521622 4867 reconstruct.go:130] 
"Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" seLinuxMountContext="" Jan 26 11:17:30 crc kubenswrapper[4867]: I0126 11:17:30.521640 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="44663579-783b-4372-86d6-acf235a62d72" volumeName="kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" seLinuxMountContext="" Jan 26 11:17:30 crc kubenswrapper[4867]: I0126 11:17:30.521656 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" seLinuxMountContext="" Jan 26 11:17:30 crc kubenswrapper[4867]: I0126 11:17:30.521682 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" seLinuxMountContext="" Jan 26 11:17:30 crc kubenswrapper[4867]: I0126 11:17:30.521702 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" seLinuxMountContext="" Jan 26 11:17:30 crc kubenswrapper[4867]: I0126 11:17:30.521720 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" seLinuxMountContext="" Jan 26 11:17:30 crc kubenswrapper[4867]: I0126 11:17:30.521758 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" seLinuxMountContext="" Jan 26 11:17:30 crc kubenswrapper[4867]: I0126 11:17:30.521776 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" volumeName="kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" seLinuxMountContext="" Jan 26 11:17:30 crc kubenswrapper[4867]: I0126 11:17:30.521791 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" seLinuxMountContext="" Jan 26 11:17:30 crc kubenswrapper[4867]: I0126 11:17:30.521804 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" seLinuxMountContext="" Jan 26 11:17:30 crc kubenswrapper[4867]: I0126 11:17:30.521819 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" seLinuxMountContext="" Jan 26 11:17:30 crc kubenswrapper[4867]: I0126 11:17:30.521833 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" seLinuxMountContext="" Jan 26 11:17:30 crc kubenswrapper[4867]: I0126 11:17:30.521851 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" 
volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" seLinuxMountContext="" Jan 26 11:17:30 crc kubenswrapper[4867]: I0126 11:17:30.521874 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" seLinuxMountContext="" Jan 26 11:17:30 crc kubenswrapper[4867]: I0126 11:17:30.521892 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" seLinuxMountContext="" Jan 26 11:17:30 crc kubenswrapper[4867]: I0126 11:17:30.521911 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" seLinuxMountContext="" Jan 26 11:17:30 crc kubenswrapper[4867]: I0126 11:17:30.521930 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" seLinuxMountContext="" Jan 26 11:17:30 crc kubenswrapper[4867]: I0126 11:17:30.521950 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" seLinuxMountContext="" Jan 26 11:17:30 crc kubenswrapper[4867]: I0126 11:17:30.521964 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" 
volumeName="kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" seLinuxMountContext="" Jan 26 11:17:30 crc kubenswrapper[4867]: I0126 11:17:30.521978 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" seLinuxMountContext="" Jan 26 11:17:30 crc kubenswrapper[4867]: I0126 11:17:30.522019 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" seLinuxMountContext="" Jan 26 11:17:30 crc kubenswrapper[4867]: I0126 11:17:30.522042 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" seLinuxMountContext="" Jan 26 11:17:30 crc kubenswrapper[4867]: I0126 11:17:30.522058 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" seLinuxMountContext="" Jan 26 11:17:30 crc kubenswrapper[4867]: I0126 11:17:30.522073 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" seLinuxMountContext="" Jan 26 11:17:30 crc kubenswrapper[4867]: I0126 11:17:30.522087 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" seLinuxMountContext="" 
Jan 26 11:17:30 crc kubenswrapper[4867]: I0126 11:17:30.522100 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" seLinuxMountContext="" Jan 26 11:17:30 crc kubenswrapper[4867]: I0126 11:17:30.522114 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" seLinuxMountContext="" Jan 26 11:17:30 crc kubenswrapper[4867]: I0126 11:17:30.522137 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" seLinuxMountContext="" Jan 26 11:17:30 crc kubenswrapper[4867]: I0126 11:17:30.522152 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" seLinuxMountContext="" Jan 26 11:17:30 crc kubenswrapper[4867]: I0126 11:17:30.522163 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" seLinuxMountContext="" Jan 26 11:17:30 crc kubenswrapper[4867]: I0126 11:17:30.522176 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" seLinuxMountContext="" Jan 26 11:17:30 crc kubenswrapper[4867]: I0126 11:17:30.522188 4867 reconstruct.go:130] 
"Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" seLinuxMountContext="" Jan 26 11:17:30 crc kubenswrapper[4867]: I0126 11:17:30.522206 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" seLinuxMountContext="" Jan 26 11:17:30 crc kubenswrapper[4867]: I0126 11:17:30.522235 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d751cbb-f2e2-430d-9754-c882a5e924a5" volumeName="kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl" seLinuxMountContext="" Jan 26 11:17:30 crc kubenswrapper[4867]: I0126 11:17:30.522248 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" seLinuxMountContext="" Jan 26 11:17:30 crc kubenswrapper[4867]: I0126 11:17:30.522261 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" seLinuxMountContext="" Jan 26 11:17:30 crc kubenswrapper[4867]: I0126 11:17:30.522273 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" seLinuxMountContext="" Jan 26 11:17:30 crc kubenswrapper[4867]: I0126 11:17:30.522286 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" seLinuxMountContext="" Jan 26 11:17:30 crc kubenswrapper[4867]: I0126 11:17:30.522300 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" seLinuxMountContext="" Jan 26 11:17:30 crc kubenswrapper[4867]: I0126 11:17:30.522315 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" seLinuxMountContext="" Jan 26 11:17:30 crc kubenswrapper[4867]: I0126 11:17:30.522329 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" seLinuxMountContext="" Jan 26 11:17:30 crc kubenswrapper[4867]: I0126 11:17:30.522343 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" seLinuxMountContext="" Jan 26 11:17:30 crc kubenswrapper[4867]: I0126 11:17:30.522356 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" seLinuxMountContext="" Jan 26 11:17:30 crc kubenswrapper[4867]: I0126 11:17:30.522373 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" 
volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" seLinuxMountContext="" Jan 26 11:17:30 crc kubenswrapper[4867]: I0126 11:17:30.522388 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" seLinuxMountContext="" Jan 26 11:17:30 crc kubenswrapper[4867]: I0126 11:17:30.522402 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" seLinuxMountContext="" Jan 26 11:17:30 crc kubenswrapper[4867]: I0126 11:17:30.522415 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" seLinuxMountContext="" Jan 26 11:17:30 crc kubenswrapper[4867]: I0126 11:17:30.522428 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" seLinuxMountContext="" Jan 26 11:17:30 crc kubenswrapper[4867]: I0126 11:17:30.522439 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" seLinuxMountContext="" Jan 26 11:17:30 crc kubenswrapper[4867]: I0126 11:17:30.522451 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" 
volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" seLinuxMountContext="" Jan 26 11:17:30 crc kubenswrapper[4867]: I0126 11:17:30.522464 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" seLinuxMountContext="" Jan 26 11:17:30 crc kubenswrapper[4867]: I0126 11:17:30.522478 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" seLinuxMountContext="" Jan 26 11:17:30 crc kubenswrapper[4867]: I0126 11:17:30.522490 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" seLinuxMountContext="" Jan 26 11:17:30 crc kubenswrapper[4867]: I0126 11:17:30.522502 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" seLinuxMountContext="" Jan 26 11:17:30 crc kubenswrapper[4867]: I0126 11:17:30.522516 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" seLinuxMountContext="" Jan 26 11:17:30 crc kubenswrapper[4867]: I0126 11:17:30.522534 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" 
volumeName="kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" seLinuxMountContext="" Jan 26 11:17:30 crc kubenswrapper[4867]: I0126 11:17:30.522545 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" seLinuxMountContext="" Jan 26 11:17:30 crc kubenswrapper[4867]: I0126 11:17:30.522558 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" seLinuxMountContext="" Jan 26 11:17:30 crc kubenswrapper[4867]: I0126 11:17:30.522571 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" seLinuxMountContext="" Jan 26 11:17:30 crc kubenswrapper[4867]: I0126 11:17:30.522584 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" seLinuxMountContext="" Jan 26 11:17:30 crc kubenswrapper[4867]: I0126 11:17:30.522598 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" seLinuxMountContext="" Jan 26 11:17:30 crc kubenswrapper[4867]: I0126 11:17:30.522611 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" 
volumeName="kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" seLinuxMountContext="" Jan 26 11:17:30 crc kubenswrapper[4867]: I0126 11:17:30.522624 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" seLinuxMountContext="" Jan 26 11:17:30 crc kubenswrapper[4867]: I0126 11:17:30.522636 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" seLinuxMountContext="" Jan 26 11:17:30 crc kubenswrapper[4867]: I0126 11:17:30.522649 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" seLinuxMountContext="" Jan 26 11:17:30 crc kubenswrapper[4867]: I0126 11:17:30.522683 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" seLinuxMountContext="" Jan 26 11:17:30 crc kubenswrapper[4867]: I0126 11:17:30.522696 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" seLinuxMountContext="" Jan 26 11:17:30 crc kubenswrapper[4867]: I0126 11:17:30.522711 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" 
volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" seLinuxMountContext="" Jan 26 11:17:30 crc kubenswrapper[4867]: I0126 11:17:30.522726 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" seLinuxMountContext="" Jan 26 11:17:30 crc kubenswrapper[4867]: I0126 11:17:30.523512 4867 reconstruct.go:144] "Volume is marked device as uncertain and added into the actual state" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" deviceMountPath="/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount" Jan 26 11:17:30 crc kubenswrapper[4867]: I0126 11:17:30.523552 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" seLinuxMountContext="" Jan 26 11:17:30 crc kubenswrapper[4867]: I0126 11:17:30.523575 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" seLinuxMountContext="" Jan 26 11:17:30 crc kubenswrapper[4867]: I0126 11:17:30.523592 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" seLinuxMountContext="" Jan 26 11:17:30 crc kubenswrapper[4867]: I0126 11:17:30.523611 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" seLinuxMountContext="" Jan 26 11:17:30 crc kubenswrapper[4867]: I0126 11:17:30.523626 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" seLinuxMountContext="" Jan 26 11:17:30 crc kubenswrapper[4867]: I0126 11:17:30.523642 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" seLinuxMountContext="" Jan 26 11:17:30 crc kubenswrapper[4867]: I0126 11:17:30.523660 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" seLinuxMountContext="" Jan 26 11:17:30 crc kubenswrapper[4867]: I0126 11:17:30.523676 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" seLinuxMountContext="" Jan 26 11:17:30 crc kubenswrapper[4867]: I0126 11:17:30.523700 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" seLinuxMountContext="" Jan 26 11:17:30 crc kubenswrapper[4867]: I0126 11:17:30.523723 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" 
volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" seLinuxMountContext="" Jan 26 11:17:30 crc kubenswrapper[4867]: I0126 11:17:30.523742 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" seLinuxMountContext="" Jan 26 11:17:30 crc kubenswrapper[4867]: I0126 11:17:30.523759 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" seLinuxMountContext="" Jan 26 11:17:30 crc kubenswrapper[4867]: I0126 11:17:30.523775 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" seLinuxMountContext="" Jan 26 11:17:30 crc kubenswrapper[4867]: I0126 11:17:30.523793 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" seLinuxMountContext="" Jan 26 11:17:30 crc kubenswrapper[4867]: I0126 11:17:30.523808 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" seLinuxMountContext="" Jan 26 11:17:30 crc kubenswrapper[4867]: I0126 11:17:30.523824 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" 
seLinuxMountContext="" Jan 26 11:17:30 crc kubenswrapper[4867]: I0126 11:17:30.523840 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" seLinuxMountContext="" Jan 26 11:17:30 crc kubenswrapper[4867]: I0126 11:17:30.523857 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" seLinuxMountContext="" Jan 26 11:17:30 crc kubenswrapper[4867]: I0126 11:17:30.523873 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" seLinuxMountContext="" Jan 26 11:17:30 crc kubenswrapper[4867]: I0126 11:17:30.523888 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" seLinuxMountContext="" Jan 26 11:17:30 crc kubenswrapper[4867]: I0126 11:17:30.523901 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" seLinuxMountContext="" Jan 26 11:17:30 crc kubenswrapper[4867]: I0126 11:17:30.523916 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" seLinuxMountContext="" Jan 26 11:17:30 crc kubenswrapper[4867]: 
I0126 11:17:30.523929 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" seLinuxMountContext="" Jan 26 11:17:30 crc kubenswrapper[4867]: I0126 11:17:30.523950 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" seLinuxMountContext="" Jan 26 11:17:30 crc kubenswrapper[4867]: I0126 11:17:30.524002 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5" seLinuxMountContext="" Jan 26 11:17:30 crc kubenswrapper[4867]: I0126 11:17:30.524018 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" seLinuxMountContext="" Jan 26 11:17:30 crc kubenswrapper[4867]: I0126 11:17:30.524034 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" seLinuxMountContext="" Jan 26 11:17:30 crc kubenswrapper[4867]: I0126 11:17:30.524052 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" seLinuxMountContext="" Jan 26 11:17:30 crc kubenswrapper[4867]: I0126 11:17:30.524067 4867 reconstruct.go:130] "Volume is marked 
as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" seLinuxMountContext="" Jan 26 11:17:30 crc kubenswrapper[4867]: I0126 11:17:30.524084 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" seLinuxMountContext="" Jan 26 11:17:30 crc kubenswrapper[4867]: I0126 11:17:30.524098 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" seLinuxMountContext="" Jan 26 11:17:30 crc kubenswrapper[4867]: I0126 11:17:30.524114 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" seLinuxMountContext="" Jan 26 11:17:30 crc kubenswrapper[4867]: I0126 11:17:30.524129 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" seLinuxMountContext="" Jan 26 11:17:30 crc kubenswrapper[4867]: I0126 11:17:30.524146 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" seLinuxMountContext="" Jan 26 11:17:30 crc kubenswrapper[4867]: I0126 11:17:30.524159 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" seLinuxMountContext="" Jan 26 11:17:30 crc kubenswrapper[4867]: I0126 11:17:30.524174 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script" seLinuxMountContext="" Jan 26 11:17:30 crc kubenswrapper[4867]: I0126 11:17:30.524190 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" seLinuxMountContext="" Jan 26 11:17:30 crc kubenswrapper[4867]: I0126 11:17:30.524207 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" seLinuxMountContext="" Jan 26 11:17:30 crc kubenswrapper[4867]: I0126 11:17:30.524239 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" seLinuxMountContext="" Jan 26 11:17:30 crc kubenswrapper[4867]: I0126 11:17:30.524254 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" seLinuxMountContext="" Jan 26 11:17:30 crc kubenswrapper[4867]: I0126 11:17:30.524272 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" 
volumeName="kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb" seLinuxMountContext="" Jan 26 11:17:30 crc kubenswrapper[4867]: I0126 11:17:30.524290 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" seLinuxMountContext="" Jan 26 11:17:30 crc kubenswrapper[4867]: I0126 11:17:30.524307 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" seLinuxMountContext="" Jan 26 11:17:30 crc kubenswrapper[4867]: I0126 11:17:30.524346 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" seLinuxMountContext="" Jan 26 11:17:30 crc kubenswrapper[4867]: I0126 11:17:30.524363 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" seLinuxMountContext="" Jan 26 11:17:30 crc kubenswrapper[4867]: I0126 11:17:30.524379 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" seLinuxMountContext="" Jan 26 11:17:30 crc kubenswrapper[4867]: I0126 11:17:30.524396 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" 
volumeName="kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert" seLinuxMountContext="" Jan 26 11:17:30 crc kubenswrapper[4867]: I0126 11:17:30.524412 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" seLinuxMountContext="" Jan 26 11:17:30 crc kubenswrapper[4867]: I0126 11:17:30.524429 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" seLinuxMountContext="" Jan 26 11:17:30 crc kubenswrapper[4867]: I0126 11:17:30.524449 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" seLinuxMountContext="" Jan 26 11:17:30 crc kubenswrapper[4867]: I0126 11:17:30.524465 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" seLinuxMountContext="" Jan 26 11:17:30 crc kubenswrapper[4867]: I0126 11:17:30.524483 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" seLinuxMountContext="" Jan 26 11:17:30 crc kubenswrapper[4867]: I0126 11:17:30.524501 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" 
volumeName="kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" seLinuxMountContext="" Jan 26 11:17:30 crc kubenswrapper[4867]: I0126 11:17:30.524522 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" seLinuxMountContext="" Jan 26 11:17:30 crc kubenswrapper[4867]: I0126 11:17:30.524541 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" seLinuxMountContext="" Jan 26 11:17:30 crc kubenswrapper[4867]: I0126 11:17:30.524559 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" seLinuxMountContext="" Jan 26 11:17:30 crc kubenswrapper[4867]: I0126 11:17:30.524576 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" seLinuxMountContext="" Jan 26 11:17:30 crc kubenswrapper[4867]: I0126 11:17:30.524598 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" seLinuxMountContext="" Jan 26 11:17:30 crc kubenswrapper[4867]: I0126 11:17:30.524615 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides" 
seLinuxMountContext="" Jan 26 11:17:30 crc kubenswrapper[4867]: I0126 11:17:30.524632 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" seLinuxMountContext="" Jan 26 11:17:30 crc kubenswrapper[4867]: I0126 11:17:30.524649 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" seLinuxMountContext="" Jan 26 11:17:30 crc kubenswrapper[4867]: I0126 11:17:30.524666 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" seLinuxMountContext="" Jan 26 11:17:30 crc kubenswrapper[4867]: I0126 11:17:30.524686 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" seLinuxMountContext="" Jan 26 11:17:30 crc kubenswrapper[4867]: I0126 11:17:30.524702 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" seLinuxMountContext="" Jan 26 11:17:30 crc kubenswrapper[4867]: I0126 11:17:30.524719 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" seLinuxMountContext="" Jan 26 11:17:30 crc kubenswrapper[4867]: I0126 11:17:30.524738 
4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" seLinuxMountContext="" Jan 26 11:17:30 crc kubenswrapper[4867]: I0126 11:17:30.524759 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" seLinuxMountContext="" Jan 26 11:17:30 crc kubenswrapper[4867]: I0126 11:17:30.524776 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" seLinuxMountContext="" Jan 26 11:17:30 crc kubenswrapper[4867]: I0126 11:17:30.524796 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" seLinuxMountContext="" Jan 26 11:17:30 crc kubenswrapper[4867]: I0126 11:17:30.524813 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls" seLinuxMountContext="" Jan 26 11:17:30 crc kubenswrapper[4867]: I0126 11:17:30.524832 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" seLinuxMountContext="" Jan 26 11:17:30 crc kubenswrapper[4867]: I0126 11:17:30.524885 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" seLinuxMountContext="" Jan 26 11:17:30 crc kubenswrapper[4867]: I0126 11:17:30.524903 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" seLinuxMountContext="" Jan 26 11:17:30 crc kubenswrapper[4867]: I0126 11:17:30.524922 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" seLinuxMountContext="" Jan 26 11:17:30 crc kubenswrapper[4867]: I0126 11:17:30.524938 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" seLinuxMountContext="" Jan 26 11:17:30 crc kubenswrapper[4867]: I0126 11:17:30.524955 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" seLinuxMountContext="" Jan 26 11:17:30 crc kubenswrapper[4867]: I0126 11:17:30.524972 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" seLinuxMountContext="" Jan 26 11:17:30 crc kubenswrapper[4867]: I0126 11:17:30.524995 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" 
volumeName="kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" seLinuxMountContext="" Jan 26 11:17:30 crc kubenswrapper[4867]: I0126 11:17:30.525012 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" seLinuxMountContext="" Jan 26 11:17:30 crc kubenswrapper[4867]: I0126 11:17:30.525027 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" seLinuxMountContext="" Jan 26 11:17:30 crc kubenswrapper[4867]: I0126 11:17:30.525042 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" volumeName="kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" seLinuxMountContext="" Jan 26 11:17:30 crc kubenswrapper[4867]: I0126 11:17:30.525057 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" seLinuxMountContext="" Jan 26 11:17:30 crc kubenswrapper[4867]: I0126 11:17:30.525072 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" seLinuxMountContext="" Jan 26 11:17:30 crc kubenswrapper[4867]: I0126 11:17:30.525088 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" 
volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" seLinuxMountContext="" Jan 26 11:17:30 crc kubenswrapper[4867]: I0126 11:17:30.525103 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" seLinuxMountContext="" Jan 26 11:17:30 crc kubenswrapper[4867]: I0126 11:17:30.525132 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" seLinuxMountContext="" Jan 26 11:17:30 crc kubenswrapper[4867]: I0126 11:17:30.525149 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49ef4625-1d3a-4a9f-b595-c2433d32326d" volumeName="kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" seLinuxMountContext="" Jan 26 11:17:30 crc kubenswrapper[4867]: I0126 11:17:30.525164 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" seLinuxMountContext="" Jan 26 11:17:30 crc kubenswrapper[4867]: I0126 11:17:30.525180 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" seLinuxMountContext="" Jan 26 11:17:30 crc kubenswrapper[4867]: I0126 11:17:30.525196 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" 
volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" seLinuxMountContext="" Jan 26 11:17:30 crc kubenswrapper[4867]: I0126 11:17:30.525211 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf" seLinuxMountContext="" Jan 26 11:17:30 crc kubenswrapper[4867]: I0126 11:17:30.525246 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3b6479f0-333b-4a96-9adf-2099afdc2447" volumeName="kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr" seLinuxMountContext="" Jan 26 11:17:30 crc kubenswrapper[4867]: I0126 11:17:30.525261 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" seLinuxMountContext="" Jan 26 11:17:30 crc kubenswrapper[4867]: I0126 11:17:30.525276 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" seLinuxMountContext="" Jan 26 11:17:30 crc kubenswrapper[4867]: I0126 11:17:30.525292 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" seLinuxMountContext="" Jan 26 11:17:30 crc kubenswrapper[4867]: I0126 11:17:30.525307 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" 
volumeName="kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" seLinuxMountContext="" Jan 26 11:17:30 crc kubenswrapper[4867]: I0126 11:17:30.525322 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" seLinuxMountContext="" Jan 26 11:17:30 crc kubenswrapper[4867]: I0126 11:17:30.525346 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" seLinuxMountContext="" Jan 26 11:17:30 crc kubenswrapper[4867]: I0126 11:17:30.525367 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" seLinuxMountContext="" Jan 26 11:17:30 crc kubenswrapper[4867]: I0126 11:17:30.525381 4867 reconstruct.go:97] "Volume reconstruction finished" Jan 26 11:17:30 crc kubenswrapper[4867]: I0126 11:17:30.525391 4867 reconciler.go:26] "Reconciler: start to sync state" Jan 26 11:17:30 crc kubenswrapper[4867]: I0126 11:17:30.546336 4867 manager.go:324] Recovery completed Jan 26 11:17:30 crc kubenswrapper[4867]: I0126 11:17:30.557583 4867 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 11:17:30 crc kubenswrapper[4867]: I0126 11:17:30.560155 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:17:30 crc kubenswrapper[4867]: I0126 11:17:30.560212 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:17:30 crc kubenswrapper[4867]: I0126 11:17:30.560250 4867 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:17:30 crc kubenswrapper[4867]: I0126 11:17:30.560372 4867 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 26 11:17:30 crc kubenswrapper[4867]: I0126 11:17:30.561277 4867 cpu_manager.go:225] "Starting CPU manager" policy="none" Jan 26 11:17:30 crc kubenswrapper[4867]: I0126 11:17:30.561304 4867 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s" Jan 26 11:17:30 crc kubenswrapper[4867]: I0126 11:17:30.561327 4867 state_mem.go:36] "Initialized new in-memory state store" Jan 26 11:17:30 crc kubenswrapper[4867]: I0126 11:17:30.562451 4867 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jan 26 11:17:30 crc kubenswrapper[4867]: I0126 11:17:30.562493 4867 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 26 11:17:30 crc kubenswrapper[4867]: I0126 11:17:30.562517 4867 kubelet.go:2335] "Starting kubelet main sync loop" Jan 26 11:17:30 crc kubenswrapper[4867]: E0126 11:17:30.562564 4867 kubelet.go:2359] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 26 11:17:30 crc kubenswrapper[4867]: W0126 11:17:30.566349 4867 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.115:6443: connect: connection refused Jan 26 11:17:30 crc kubenswrapper[4867]: E0126 11:17:30.566456 4867 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.115:6443: connect: connection refused" logger="UnhandledError" Jan 26 11:17:30 crc 
kubenswrapper[4867]: I0126 11:17:30.592052 4867 policy_none.go:49] "None policy: Start" Jan 26 11:17:30 crc kubenswrapper[4867]: I0126 11:17:30.593291 4867 memory_manager.go:170] "Starting memorymanager" policy="None" Jan 26 11:17:30 crc kubenswrapper[4867]: I0126 11:17:30.593375 4867 state_mem.go:35] "Initializing new in-memory state store" Jan 26 11:17:30 crc kubenswrapper[4867]: E0126 11:17:30.602377 4867 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Jan 26 11:17:30 crc kubenswrapper[4867]: E0126 11:17:30.662627 4867 kubelet.go:2359] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jan 26 11:17:30 crc kubenswrapper[4867]: I0126 11:17:30.666571 4867 manager.go:334] "Starting Device Plugin manager" Jan 26 11:17:30 crc kubenswrapper[4867]: I0126 11:17:30.666668 4867 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 26 11:17:30 crc kubenswrapper[4867]: I0126 11:17:30.666685 4867 server.go:79] "Starting device plugin registration server" Jan 26 11:17:30 crc kubenswrapper[4867]: I0126 11:17:30.667293 4867 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 26 11:17:30 crc kubenswrapper[4867]: I0126 11:17:30.667317 4867 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 26 11:17:30 crc kubenswrapper[4867]: I0126 11:17:30.667513 4867 plugin_watcher.go:51] "Plugin Watcher Start" path="/var/lib/kubelet/plugins_registry" Jan 26 11:17:30 crc kubenswrapper[4867]: I0126 11:17:30.667614 4867 plugin_manager.go:116] "The desired_state_of_world populator (plugin watcher) starts" Jan 26 11:17:30 crc kubenswrapper[4867]: I0126 11:17:30.667622 4867 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 26 11:17:30 crc kubenswrapper[4867]: E0126 11:17:30.677719 4867 eviction_manager.go:285] "Eviction manager: failed to get 
summary stats" err="failed to get node info: node \"crc\" not found" Jan 26 11:17:30 crc kubenswrapper[4867]: E0126 11:17:30.707778 4867 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.115:6443: connect: connection refused" interval="400ms" Jan 26 11:17:30 crc kubenswrapper[4867]: I0126 11:17:30.767712 4867 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 11:17:30 crc kubenswrapper[4867]: I0126 11:17:30.769271 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:17:30 crc kubenswrapper[4867]: I0126 11:17:30.769329 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:17:30 crc kubenswrapper[4867]: I0126 11:17:30.769342 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:17:30 crc kubenswrapper[4867]: I0126 11:17:30.769376 4867 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 26 11:17:30 crc kubenswrapper[4867]: E0126 11:17:30.769957 4867 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.115:6443: connect: connection refused" node="crc" Jan 26 11:17:30 crc kubenswrapper[4867]: I0126 11:17:30.863289 4867 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-scheduler/openshift-kube-scheduler-crc","openshift-machine-config-operator/kube-rbac-proxy-crio-crc","openshift-etcd/etcd-crc","openshift-kube-apiserver/kube-apiserver-crc","openshift-kube-controller-manager/kube-controller-manager-crc"] Jan 26 11:17:30 crc kubenswrapper[4867]: I0126 11:17:30.863570 4867 kubelet_node_status.go:401] "Setting node annotation to enable volume 
controller attach/detach" Jan 26 11:17:30 crc kubenswrapper[4867]: I0126 11:17:30.864981 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:17:30 crc kubenswrapper[4867]: I0126 11:17:30.865028 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:17:30 crc kubenswrapper[4867]: I0126 11:17:30.865041 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:17:30 crc kubenswrapper[4867]: I0126 11:17:30.865276 4867 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 11:17:30 crc kubenswrapper[4867]: I0126 11:17:30.865560 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 26 11:17:30 crc kubenswrapper[4867]: I0126 11:17:30.865623 4867 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 11:17:30 crc kubenswrapper[4867]: I0126 11:17:30.866560 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:17:30 crc kubenswrapper[4867]: I0126 11:17:30.866604 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:17:30 crc kubenswrapper[4867]: I0126 11:17:30.866618 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:17:30 crc kubenswrapper[4867]: I0126 11:17:30.867174 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:17:30 crc kubenswrapper[4867]: I0126 11:17:30.867203 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:17:30 crc kubenswrapper[4867]: I0126 11:17:30.867237 
4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:17:30 crc kubenswrapper[4867]: I0126 11:17:30.867379 4867 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 11:17:30 crc kubenswrapper[4867]: I0126 11:17:30.867880 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 26 11:17:30 crc kubenswrapper[4867]: I0126 11:17:30.867927 4867 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 11:17:30 crc kubenswrapper[4867]: I0126 11:17:30.868132 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:17:30 crc kubenswrapper[4867]: I0126 11:17:30.868179 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:17:30 crc kubenswrapper[4867]: I0126 11:17:30.868190 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:17:30 crc kubenswrapper[4867]: I0126 11:17:30.868387 4867 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 11:17:30 crc kubenswrapper[4867]: I0126 11:17:30.868531 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd/etcd-crc" Jan 26 11:17:30 crc kubenswrapper[4867]: I0126 11:17:30.868576 4867 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 11:17:30 crc kubenswrapper[4867]: I0126 11:17:30.868699 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:17:30 crc kubenswrapper[4867]: I0126 11:17:30.868728 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:17:30 crc kubenswrapper[4867]: I0126 11:17:30.868737 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:17:30 crc kubenswrapper[4867]: I0126 11:17:30.869242 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:17:30 crc kubenswrapper[4867]: I0126 11:17:30.869270 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:17:30 crc kubenswrapper[4867]: I0126 11:17:30.869278 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:17:30 crc kubenswrapper[4867]: I0126 11:17:30.869371 4867 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 11:17:30 crc kubenswrapper[4867]: I0126 11:17:30.869519 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 11:17:30 crc kubenswrapper[4867]: I0126 11:17:30.869563 4867 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 11:17:30 crc kubenswrapper[4867]: I0126 11:17:30.869992 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:17:30 crc kubenswrapper[4867]: I0126 11:17:30.870016 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:17:30 crc kubenswrapper[4867]: I0126 11:17:30.870027 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:17:30 crc kubenswrapper[4867]: I0126 11:17:30.870051 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:17:30 crc kubenswrapper[4867]: I0126 11:17:30.870069 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:17:30 crc kubenswrapper[4867]: I0126 11:17:30.870078 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:17:30 crc kubenswrapper[4867]: I0126 11:17:30.870257 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 26 11:17:30 crc kubenswrapper[4867]: I0126 11:17:30.870277 4867 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 11:17:30 crc kubenswrapper[4867]: I0126 11:17:30.870356 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:17:30 crc kubenswrapper[4867]: I0126 11:17:30.870376 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:17:30 crc kubenswrapper[4867]: I0126 11:17:30.870390 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:17:30 crc kubenswrapper[4867]: I0126 11:17:30.870828 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:17:30 crc kubenswrapper[4867]: I0126 11:17:30.870843 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:17:30 crc kubenswrapper[4867]: I0126 11:17:30.870850 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:17:30 crc kubenswrapper[4867]: I0126 11:17:30.929385 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 26 11:17:30 crc kubenswrapper[4867]: I0126 11:17:30.929424 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: 
\"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 26 11:17:30 crc kubenswrapper[4867]: I0126 11:17:30.929452 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 26 11:17:30 crc kubenswrapper[4867]: I0126 11:17:30.929477 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 26 11:17:30 crc kubenswrapper[4867]: I0126 11:17:30.929498 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 26 11:17:30 crc kubenswrapper[4867]: I0126 11:17:30.929518 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 26 11:17:30 crc kubenswrapper[4867]: I0126 11:17:30.929537 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: 
\"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 26 11:17:30 crc kubenswrapper[4867]: I0126 11:17:30.929587 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 11:17:30 crc kubenswrapper[4867]: I0126 11:17:30.929622 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 11:17:30 crc kubenswrapper[4867]: I0126 11:17:30.929644 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 26 11:17:30 crc kubenswrapper[4867]: I0126 11:17:30.929667 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 11:17:30 crc kubenswrapper[4867]: I0126 11:17:30.929710 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " 
pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Jan 26 11:17:30 crc kubenswrapper[4867]: I0126 11:17:30.929744 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Jan 26 11:17:30 crc kubenswrapper[4867]: I0126 11:17:30.929762 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Jan 26 11:17:30 crc kubenswrapper[4867]: I0126 11:17:30.929798 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Jan 26 11:17:30 crc kubenswrapper[4867]: I0126 11:17:30.970175 4867 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 26 11:17:30 crc kubenswrapper[4867]: I0126 11:17:30.971183 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 11:17:30 crc kubenswrapper[4867]: I0126 11:17:30.971238 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 11:17:30 crc kubenswrapper[4867]: I0126 11:17:30.971252 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 11:17:30 crc kubenswrapper[4867]: I0126 11:17:30.971281 4867 kubelet_node_status.go:76] "Attempting to register node" node="crc"
Jan 26 11:17:30 crc kubenswrapper[4867]: E0126 11:17:30.971735 4867 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.115:6443: connect: connection refused" node="crc"
Jan 26 11:17:31 crc kubenswrapper[4867]: I0126 11:17:31.031035 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 26 11:17:31 crc kubenswrapper[4867]: I0126 11:17:31.031128 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 26 11:17:31 crc kubenswrapper[4867]: I0126 11:17:31.031164 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 26 11:17:31 crc kubenswrapper[4867]: I0126 11:17:31.031197 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Jan 26 11:17:31 crc kubenswrapper[4867]: I0126 11:17:31.031270 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Jan 26 11:17:31 crc kubenswrapper[4867]: I0126 11:17:31.031298 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 26 11:17:31 crc kubenswrapper[4867]: I0126 11:17:31.031337 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 26 11:17:31 crc kubenswrapper[4867]: I0126 11:17:31.031304 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Jan 26 11:17:31 crc kubenswrapper[4867]: I0126 11:17:31.031287 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 26 11:17:31 crc kubenswrapper[4867]: I0126 11:17:31.031365 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Jan 26 11:17:31 crc kubenswrapper[4867]: I0126 11:17:31.031385 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Jan 26 11:17:31 crc kubenswrapper[4867]: I0126 11:17:31.031414 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Jan 26 11:17:31 crc kubenswrapper[4867]: I0126 11:17:31.031445 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Jan 26 11:17:31 crc kubenswrapper[4867]: I0126 11:17:31.031442 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Jan 26 11:17:31 crc kubenswrapper[4867]: I0126 11:17:31.031491 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 26 11:17:31 crc kubenswrapper[4867]: I0126 11:17:31.031494 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Jan 26 11:17:31 crc kubenswrapper[4867]: I0126 11:17:31.031488 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Jan 26 11:17:31 crc kubenswrapper[4867]: I0126 11:17:31.031467 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 26 11:17:31 crc kubenswrapper[4867]: I0126 11:17:31.031578 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Jan 26 11:17:31 crc kubenswrapper[4867]: I0126 11:17:31.031602 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Jan 26 11:17:31 crc kubenswrapper[4867]: I0126 11:17:31.031621 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Jan 26 11:17:31 crc kubenswrapper[4867]: I0126 11:17:31.031639 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 26 11:17:31 crc kubenswrapper[4867]: I0126 11:17:31.031638 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Jan 26 11:17:31 crc kubenswrapper[4867]: I0126 11:17:31.031660 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Jan 26 11:17:31 crc kubenswrapper[4867]: I0126 11:17:31.031683 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Jan 26 11:17:31 crc kubenswrapper[4867]: I0126 11:17:31.031714 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Jan 26 11:17:31 crc kubenswrapper[4867]: I0126 11:17:31.031730 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 26 11:17:31 crc kubenswrapper[4867]: I0126 11:17:31.031766 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Jan 26 11:17:31 crc kubenswrapper[4867]: I0126 11:17:31.031818 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Jan 26 11:17:31 crc kubenswrapper[4867]: I0126 11:17:31.031905 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Jan 26 11:17:31 crc kubenswrapper[4867]: E0126 11:17:31.109091 4867 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.115:6443: connect: connection refused" interval="800ms"
Jan 26 11:17:31 crc kubenswrapper[4867]: I0126 11:17:31.210572 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Jan 26 11:17:31 crc kubenswrapper[4867]: I0126 11:17:31.228141 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Jan 26 11:17:31 crc kubenswrapper[4867]: W0126 11:17:31.240193 4867 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3dcd261975c3d6b9a6ad6367fd4facd3.slice/crio-ee9c5942e3bae9addf2b32ee9ca0a7d913e572896df85875ecb6870cdc2691a8 WatchSource:0}: Error finding container ee9c5942e3bae9addf2b32ee9ca0a7d913e572896df85875ecb6870cdc2691a8: Status 404 returned error can't find the container with id ee9c5942e3bae9addf2b32ee9ca0a7d913e572896df85875ecb6870cdc2691a8
Jan 26 11:17:31 crc kubenswrapper[4867]: I0126 11:17:31.247002 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc"
Jan 26 11:17:31 crc kubenswrapper[4867]: I0126 11:17:31.251722 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 26 11:17:31 crc kubenswrapper[4867]: I0126 11:17:31.252425 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 26 11:17:31 crc kubenswrapper[4867]: W0126 11:17:31.260829 4867 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd1b160f5dda77d281dd8e69ec8d817f9.slice/crio-889f153b6f2b945832e8189e452902b354f2eb3b65f9f266bb49b40eaf384a63 WatchSource:0}: Error finding container 889f153b6f2b945832e8189e452902b354f2eb3b65f9f266bb49b40eaf384a63: Status 404 returned error can't find the container with id 889f153b6f2b945832e8189e452902b354f2eb3b65f9f266bb49b40eaf384a63
Jan 26 11:17:31 crc kubenswrapper[4867]: W0126 11:17:31.265318 4867 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2139d3e2895fc6797b9c76a1b4c9886d.slice/crio-a9a9ecc01e0027a747b4efced8000ec38ae02bb12167da1349a7899905dccdaf WatchSource:0}: Error finding container a9a9ecc01e0027a747b4efced8000ec38ae02bb12167da1349a7899905dccdaf: Status 404 returned error can't find the container with id a9a9ecc01e0027a747b4efced8000ec38ae02bb12167da1349a7899905dccdaf
Jan 26 11:17:31 crc kubenswrapper[4867]: W0126 11:17:31.319606 4867 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.115:6443: connect: connection refused
Jan 26 11:17:31 crc kubenswrapper[4867]: E0126 11:17:31.319764 4867 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.115:6443: connect: connection refused" logger="UnhandledError"
Jan 26 11:17:31 crc kubenswrapper[4867]: I0126 11:17:31.372134 4867 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 26 11:17:31 crc kubenswrapper[4867]: I0126 11:17:31.374469 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 11:17:31 crc kubenswrapper[4867]: I0126 11:17:31.374518 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 11:17:31 crc kubenswrapper[4867]: I0126 11:17:31.374529 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 11:17:31 crc kubenswrapper[4867]: I0126 11:17:31.374563 4867 kubelet_node_status.go:76] "Attempting to register node" node="crc"
Jan 26 11:17:31 crc kubenswrapper[4867]: E0126 11:17:31.375110 4867 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.115:6443: connect: connection refused" node="crc"
Jan 26 11:17:31 crc kubenswrapper[4867]: I0126 11:17:31.498289 4867 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.115:6443: connect: connection refused
Jan 26 11:17:31 crc kubenswrapper[4867]: I0126 11:17:31.501304 4867 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-27 13:40:55.569005592 +0000 UTC
Jan 26 11:17:31 crc kubenswrapper[4867]: I0126 11:17:31.568621 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"a9a9ecc01e0027a747b4efced8000ec38ae02bb12167da1349a7899905dccdaf"}
Jan 26 11:17:31 crc kubenswrapper[4867]: I0126 11:17:31.570596 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"889f153b6f2b945832e8189e452902b354f2eb3b65f9f266bb49b40eaf384a63"}
Jan 26 11:17:31 crc kubenswrapper[4867]: I0126 11:17:31.572643 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"ee9c5942e3bae9addf2b32ee9ca0a7d913e572896df85875ecb6870cdc2691a8"}
Jan 26 11:17:31 crc kubenswrapper[4867]: I0126 11:17:31.576940 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"6824fbff95d8a53b080e2e0f084567b11a76c137a4cbb02f65f6f2da9a7c3476"}
Jan 26 11:17:31 crc kubenswrapper[4867]: I0126 11:17:31.577781 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"ce570e58dc6e2fe012bd347cda9ae720a9c2a7d0141adbd4e65f67a900ea986b"}
Jan 26 11:17:31 crc kubenswrapper[4867]: W0126 11:17:31.601757 4867 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.115:6443: connect: connection refused
Jan 26 11:17:31 crc kubenswrapper[4867]: E0126 11:17:31.601838 4867 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.115:6443: connect: connection refused" logger="UnhandledError"
Jan 26 11:17:31 crc kubenswrapper[4867]: E0126 11:17:31.704987 4867 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.102.83.115:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.188e43cccd1eb8ec default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-26 11:17:30.496915692 +0000 UTC m=+0.195490602,LastTimestamp:2026-01-26 11:17:30.496915692 +0000 UTC m=+0.195490602,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Jan 26 11:17:31 crc kubenswrapper[4867]: E0126 11:17:31.910366 4867 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.115:6443: connect: connection refused" interval="1.6s"
Jan 26 11:17:32 crc kubenswrapper[4867]: W0126 11:17:32.033590 4867 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.115:6443: connect: connection refused
Jan 26 11:17:32 crc kubenswrapper[4867]: E0126 11:17:32.033710 4867 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.115:6443: connect: connection refused" logger="UnhandledError"
Jan 26 11:17:32 crc kubenswrapper[4867]: W0126 11:17:32.090208 4867 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.115:6443: connect: connection refused
Jan 26 11:17:32 crc kubenswrapper[4867]: E0126 11:17:32.090296 4867 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.115:6443: connect: connection refused" logger="UnhandledError"
Jan 26 11:17:32 crc kubenswrapper[4867]: I0126 11:17:32.175660 4867 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 26 11:17:32 crc kubenswrapper[4867]: I0126 11:17:32.178213 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 11:17:32 crc kubenswrapper[4867]: I0126 11:17:32.178274 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 11:17:32 crc kubenswrapper[4867]: I0126 11:17:32.178288 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 11:17:32 crc kubenswrapper[4867]: I0126 11:17:32.178319 4867 kubelet_node_status.go:76] "Attempting to register node" node="crc"
Jan 26 11:17:32 crc kubenswrapper[4867]: E0126 11:17:32.178835 4867 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.115:6443: connect: connection refused" node="crc"
Jan 26 11:17:32 crc kubenswrapper[4867]: I0126 11:17:32.499108 4867 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.115:6443: connect: connection refused
Jan 26 11:17:32 crc kubenswrapper[4867]: I0126 11:17:32.502162 4867 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-19 07:57:52.956123499 +0000 UTC
Jan 26 11:17:32 crc kubenswrapper[4867]: I0126 11:17:32.583354 4867 generic.go:334] "Generic (PLEG): container finished" podID="d1b160f5dda77d281dd8e69ec8d817f9" containerID="c958441373fca0b105ec5f119f7a2aca7557f58d49ccba356f824708b3602e3c" exitCode=0
Jan 26 11:17:32 crc kubenswrapper[4867]: I0126 11:17:32.583451 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerDied","Data":"c958441373fca0b105ec5f119f7a2aca7557f58d49ccba356f824708b3602e3c"}
Jan 26 11:17:32 crc kubenswrapper[4867]: I0126 11:17:32.583535 4867 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 26 11:17:32 crc kubenswrapper[4867]: I0126 11:17:32.585366 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 11:17:32 crc kubenswrapper[4867]: I0126 11:17:32.585419 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 11:17:32 crc kubenswrapper[4867]: I0126 11:17:32.585483 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 11:17:32 crc kubenswrapper[4867]: I0126 11:17:32.585817 4867 generic.go:334] "Generic (PLEG): container finished" podID="3dcd261975c3d6b9a6ad6367fd4facd3" containerID="0afa38d4a8ffa664649d154240a8d74ee09bc074127e5edea85ec1de553723fe" exitCode=0
Jan 26 11:17:32 crc kubenswrapper[4867]: I0126 11:17:32.585901 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerDied","Data":"0afa38d4a8ffa664649d154240a8d74ee09bc074127e5edea85ec1de553723fe"}
Jan 26 11:17:32 crc kubenswrapper[4867]: I0126 11:17:32.585921 4867 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 26 11:17:32 crc kubenswrapper[4867]: I0126 11:17:32.587162 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 11:17:32 crc kubenswrapper[4867]: I0126 11:17:32.587194 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 11:17:32 crc kubenswrapper[4867]: I0126 11:17:32.587202 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 11:17:32 crc kubenswrapper[4867]: I0126 11:17:32.596960 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"cc2ddda5781094e59bb54ddf724fa4f948a047b17b01f0185ef651d90a66f36f"}
Jan 26 11:17:32 crc kubenswrapper[4867]: I0126 11:17:32.597024 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"45d3dc673a8d20531ae5077a1196a368cac64f6afd15c4eb8f3910ee9bf51317"}
Jan 26 11:17:32 crc kubenswrapper[4867]: I0126 11:17:32.597044 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"237a29b411629c3650250f62fdec7d2c5412763d6cabb2ee0fe8e5e19e320e15"}
Jan 26 11:17:32 crc kubenswrapper[4867]: I0126 11:17:32.597055 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"7ef0db1149c70e41b7f44d00d43f883d14fcf635d6f34462dd37c8a550a15a4a"}
Jan 26 11:17:32 crc kubenswrapper[4867]: I0126 11:17:32.597173 4867 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 26 11:17:32 crc kubenswrapper[4867]: I0126 11:17:32.598076 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 11:17:32 crc kubenswrapper[4867]: I0126 11:17:32.598109 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 11:17:32 crc kubenswrapper[4867]: I0126 11:17:32.598121 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 11:17:32 crc kubenswrapper[4867]: I0126 11:17:32.602358 4867 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="b69d7e45a6059b1e01f80cc4efb536069c91cae05c8912d18fd48f25c6ebf51e" exitCode=0
Jan 26 11:17:32 crc kubenswrapper[4867]: I0126 11:17:32.602450 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"b69d7e45a6059b1e01f80cc4efb536069c91cae05c8912d18fd48f25c6ebf51e"}
Jan 26 11:17:32 crc kubenswrapper[4867]: I0126 11:17:32.602505 4867 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 26 11:17:32 crc kubenswrapper[4867]: I0126 11:17:32.603321 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 11:17:32 crc kubenswrapper[4867]: I0126 11:17:32.603357 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 11:17:32 crc kubenswrapper[4867]: I0126 11:17:32.603367 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 11:17:32 crc kubenswrapper[4867]: I0126 11:17:32.604155 4867 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="0e22739a3c06bddfda493f25aca23cf47d71c8bde27a6b4bb505592b42350752" exitCode=0
Jan 26 11:17:32 crc kubenswrapper[4867]: I0126 11:17:32.604211 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"0e22739a3c06bddfda493f25aca23cf47d71c8bde27a6b4bb505592b42350752"}
Jan 26 11:17:32 crc kubenswrapper[4867]: I0126 11:17:32.604335 4867 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 26 11:17:32 crc kubenswrapper[4867]: I0126 11:17:32.604876 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 11:17:32 crc kubenswrapper[4867]: I0126 11:17:32.604896 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 11:17:32 crc kubenswrapper[4867]: I0126 11:17:32.604907 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 11:17:32 crc kubenswrapper[4867]: I0126 11:17:32.605063 4867 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 26 11:17:32 crc kubenswrapper[4867]: I0126 11:17:32.605663 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 11:17:32 crc kubenswrapper[4867]: I0126 11:17:32.605679 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 11:17:32 crc kubenswrapper[4867]: I0126 11:17:32.605688 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 11:17:32 crc kubenswrapper[4867]: I0126 11:17:32.626526 4867 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates
Jan 26 11:17:32 crc kubenswrapper[4867]: E0126 11:17:32.627746 4867 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.102.83.115:6443: connect: connection refused" logger="UnhandledError"
Jan 26 11:17:33 crc kubenswrapper[4867]: I0126 11:17:33.505109 4867 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-04 05:35:47.615834478 +0000 UTC
Jan 26 11:17:33 crc kubenswrapper[4867]: I0126 11:17:33.609889 4867 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="80b44caec3547caebca126742ca337ad1851edc3e9474127528a9e5184b5ec23" exitCode=0
Jan 26 11:17:33 crc kubenswrapper[4867]: I0126 11:17:33.609965 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"80b44caec3547caebca126742ca337ad1851edc3e9474127528a9e5184b5ec23"}
Jan 26 11:17:33 crc kubenswrapper[4867]: I0126 11:17:33.610033 4867 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 26 11:17:33 crc kubenswrapper[4867]: I0126 11:17:33.610977 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 11:17:33 crc kubenswrapper[4867]: I0126 11:17:33.611015 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 11:17:33 crc kubenswrapper[4867]: I0126 11:17:33.611027 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 11:17:33 crc kubenswrapper[4867]: I0126 11:17:33.611313 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"d8ca8a196d11248401898d6c6591931638d1ebf8675414d0e588454dbc1da626"}
Jan 26 11:17:33 crc kubenswrapper[4867]: I0126 11:17:33.611399 4867 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 26 11:17:33 crc kubenswrapper[4867]: I0126 11:17:33.612389 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 11:17:33 crc kubenswrapper[4867]: I0126 11:17:33.612414 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 11:17:33 crc kubenswrapper[4867]: I0126 11:17:33.612422 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 11:17:33 crc kubenswrapper[4867]: I0126 11:17:33.619776 4867 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 26 11:17:33 crc kubenswrapper[4867]: I0126 11:17:33.619769 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"8ef4b66f160065f550844a298bf29dcd3f12879c1312554968eba3c1b2268303"}
Jan 26 11:17:33 crc kubenswrapper[4867]: I0126 11:17:33.619836 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"21a6c8852b9648bd5bee43aee9c3fae16363aeaf1ad05dcaa41a04775784b108"}
Jan 26 11:17:33 crc kubenswrapper[4867]: I0126 11:17:33.619850 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"2140f6b328ef3b937ef0009c1ce35265a18b51c6efb7e8785870affecd68dace"}
Jan 26 11:17:33 crc kubenswrapper[4867]: I0126 11:17:33.620838 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 11:17:33 crc kubenswrapper[4867]: I0126 11:17:33.620876 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 11:17:33 crc kubenswrapper[4867]: I0126 11:17:33.620886 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 11:17:33 crc kubenswrapper[4867]: I0126 11:17:33.627441 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"0d41fab03911c1d3b84f4c34da0d1c8e82c8f2691450b5f9663e31ce329bf04b"}
Jan 26 11:17:33 crc kubenswrapper[4867]: I0126 11:17:33.627515 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"e7960050fb97fd034b58d207bdcc7aa794be098102ebc9b50f3b011c2079a292"}
Jan 26 11:17:33 crc kubenswrapper[4867]: I0126 11:17:33.627531 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"f8e9df6d5bbbaa36f6ffe9e36239da17b2a7d5d85edebc9f108ede9bf7b7c0a2"}
Jan 26 11:17:33 crc kubenswrapper[4867]: I0126 11:17:33.627543 4867 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 26 11:17:33 crc kubenswrapper[4867]: I0126 11:17:33.627544 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"60d611d38d0a8a4fafcb8c6a4598878531015a3c9a31bee179693c997be0f5ec"}
Jan 26 11:17:33 crc kubenswrapper[4867]: I0126 11:17:33.628536 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 11:17:33 crc kubenswrapper[4867]: I0126 11:17:33.628581 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 11:17:33 crc kubenswrapper[4867]: I0126 11:17:33.628591 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 11:17:33 crc kubenswrapper[4867]: I0126 11:17:33.779773 4867 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 26 11:17:33 crc kubenswrapper[4867]: I0126 11:17:33.781966 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 11:17:33 crc kubenswrapper[4867]: I0126 11:17:33.782030 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 11:17:33 crc kubenswrapper[4867]: I0126 11:17:33.782044 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 11:17:33 crc kubenswrapper[4867]: I0126 11:17:33.782075 4867 kubelet_node_status.go:76] "Attempting to register node" node="crc"
Jan 26 11:17:34 crc kubenswrapper[4867]: I0126 11:17:34.505344 4867 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-01 13:59:41.575828022 +0000 UTC
Jan 26 11:17:34 crc kubenswrapper[4867]: I0126 11:17:34.635066 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc"
event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"4d1617eeb2f19df288b984f761116ae902263d7ccf8406998d0e323fb7d52390"} Jan 26 11:17:34 crc kubenswrapper[4867]: I0126 11:17:34.635253 4867 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 11:17:34 crc kubenswrapper[4867]: I0126 11:17:34.636477 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:17:34 crc kubenswrapper[4867]: I0126 11:17:34.636511 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:17:34 crc kubenswrapper[4867]: I0126 11:17:34.636520 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:17:34 crc kubenswrapper[4867]: I0126 11:17:34.637140 4867 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="83f7cf77acd26e7af095c4a5432efde85eef6dd5e434141cb5d6abd12770968e" exitCode=0 Jan 26 11:17:34 crc kubenswrapper[4867]: I0126 11:17:34.637235 4867 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 11:17:34 crc kubenswrapper[4867]: I0126 11:17:34.637316 4867 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 26 11:17:34 crc kubenswrapper[4867]: I0126 11:17:34.637374 4867 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 11:17:34 crc kubenswrapper[4867]: I0126 11:17:34.637588 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"83f7cf77acd26e7af095c4a5432efde85eef6dd5e434141cb5d6abd12770968e"} Jan 26 11:17:34 crc kubenswrapper[4867]: I0126 11:17:34.637655 4867 kubelet_node_status.go:401] "Setting node annotation to enable volume controller 
attach/detach" Jan 26 11:17:34 crc kubenswrapper[4867]: I0126 11:17:34.638247 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:17:34 crc kubenswrapper[4867]: I0126 11:17:34.638324 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:17:34 crc kubenswrapper[4867]: I0126 11:17:34.638347 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:17:34 crc kubenswrapper[4867]: I0126 11:17:34.638400 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:17:34 crc kubenswrapper[4867]: I0126 11:17:34.638422 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:17:34 crc kubenswrapper[4867]: I0126 11:17:34.638433 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:17:34 crc kubenswrapper[4867]: I0126 11:17:34.638540 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:17:34 crc kubenswrapper[4867]: I0126 11:17:34.638582 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:17:34 crc kubenswrapper[4867]: I0126 11:17:34.638597 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:17:35 crc kubenswrapper[4867]: I0126 11:17:35.060478 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 26 11:17:35 crc kubenswrapper[4867]: I0126 11:17:35.060701 4867 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 11:17:35 crc kubenswrapper[4867]: I0126 11:17:35.062484 4867 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:17:35 crc kubenswrapper[4867]: I0126 11:17:35.062571 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:17:35 crc kubenswrapper[4867]: I0126 11:17:35.062592 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:17:35 crc kubenswrapper[4867]: I0126 11:17:35.506335 4867 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-24 16:15:25.253465798 +0000 UTC Jan 26 11:17:35 crc kubenswrapper[4867]: I0126 11:17:35.513708 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 11:17:35 crc kubenswrapper[4867]: I0126 11:17:35.645858 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"b260f2517962ffaecebf86c2b72c91486b825ca898f907378e8f8cea16d1db92"} Jan 26 11:17:35 crc kubenswrapper[4867]: I0126 11:17:35.645941 4867 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 26 11:17:35 crc kubenswrapper[4867]: I0126 11:17:35.645951 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"fc9e90f1b0e6c09e4595a7e450325550012dc52c6a6fc4a95d14b37c0f939ce7"} Jan 26 11:17:35 crc kubenswrapper[4867]: I0126 11:17:35.645999 4867 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 11:17:35 crc kubenswrapper[4867]: I0126 11:17:35.646993 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:17:35 crc kubenswrapper[4867]: 
I0126 11:17:35.647038 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:17:35 crc kubenswrapper[4867]: I0126 11:17:35.647051 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:17:35 crc kubenswrapper[4867]: I0126 11:17:35.681687 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 26 11:17:35 crc kubenswrapper[4867]: I0126 11:17:35.681925 4867 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 11:17:35 crc kubenswrapper[4867]: I0126 11:17:35.683316 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:17:35 crc kubenswrapper[4867]: I0126 11:17:35.683370 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:17:35 crc kubenswrapper[4867]: I0126 11:17:35.683382 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:17:36 crc kubenswrapper[4867]: I0126 11:17:36.506782 4867 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-09 06:40:40.8013857 +0000 UTC Jan 26 11:17:36 crc kubenswrapper[4867]: I0126 11:17:36.657197 4867 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 11:17:36 crc kubenswrapper[4867]: I0126 11:17:36.657210 4867 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 26 11:17:36 crc kubenswrapper[4867]: I0126 11:17:36.657211 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" 
event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"8e941db67a8021f5eb4eb6e395c2369d052a4cde25d67eb1885768a064185d8a"} Jan 26 11:17:36 crc kubenswrapper[4867]: I0126 11:17:36.657306 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"d2aba77c7c6a2f5450fa60356c3eccf816726f0074d65a1d5805c4278d31157c"} Jan 26 11:17:36 crc kubenswrapper[4867]: I0126 11:17:36.657321 4867 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 11:17:36 crc kubenswrapper[4867]: I0126 11:17:36.657328 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"6843bd212ce4f17d2ae4d53441d56f7be28f8f49c8994458d611159101e193a0"} Jan 26 11:17:36 crc kubenswrapper[4867]: I0126 11:17:36.658457 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:17:36 crc kubenswrapper[4867]: I0126 11:17:36.658493 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:17:36 crc kubenswrapper[4867]: I0126 11:17:36.658510 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:17:36 crc kubenswrapper[4867]: I0126 11:17:36.658459 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:17:36 crc kubenswrapper[4867]: I0126 11:17:36.658698 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:17:36 crc kubenswrapper[4867]: I0126 11:17:36.658752 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:17:36 crc kubenswrapper[4867]: I0126 
11:17:36.686545 4867 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Jan 26 11:17:37 crc kubenswrapper[4867]: I0126 11:17:37.131986 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-etcd/etcd-crc" Jan 26 11:17:37 crc kubenswrapper[4867]: I0126 11:17:37.507816 4867 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-14 12:32:39.636818547 +0000 UTC Jan 26 11:17:37 crc kubenswrapper[4867]: I0126 11:17:37.660459 4867 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 11:17:37 crc kubenswrapper[4867]: I0126 11:17:37.662350 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:17:37 crc kubenswrapper[4867]: I0126 11:17:37.662401 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:17:37 crc kubenswrapper[4867]: I0126 11:17:37.662417 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:17:38 crc kubenswrapper[4867]: I0126 11:17:38.145312 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 26 11:17:38 crc kubenswrapper[4867]: I0126 11:17:38.145651 4867 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 11:17:38 crc kubenswrapper[4867]: I0126 11:17:38.147128 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:17:38 crc kubenswrapper[4867]: I0126 11:17:38.147172 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:17:38 crc kubenswrapper[4867]: I0126 11:17:38.147188 4867 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:17:38 crc kubenswrapper[4867]: I0126 11:17:38.152234 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 26 11:17:38 crc kubenswrapper[4867]: I0126 11:17:38.508922 4867 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-18 00:51:49.06960895 +0000 UTC Jan 26 11:17:38 crc kubenswrapper[4867]: I0126 11:17:38.664787 4867 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 11:17:38 crc kubenswrapper[4867]: I0126 11:17:38.664790 4867 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 11:17:38 crc kubenswrapper[4867]: I0126 11:17:38.666275 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:17:38 crc kubenswrapper[4867]: I0126 11:17:38.666329 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:17:38 crc kubenswrapper[4867]: I0126 11:17:38.666346 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:17:38 crc kubenswrapper[4867]: I0126 11:17:38.666285 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:17:38 crc kubenswrapper[4867]: I0126 11:17:38.666395 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:17:38 crc kubenswrapper[4867]: I0126 11:17:38.666412 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:17:38 crc kubenswrapper[4867]: I0126 11:17:38.984396 4867 kubelet.go:2542] 
"SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 11:17:38 crc kubenswrapper[4867]: I0126 11:17:38.984700 4867 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 26 11:17:38 crc kubenswrapper[4867]: I0126 11:17:38.984784 4867 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 11:17:38 crc kubenswrapper[4867]: I0126 11:17:38.987304 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:17:38 crc kubenswrapper[4867]: I0126 11:17:38.987364 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:17:38 crc kubenswrapper[4867]: I0126 11:17:38.987378 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:17:39 crc kubenswrapper[4867]: I0126 11:17:39.031067 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 11:17:39 crc kubenswrapper[4867]: I0126 11:17:39.057499 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 26 11:17:39 crc kubenswrapper[4867]: I0126 11:17:39.421985 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 26 11:17:39 crc kubenswrapper[4867]: I0126 11:17:39.422411 4867 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 11:17:39 crc kubenswrapper[4867]: I0126 11:17:39.425094 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:17:39 crc kubenswrapper[4867]: I0126 11:17:39.425168 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 26 11:17:39 crc kubenswrapper[4867]: I0126 11:17:39.425183 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:17:39 crc kubenswrapper[4867]: I0126 11:17:39.509528 4867 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-09 03:53:43.867135268 +0000 UTC Jan 26 11:17:39 crc kubenswrapper[4867]: I0126 11:17:39.669341 4867 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 11:17:39 crc kubenswrapper[4867]: I0126 11:17:39.669431 4867 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 11:17:39 crc kubenswrapper[4867]: I0126 11:17:39.671349 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:17:39 crc kubenswrapper[4867]: I0126 11:17:39.671412 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:17:39 crc kubenswrapper[4867]: I0126 11:17:39.671437 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:17:39 crc kubenswrapper[4867]: I0126 11:17:39.671989 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:17:39 crc kubenswrapper[4867]: I0126 11:17:39.672046 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:17:39 crc kubenswrapper[4867]: I0126 11:17:39.672068 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:17:40 crc kubenswrapper[4867]: I0126 11:17:40.509859 4867 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, 
rotation deadline is 2026-01-02 00:59:46.680225731 +0000 UTC Jan 26 11:17:40 crc kubenswrapper[4867]: E0126 11:17:40.677882 4867 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Jan 26 11:17:41 crc kubenswrapper[4867]: I0126 11:17:41.269336 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-etcd/etcd-crc" Jan 26 11:17:41 crc kubenswrapper[4867]: I0126 11:17:41.269540 4867 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 11:17:41 crc kubenswrapper[4867]: I0126 11:17:41.270778 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:17:41 crc kubenswrapper[4867]: I0126 11:17:41.270804 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:17:41 crc kubenswrapper[4867]: I0126 11:17:41.270813 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:17:41 crc kubenswrapper[4867]: I0126 11:17:41.510149 4867 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-04 17:31:13.828893143 +0000 UTC Jan 26 11:17:42 crc kubenswrapper[4867]: I0126 11:17:42.057831 4867 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 26 11:17:42 crc kubenswrapper[4867]: I0126 11:17:42.057971 4867 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" 
containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 26 11:17:42 crc kubenswrapper[4867]: I0126 11:17:42.510325 4867 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-18 20:01:50.652865977 +0000 UTC Jan 26 11:17:43 crc kubenswrapper[4867]: I0126 11:17:43.498542 4867 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": net/http: TLS handshake timeout Jan 26 11:17:43 crc kubenswrapper[4867]: I0126 11:17:43.512364 4867 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-12 02:08:00.014236671 +0000 UTC Jan 26 11:17:43 crc kubenswrapper[4867]: E0126 11:17:43.512440 4867 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" interval="3.2s" Jan 26 11:17:43 crc kubenswrapper[4867]: E0126 11:17:43.788060 4867 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": net/http: TLS handshake timeout" node="crc" Jan 26 11:17:44 crc kubenswrapper[4867]: W0126 11:17:44.173441 4867 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": net/http: TLS handshake timeout Jan 26 11:17:44 crc kubenswrapper[4867]: I0126 11:17:44.173616 4867 trace.go:236] Trace[1426094547]: "Reflector ListAndWatch" 
name:k8s.io/client-go/informers/factory.go:160 (26-Jan-2026 11:17:34.171) (total time: 10002ms): Jan 26 11:17:44 crc kubenswrapper[4867]: Trace[1426094547]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (11:17:44.173) Jan 26 11:17:44 crc kubenswrapper[4867]: Trace[1426094547]: [10.002121834s] [10.002121834s] END Jan 26 11:17:44 crc kubenswrapper[4867]: E0126 11:17:44.173657 4867 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" Jan 26 11:17:44 crc kubenswrapper[4867]: W0126 11:17:44.249666 4867 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": net/http: TLS handshake timeout Jan 26 11:17:44 crc kubenswrapper[4867]: I0126 11:17:44.249784 4867 trace.go:236] Trace[1884777489]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (26-Jan-2026 11:17:34.247) (total time: 10002ms): Jan 26 11:17:44 crc kubenswrapper[4867]: Trace[1884777489]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (11:17:44.249) Jan 26 11:17:44 crc kubenswrapper[4867]: Trace[1884777489]: [10.002031405s] [10.002031405s] END Jan 26 11:17:44 crc kubenswrapper[4867]: E0126 11:17:44.249808 4867 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get 
\"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" Jan 26 11:17:44 crc kubenswrapper[4867]: I0126 11:17:44.512679 4867 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-05 20:33:21.743312464 +0000 UTC Jan 26 11:17:44 crc kubenswrapper[4867]: W0126 11:17:44.763661 4867 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": net/http: TLS handshake timeout Jan 26 11:17:44 crc kubenswrapper[4867]: I0126 11:17:44.763801 4867 trace.go:236] Trace[1503660477]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (26-Jan-2026 11:17:34.762) (total time: 10001ms): Jan 26 11:17:44 crc kubenswrapper[4867]: Trace[1503660477]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (11:17:44.763) Jan 26 11:17:44 crc kubenswrapper[4867]: Trace[1503660477]: [10.001355988s] [10.001355988s] END Jan 26 11:17:44 crc kubenswrapper[4867]: E0126 11:17:44.763833 4867 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" Jan 26 11:17:45 crc kubenswrapper[4867]: I0126 11:17:45.066212 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 26 11:17:45 crc kubenswrapper[4867]: I0126 11:17:45.066431 4867 kubelet_node_status.go:401] "Setting node annotation to enable volume 
controller attach/detach" Jan 26 11:17:45 crc kubenswrapper[4867]: I0126 11:17:45.067678 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:17:45 crc kubenswrapper[4867]: I0126 11:17:45.067725 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:17:45 crc kubenswrapper[4867]: I0126 11:17:45.067737 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:17:45 crc kubenswrapper[4867]: W0126 11:17:45.150733 4867 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": net/http: TLS handshake timeout Jan 26 11:17:45 crc kubenswrapper[4867]: I0126 11:17:45.150825 4867 trace.go:236] Trace[1935957195]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (26-Jan-2026 11:17:35.149) (total time: 10001ms): Jan 26 11:17:45 crc kubenswrapper[4867]: Trace[1935957195]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (11:17:45.150) Jan 26 11:17:45 crc kubenswrapper[4867]: Trace[1935957195]: [10.00168856s] [10.00168856s] END Jan 26 11:17:45 crc kubenswrapper[4867]: E0126 11:17:45.150847 4867 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" Jan 26 11:17:45 crc kubenswrapper[4867]: I0126 11:17:45.284958 4867 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure 
output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Jan 26 11:17:45 crc kubenswrapper[4867]: I0126 11:17:45.285045 4867 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Jan 26 11:17:45 crc kubenswrapper[4867]: I0126 11:17:45.288963 4867 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\": RBAC: [clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:openshift:public-info-viewer\" not found]","reason":"Forbidden","details":{},"code":403} Jan 26 11:17:45 crc kubenswrapper[4867]: I0126 11:17:45.289040 4867 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Jan 26 11:17:45 crc kubenswrapper[4867]: I0126 11:17:45.512839 4867 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-31 08:20:59.68753776 +0000 UTC Jan 26 11:17:46 crc kubenswrapper[4867]: I0126 11:17:46.514671 4867 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-16 16:04:00.702327939 +0000 
UTC Jan 26 11:17:46 crc kubenswrapper[4867]: I0126 11:17:46.988535 4867 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 11:17:46 crc kubenswrapper[4867]: I0126 11:17:46.990305 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:17:46 crc kubenswrapper[4867]: I0126 11:17:46.990403 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:17:46 crc kubenswrapper[4867]: I0126 11:17:46.990423 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:17:46 crc kubenswrapper[4867]: I0126 11:17:46.990465 4867 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 26 11:17:46 crc kubenswrapper[4867]: E0126 11:17:46.994621 4867 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes \"crc\" is forbidden: autoscaling.openshift.io/ManagedNode infra config cache not synchronized" node="crc" Jan 26 11:17:47 crc kubenswrapper[4867]: I0126 11:17:47.515444 4867 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-22 18:20:12.248036154 +0000 UTC Jan 26 11:17:47 crc kubenswrapper[4867]: I0126 11:17:47.825093 4867 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Jan 26 11:17:48 crc kubenswrapper[4867]: I0126 11:17:48.501856 4867 apiserver.go:52] "Watching apiserver" Jan 26 11:17:48 crc kubenswrapper[4867]: I0126 11:17:48.506288 4867 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Jan 26 11:17:48 crc kubenswrapper[4867]: I0126 11:17:48.506693 4867 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-network-console/networking-console-plugin-85b44fc459-gdk6g","openshift-network-diagnostics/network-check-source-55646444c4-trplf","openshift-network-diagnostics/network-check-target-xd92c","openshift-network-node-identity/network-node-identity-vrzqb","openshift-network-operator/iptables-alerter-4ln5h","openshift-network-operator/network-operator-58b4c7f79c-55gtf"] Jan 26 11:17:48 crc kubenswrapper[4867]: I0126 11:17:48.507192 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 26 11:17:48 crc kubenswrapper[4867]: I0126 11:17:48.507266 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 11:17:48 crc kubenswrapper[4867]: I0126 11:17:48.507306 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 11:17:48 crc kubenswrapper[4867]: E0126 11:17:48.507514 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 11:17:48 crc kubenswrapper[4867]: E0126 11:17:48.507686 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 11:17:48 crc kubenswrapper[4867]: I0126 11:17:48.508004 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 11:17:48 crc kubenswrapper[4867]: E0126 11:17:48.508105 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 11:17:48 crc kubenswrapper[4867]: I0126 11:17:48.508176 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 26 11:17:48 crc kubenswrapper[4867]: I0126 11:17:48.508543 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 26 11:17:48 crc kubenswrapper[4867]: I0126 11:17:48.509290 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Jan 26 11:17:48 crc kubenswrapper[4867]: I0126 11:17:48.509656 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Jan 26 11:17:48 crc kubenswrapper[4867]: I0126 11:17:48.510203 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Jan 26 11:17:48 crc kubenswrapper[4867]: I0126 11:17:48.511356 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Jan 26 11:17:48 crc kubenswrapper[4867]: I0126 11:17:48.512088 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Jan 26 11:17:48 crc kubenswrapper[4867]: I0126 11:17:48.512263 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Jan 26 11:17:48 crc kubenswrapper[4867]: I0126 11:17:48.512416 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Jan 26 11:17:48 crc kubenswrapper[4867]: I0126 11:17:48.512431 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Jan 26 11:17:48 crc kubenswrapper[4867]: I0126 11:17:48.514327 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Jan 26 11:17:48 crc kubenswrapper[4867]: I0126 11:17:48.515738 4867 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-09 
10:53:41.388772314 +0000 UTC Jan 26 11:17:48 crc kubenswrapper[4867]: I0126 11:17:48.541887 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:48Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 11:17:48 crc kubenswrapper[4867]: I0126 11:17:48.555765 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:48Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 11:17:48 crc kubenswrapper[4867]: I0126 11:17:48.571708 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:48Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 11:17:48 crc kubenswrapper[4867]: I0126 11:17:48.590034 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:48Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 11:17:48 crc kubenswrapper[4867]: I0126 11:17:48.603040 4867 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Jan 26 11:17:48 crc kubenswrapper[4867]: I0126 11:17:48.605085 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:48Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 11:17:48 crc kubenswrapper[4867]: I0126 11:17:48.620298 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:48Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 11:17:48 crc kubenswrapper[4867]: I0126 11:17:48.632615 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:48Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: 
connect: connection refused" Jan 26 11:17:48 crc kubenswrapper[4867]: I0126 11:17:48.760940 4867 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Jan 26 11:17:49 crc kubenswrapper[4867]: I0126 11:17:49.303851 4867 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Jan 26 11:17:49 crc kubenswrapper[4867]: [+]log ok Jan 26 11:17:49 crc kubenswrapper[4867]: [-]etcd failed: reason withheld Jan 26 11:17:49 crc kubenswrapper[4867]: [+]poststarthook/start-apiserver-admission-initializer ok Jan 26 11:17:49 crc kubenswrapper[4867]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok Jan 26 11:17:49 crc kubenswrapper[4867]: [+]poststarthook/openshift.io-api-request-count-filter ok Jan 26 11:17:49 crc kubenswrapper[4867]: [+]poststarthook/openshift.io-startkubeinformers ok Jan 26 11:17:49 crc kubenswrapper[4867]: [+]poststarthook/openshift.io-openshift-apiserver-reachable ok Jan 26 11:17:49 crc kubenswrapper[4867]: [+]poststarthook/openshift.io-oauth-apiserver-reachable ok Jan 26 11:17:49 crc kubenswrapper[4867]: [+]poststarthook/generic-apiserver-start-informers ok Jan 26 11:17:49 crc kubenswrapper[4867]: [+]poststarthook/priority-and-fairness-config-consumer ok Jan 26 11:17:49 crc kubenswrapper[4867]: [+]poststarthook/priority-and-fairness-filter ok Jan 26 11:17:49 crc kubenswrapper[4867]: [+]poststarthook/storage-object-count-tracker-hook ok Jan 26 11:17:49 crc kubenswrapper[4867]: [+]poststarthook/start-apiextensions-informers ok Jan 26 11:17:49 crc kubenswrapper[4867]: [+]poststarthook/start-apiextensions-controllers ok Jan 26 11:17:49 crc kubenswrapper[4867]: [+]poststarthook/crd-informer-synced ok Jan 26 11:17:49 crc kubenswrapper[4867]: [+]poststarthook/start-system-namespaces-controller ok Jan 26 11:17:49 crc kubenswrapper[4867]: 
[+]poststarthook/start-cluster-authentication-info-controller ok Jan 26 11:17:49 crc kubenswrapper[4867]: [+]poststarthook/start-kube-apiserver-identity-lease-controller ok Jan 26 11:17:49 crc kubenswrapper[4867]: [+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok Jan 26 11:17:49 crc kubenswrapper[4867]: [+]poststarthook/start-legacy-token-tracking-controller ok Jan 26 11:17:49 crc kubenswrapper[4867]: [+]poststarthook/start-service-ip-repair-controllers ok Jan 26 11:17:49 crc kubenswrapper[4867]: [+]poststarthook/rbac/bootstrap-roles ok Jan 26 11:17:49 crc kubenswrapper[4867]: [+]poststarthook/scheduling/bootstrap-system-priority-classes ok Jan 26 11:17:49 crc kubenswrapper[4867]: [+]poststarthook/priority-and-fairness-config-producer ok Jan 26 11:17:49 crc kubenswrapper[4867]: [+]poststarthook/bootstrap-controller ok Jan 26 11:17:49 crc kubenswrapper[4867]: [+]poststarthook/aggregator-reload-proxy-client-cert ok Jan 26 11:17:49 crc kubenswrapper[4867]: [+]poststarthook/start-kube-aggregator-informers ok Jan 26 11:17:49 crc kubenswrapper[4867]: [+]poststarthook/apiservice-status-local-available-controller ok Jan 26 11:17:49 crc kubenswrapper[4867]: [+]poststarthook/apiservice-status-remote-available-controller ok Jan 26 11:17:49 crc kubenswrapper[4867]: [+]poststarthook/apiservice-registration-controller ok Jan 26 11:17:49 crc kubenswrapper[4867]: [+]poststarthook/apiservice-wait-for-first-sync ok Jan 26 11:17:49 crc kubenswrapper[4867]: [+]poststarthook/apiservice-discovery-controller ok Jan 26 11:17:49 crc kubenswrapper[4867]: [+]poststarthook/kube-apiserver-autoregistration ok Jan 26 11:17:49 crc kubenswrapper[4867]: [+]autoregister-completion ok Jan 26 11:17:49 crc kubenswrapper[4867]: [+]poststarthook/apiservice-openapi-controller ok Jan 26 11:17:49 crc kubenswrapper[4867]: [+]poststarthook/apiservice-openapiv3-controller ok Jan 26 11:17:49 crc kubenswrapper[4867]: livez check failed Jan 26 11:17:49 crc kubenswrapper[4867]: I0126 
11:17:49.303951 4867 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 26 11:17:49 crc kubenswrapper[4867]: I0126 11:17:49.516669 4867 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-17 13:12:17.201201068 +0000 UTC
Jan 26 11:17:49 crc kubenswrapper[4867]: I0126 11:17:49.734783 4867 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160
Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.279634 4867 reconstruct.go:205] "DevicePaths of reconstructed volumes updated"
Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.286138 4867 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146
Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.310818 4867 csr.go:261] certificate signing request csr-8bxs6 is approved, waiting to be issued
Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.326011 4867 csr.go:257] certificate signing request csr-8bxs6 is issued
Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.343536 4867 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:47128->192.168.126.11:17697: read: connection reset by peer" start-of-body=
Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.343570 4867 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Liveness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:47144->192.168.126.11:17697: read: connection reset by peer" start-of-body=
Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.343632 4867 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:47128->192.168.126.11:17697: read: connection reset by peer"
Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.343646 4867 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:47144->192.168.126.11:17697: read: connection reset by peer"
Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.377485 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.380994 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") "
Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.381051 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") "
Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.381091 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") "
Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.381149 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") "
Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.381178 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") "
Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.381200 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") "
Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.381235 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") "
Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.381259 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") "
Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.381281 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.381298 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") "
Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.381314 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") "
Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.381336 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") "
Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.381352 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") "
Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.381411 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") "
Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.381428 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") "
Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.381445 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") "
Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.381479 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") "
Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.381470 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" (OuterVolumeSpecName: "kube-api-access-v47cf") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "kube-api-access-v47cf". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.381504 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") "
Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.381526 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.381503 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" (OuterVolumeSpecName: "kube-api-access-mnrrd") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "kube-api-access-mnrrd". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.381548 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") "
Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.381634 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") "
Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.381667 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" (OuterVolumeSpecName: "kube-api-access-tk88c") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "kube-api-access-tk88c". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.381667 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") "
Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.381729 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") "
Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.381753 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") "
Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.381774 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") "
Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.381792 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") "
Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.381823 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") "
Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.381842 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") "
Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.381842 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" (OuterVolumeSpecName: "kube-api-access-2w9zh") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "kube-api-access-2w9zh". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.381859 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") "
Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.381877 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") "
Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.381883 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" (OuterVolumeSpecName: "kube-api-access-d4lsv") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "kube-api-access-d4lsv". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.381897 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.381917 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") "
Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.381933 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") "
Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.381952 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") "
Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.382036 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") "
Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.382055 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") "
Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.382060 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" (OuterVolumeSpecName: "kube-api-access-fqsjt") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "kube-api-access-fqsjt". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.382070 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") "
Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.382093 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") "
Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.382112 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") "
Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.382151 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") pod \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\" (UID: \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\") "
Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.382170 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") "
Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.382170 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" (OuterVolumeSpecName: "ovn-control-plane-metrics-cert") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovn-control-plane-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.382188 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") "
Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.382206 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") "
Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.382249 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") "
Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.382265 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") "
Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.382282 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") "
Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.382301 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") "
Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.382315 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") "
Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.382333 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") "
Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.382350 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") "
Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.382366 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") "
Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.382390 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") "
Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.382405 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") "
Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.382419 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") "
Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.382434 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") "
Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.382450 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") "
Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.382463 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" (OuterVolumeSpecName: "kube-api-access-w7l8j") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "kube-api-access-w7l8j". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.382494 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") "
Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.382513 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") "
Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.382534 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") "
Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.382552 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") "
Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.382593 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") "
Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.382620 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") "
Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.382640 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.382656 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.382673 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") "
Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.382689 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") "
Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.382704 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") "
Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.382721 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") "
Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.382736 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") "
Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.382752 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") "
Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.382768 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.382783 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") "
Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.382798 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") "
Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.382814 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.382830 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.382828 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" (OuterVolumeSpecName: "kube-api-access-gf66m") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "kube-api-access-gf66m". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.382847 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.382864 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") "
Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.382856 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" (OuterVolumeSpecName: "kube-api-access-8tdtz") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "kube-api-access-8tdtz". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.382925 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") pod \"44663579-783b-4372-86d6-acf235a62d72\" (UID: \"44663579-783b-4372-86d6-acf235a62d72\") "
Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.382947 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") "
Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.382964 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.382982 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") "
Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.382997 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") "
Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.383013 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") "
Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.383028 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") "
Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.383068 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") "
Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.383083 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") "
Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.383098 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") "
Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.383116 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 26 11:17:50 crc kubenswrapper[4867]:
I0126 11:17:50.383133 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.383153 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.383170 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.383188 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.383206 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.383235 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.383251 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.383267 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.383288 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.383331 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.383349 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Jan 26 11:17:50 crc 
kubenswrapper[4867]: I0126 11:17:50.383365 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.383383 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.383400 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.383420 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.383437 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.383452 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: 
\"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.383468 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.383489 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.383507 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.383525 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.383541 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Jan 
26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.383559 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.383578 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.383598 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.383617 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.383633 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.383655 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.383670 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.383689 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.383704 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.383720 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.383738 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.383758 
4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.383776 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.383798 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.383815 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.383832 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.383850 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: 
\"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.383869 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.383888 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.383910 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.383928 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.383944 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Jan 26 11:17:50 crc 
kubenswrapper[4867]: I0126 11:17:50.383962 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.383982 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.383999 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.384016 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.384033 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.384055 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: 
\"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.384073 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.384089 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.384107 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.384125 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.384143 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 
11:17:50.384162 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.384180 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.384197 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.384239 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.384259 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") pod \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\" (UID: \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\") " Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.384278 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: 
\"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.384297 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.384315 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.384335 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.384353 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.384372 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Jan 26 
11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.384390 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.384407 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.384426 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.384446 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.384465 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.384484 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: 
\"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.384503 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.384523 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.384543 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.384561 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.384578 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Jan 26 11:17:50 crc 
kubenswrapper[4867]: I0126 11:17:50.384595 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.384612 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.384629 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.384647 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.384667 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.384752 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.384778 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.384795 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.384813 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.384836 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.384857 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 
11:17:50.384874 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.384891 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.384909 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.384929 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") pod \"49ef4625-1d3a-4a9f-b595-c2433d32326d\" (UID: \"49ef4625-1d3a-4a9f-b595-c2433d32326d\") " Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.384946 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.384964 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.384991 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.385008 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.385024 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.385040 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.385058 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 
11:17:50.385081 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.383093 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" (OuterVolumeSpecName: "kube-api-access-bf2bz") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "kube-api-access-bf2bz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.383198 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" (OuterVolumeSpecName: "config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.392800 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-login". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.383656 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.383781 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.383871 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.384205 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "auth-proxy-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.384264 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.384440 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" (OuterVolumeSpecName: "tmpfs") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "tmpfs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.384479 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.384501 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" (OuterVolumeSpecName: "kube-api-access-4d4hj") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "kube-api-access-4d4hj". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.384667 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" (OuterVolumeSpecName: "kube-api-access-jhbk2") pod "bd23aa5c-e532-4e53-bccf-e79f130c5ae8" (UID: "bd23aa5c-e532-4e53-bccf-e79f130c5ae8"). InnerVolumeSpecName "kube-api-access-jhbk2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.384722 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" (OuterVolumeSpecName: "config-volume") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.384792 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" (OuterVolumeSpecName: "kube-api-access-cfbct") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "kube-api-access-cfbct". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.386138 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" (OuterVolumeSpecName: "image-registry-operator-tls") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "image-registry-operator-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.386179 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.386269 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.386418 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.386558 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" (OuterVolumeSpecName: "kube-api-access-htfz6") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "kube-api-access-htfz6". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.386641 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" (OuterVolumeSpecName: "kube-api-access-w9rds") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "kube-api-access-w9rds". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.386689 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.386799 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.386837 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "profile-collector-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.387032 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" (OuterVolumeSpecName: "certs") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.387141 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" (OuterVolumeSpecName: "console-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.387376 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" (OuterVolumeSpecName: "machine-approver-tls") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "machine-approver-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.387695 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-serving-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.387726 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" (OuterVolumeSpecName: "config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.387741 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.391046 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" (OuterVolumeSpecName: "serviceca") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "serviceca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.392581 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" (OuterVolumeSpecName: "machine-api-operator-tls") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "machine-api-operator-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.392689 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.393454 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" (OuterVolumeSpecName: "kube-api-access-kfwg7") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "kube-api-access-kfwg7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.393542 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.393824 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-client". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.394081 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" (OuterVolumeSpecName: "available-featuregates") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "available-featuregates". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.394101 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.394528 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" (OuterVolumeSpecName: "images") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.394726 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.394708 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" (OuterVolumeSpecName: "kube-api-access-6ccd8") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "kube-api-access-6ccd8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.394791 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.396941 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" (OuterVolumeSpecName: "kube-api-access-d6qdx") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "kube-api-access-d6qdx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.397059 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" (OuterVolumeSpecName: "package-server-manager-serving-cert") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "package-server-manager-serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.397596 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" (OuterVolumeSpecName: "utilities") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.397596 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.397782 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" (OuterVolumeSpecName: "config") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.399867 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-operator-metrics". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.392606 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.400480 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.400816 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.400883 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.400932 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 26 
11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.400976 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.401017 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.401059 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.401108 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.401244 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.402589 4867 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.400876 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.393732 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.394526 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.394130 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). 
InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.401837 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.402520 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.402608 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" (OuterVolumeSpecName: "kube-api-access-vt5rc") pod "44663579-783b-4372-86d6-acf235a62d72" (UID: "44663579-783b-4372-86d6-acf235a62d72"). InnerVolumeSpecName "kube-api-access-vt5rc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.402786 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" (OuterVolumeSpecName: "utilities") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.403183 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.403651 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" (OuterVolumeSpecName: "audit") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "audit". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.403596 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" (OuterVolumeSpecName: "utilities") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.404504 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-binary-copy". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.404574 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" (OuterVolumeSpecName: "images") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.405196 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" (OuterVolumeSpecName: "kube-api-access-fcqwp") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "kube-api-access-fcqwp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.405486 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" (OuterVolumeSpecName: "kube-api-access-6g6sz") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "kube-api-access-6g6sz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.405715 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "bound-sa-token". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.405760 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" (OuterVolumeSpecName: "config") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.405788 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.405824 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" (OuterVolumeSpecName: "service-ca") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.405948 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.406216 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:48Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.406368 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" (OuterVolumeSpecName: "kube-api-access-xcgwh") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "kube-api-access-xcgwh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.406620 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.406747 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.406978 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.407684 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" (OuterVolumeSpecName: "kube-api-access-sb6h7") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "kube-api-access-sb6h7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.408433 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "bound-sa-token". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.408483 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" (OuterVolumeSpecName: "kube-api-access-zkvpv") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "kube-api-access-zkvpv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.408472 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" (OuterVolumeSpecName: "kube-api-access-nzwt7") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "kube-api-access-nzwt7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.408527 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.408547 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" (OuterVolumeSpecName: "multus-daemon-config") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "multus-daemon-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.409065 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.409543 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" (OuterVolumeSpecName: "cert") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.409324 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" (OuterVolumeSpecName: "kube-api-access-mg5zb") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "kube-api-access-mg5zb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.409641 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" (OuterVolumeSpecName: "kube-api-access-2d4wz") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "kube-api-access-2d4wz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.416584 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-crc"] Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.416881 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.417037 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" (OuterVolumeSpecName: "kube-api-access-279lb") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "kube-api-access-279lb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.417062 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.417252 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "audit-policies". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.417462 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.417714 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" (OuterVolumeSpecName: "kube-api-access-x7zkh") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "kube-api-access-x7zkh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.417995 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" (OuterVolumeSpecName: "utilities") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.418144 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" (OuterVolumeSpecName: "config") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.418358 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.418637 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.418710 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" (OuterVolumeSpecName: "samples-operator-tls") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "samples-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.418812 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-client". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.419018 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.419267 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" (OuterVolumeSpecName: "config") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.419558 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" (OuterVolumeSpecName: "signing-key") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.419591 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" (OuterVolumeSpecName: "config") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.420075 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.420180 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" (OuterVolumeSpecName: "kube-api-access-ngvvp") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "kube-api-access-ngvvp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.420309 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" (OuterVolumeSpecName: "image-import-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "image-import-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.420557 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "oauth-serving-cert". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.421378 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" (OuterVolumeSpecName: "kube-api-access-7c4vf") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "kube-api-access-7c4vf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 11:17:50 crc kubenswrapper[4867]: E0126 11:17:50.421617 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 11:17:50.919379442 +0000 UTC m=+20.617954352 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.421699 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" (OuterVolumeSpecName: "kube-api-access-pcxfs") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "kube-api-access-pcxfs". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.421867 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.422366 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" (OuterVolumeSpecName: "client-ca") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.422395 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" (OuterVolumeSpecName: "default-certificate") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "default-certificate". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.421568 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-script-lib". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.421143 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.422483 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" (OuterVolumeSpecName: "kube-api-access-rnphk") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "kube-api-access-rnphk". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.422174 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" (OuterVolumeSpecName: "kube-api-access-xcphl") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "kube-api-access-xcphl". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.422661 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.422800 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" (OuterVolumeSpecName: "kube-api-access-lz9wn") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "kube-api-access-lz9wn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.422876 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.423367 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.423458 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-serving-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.423553 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" (OuterVolumeSpecName: "client-ca") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.423849 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.423880 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" (OuterVolumeSpecName: "kube-api-access-x2m85") pod "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" (UID: "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d"). InnerVolumeSpecName "kube-api-access-x2m85". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.424295 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.424415 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.424532 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" (OuterVolumeSpecName: "control-plane-machine-set-operator-tls") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "control-plane-machine-set-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.424627 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" (OuterVolumeSpecName: "config") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.424694 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" (OuterVolumeSpecName: "kube-api-access-jkwtn") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "kube-api-access-jkwtn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.426755 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.426775 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" (OuterVolumeSpecName: "config") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.426913 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" (OuterVolumeSpecName: "kube-api-access-pjr6v") pod "49ef4625-1d3a-4a9f-b595-c2433d32326d" (UID: "49ef4625-1d3a-4a9f-b595-c2433d32326d"). InnerVolumeSpecName "kube-api-access-pjr6v". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.427126 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" (OuterVolumeSpecName: "kube-api-access-9xfj7") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "kube-api-access-9xfj7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.427119 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.427318 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" (OuterVolumeSpecName: "etcd-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.427438 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.428503 4867 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. 
Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.430494 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.430578 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.430611 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.430648 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.430683 4867 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.430710 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.430737 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.430759 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.430784 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 
11:17:50.430819 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.430885 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.430913 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.431063 4867 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.431080 4867 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.431093 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") on node \"crc\" DevicePath \"\"" Jan 26 11:17:50 crc 
kubenswrapper[4867]: I0126 11:17:50.431106 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") on node \"crc\" DevicePath \"\"" Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.431119 4867 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") on node \"crc\" DevicePath \"\"" Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.431134 4867 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.431148 4867 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") on node \"crc\" DevicePath \"\"" Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.431162 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.431177 4867 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") on node \"crc\" DevicePath \"\"" Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.431191 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") on node \"crc\" DevicePath \"\"" Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.431205 4867 reconciler_common.go:293] "Volume 
detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") on node \"crc\" DevicePath \"\"" Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.431243 4867 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") on node \"crc\" DevicePath \"\"" Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.431255 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") on node \"crc\" DevicePath \"\"" Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.431269 4867 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.431279 4867 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.431289 4867 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.431299 4867 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.431310 4867 
reconciler_common.go:293] "Volume detached for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") on node \"crc\" DevicePath \"\"" Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.431323 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") on node \"crc\" DevicePath \"\"" Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.431334 4867 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.431342 4867 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") on node \"crc\" DevicePath \"\"" Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.431351 4867 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.431361 4867 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") on node \"crc\" DevicePath \"\"" Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.431371 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") on node \"crc\" DevicePath \"\"" Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.431380 4867 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: 
\"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.431391 4867 reconciler_common.go:293] "Volume detached for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") on node \"crc\" DevicePath \"\"" Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.431415 4867 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") on node \"crc\" DevicePath \"\"" Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.431423 4867 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.431433 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") on node \"crc\" DevicePath \"\"" Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.431442 4867 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") on node \"crc\" DevicePath \"\"" Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.431451 4867 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.431461 4867 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Jan 
26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.431471 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") on node \"crc\" DevicePath \"\"" Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.431482 4867 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") on node \"crc\" DevicePath \"\"" Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.431491 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") on node \"crc\" DevicePath \"\"" Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.431501 4867 reconciler_common.go:293] "Volume detached for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") on node \"crc\" DevicePath \"\"" Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.431707 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:48Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.431839 4867 reconciler_common.go:293] "Volume detached for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") on node \"crc\" DevicePath \"\"" Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.432349 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" (OuterVolumeSpecName: "config") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.432637 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" (OuterVolumeSpecName: "kube-api-access-249nr") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "kube-api-access-249nr". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 11:17:50 crc kubenswrapper[4867]: E0126 11:17:50.432807 4867 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.432841 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" (OuterVolumeSpecName: "config") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 11:17:50 crc kubenswrapper[4867]: E0126 11:17:50.432888 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-26 11:17:50.932862996 +0000 UTC m=+20.631437906 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 26 11:17:50 crc kubenswrapper[4867]: E0126 11:17:50.432915 4867 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.432943 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") on node \"crc\" DevicePath \"\"" Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.434283 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.434519 4867 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Jan 26 11:17:50 crc kubenswrapper[4867]: E0126 11:17:50.434674 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-26 11:17:50.934661455 +0000 UTC m=+20.633236445 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.434740 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") on node \"crc\" DevicePath \"\"" Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.434760 4867 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.434772 4867 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.434784 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") on node \"crc\" DevicePath \"\"" Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.434800 4867 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.434813 4867 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 
11:17:50.434825 4867 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.434838 4867 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.434851 4867 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") on node \"crc\" DevicePath \"\"" Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.434863 4867 reconciler_common.go:293] "Volume detached for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.434881 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") on node \"crc\" DevicePath \"\"" Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.434894 4867 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") on node \"crc\" DevicePath \"\"" Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.434906 4867 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") on node \"crc\" DevicePath \"\"" Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.434918 4867 reconciler_common.go:293] "Volume detached for volume 
\"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") on node \"crc\" DevicePath \"\"" Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.434929 4867 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") on node \"crc\" DevicePath \"\"" Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.434941 4867 reconciler_common.go:293] "Volume detached for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") on node \"crc\" DevicePath \"\"" Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.434953 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") on node \"crc\" DevicePath \"\"" Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.434964 4867 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.434977 4867 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") on node \"crc\" DevicePath \"\"" Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.434988 4867 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.435001 4867 reconciler_common.go:293] "Volume detached for volume \"signing-key\" (UniqueName: 
\"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") on node \"crc\" DevicePath \"\"" Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.435012 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") on node \"crc\" DevicePath \"\"" Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.435025 4867 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") on node \"crc\" DevicePath \"\"" Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.435037 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") on node \"crc\" DevicePath \"\"" Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.435054 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") on node \"crc\" DevicePath \"\"" Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.435075 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") on node \"crc\" DevicePath \"\"" Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.435086 4867 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.435099 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x7zkh\" (UniqueName: 
\"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") on node \"crc\" DevicePath \"\"" Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.435111 4867 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.435123 4867 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") on node \"crc\" DevicePath \"\"" Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.435136 4867 reconciler_common.go:293] "Volume detached for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.435147 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.435158 4867 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.435172 4867 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.435181 4867 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") on node \"crc\" DevicePath \"\"" Jan 26 11:17:50 
crc kubenswrapper[4867]: I0126 11:17:50.435192 4867 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.435203 4867 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.435216 4867 reconciler_common.go:293] "Volume detached for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") on node \"crc\" DevicePath \"\"" Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.435242 4867 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.435254 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") on node \"crc\" DevicePath \"\"" Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.435265 4867 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") on node \"crc\" DevicePath \"\"" Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.435275 4867 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") on node \"crc\" DevicePath \"\"" Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.435286 4867 reconciler_common.go:293] "Volume detached for volume 
\"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") on node \"crc\" DevicePath \"\"" Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.435297 4867 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") on node \"crc\" DevicePath \"\"" Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.435308 4867 reconciler_common.go:293] "Volume detached for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.435321 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") on node \"crc\" DevicePath \"\"" Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.435333 4867 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.435344 4867 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.435363 4867 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.435378 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: 
\"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.435391 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") on node \"crc\" DevicePath \"\"" Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.435403 4867 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") on node \"crc\" DevicePath \"\"" Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.435419 4867 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.435437 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") on node \"crc\" DevicePath \"\"" Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.435449 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") on node \"crc\" DevicePath \"\"" Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.435462 4867 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") on node \"crc\" DevicePath \"\"" Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.435473 4867 reconciler_common.go:293] "Volume detached for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") on node \"crc\" 
DevicePath \"\"" Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.435490 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") on node \"crc\" DevicePath \"\"" Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.435511 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.435524 4867 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.435543 4867 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.435558 4867 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.435577 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") on node \"crc\" DevicePath \"\"" Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.435590 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") on node \"crc\" DevicePath \"\"" Jan 26 11:17:50 crc 
kubenswrapper[4867]: I0126 11:17:50.435602 4867 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") on node \"crc\" DevicePath \"\"" Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.435614 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") on node \"crc\" DevicePath \"\"" Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.435625 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") on node \"crc\" DevicePath \"\"" Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.435637 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") on node \"crc\" DevicePath \"\"" Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.435649 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") on node \"crc\" DevicePath \"\"" Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.435659 4867 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.435673 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") on node \"crc\" DevicePath \"\"" Jan 26 11:17:50 crc kubenswrapper[4867]: 
I0126 11:17:50.435684 4867 reconciler_common.go:293] "Volume detached for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") on node \"crc\" DevicePath \"\"" Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.435694 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") on node \"crc\" DevicePath \"\"" Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.435706 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") on node \"crc\" DevicePath \"\"" Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.435718 4867 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") on node \"crc\" DevicePath \"\"" Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.435728 4867 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.435737 4867 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.435749 4867 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") on node \"crc\" DevicePath \"\"" Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.435760 4867 reconciler_common.go:293] "Volume detached for volume 
\"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") on node \"crc\" DevicePath \"\"" Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.435770 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") on node \"crc\" DevicePath \"\"" Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.435780 4867 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.435792 4867 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.435802 4867 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.435813 4867 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") on node \"crc\" DevicePath \"\"" Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.435823 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") on node \"crc\" DevicePath \"\"" Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.435834 4867 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: 
\"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.435844 4867 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") on node \"crc\" DevicePath \"\"" Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.435854 4867 reconciler_common.go:293] "Volume detached for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.435864 4867 reconciler_common.go:293] "Volume detached for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") on node \"crc\" DevicePath \"\"" Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.435875 4867 reconciler_common.go:293] "Volume detached for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") on node \"crc\" DevicePath \"\"" Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.435886 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") on node \"crc\" DevicePath \"\"" Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.435896 4867 reconciler_common.go:293] "Volume detached for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") on node \"crc\" DevicePath \"\"" Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.435905 4867 reconciler_common.go:293] "Volume detached for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") on node 
\"crc\" DevicePath \"\"" Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.438583 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" (OuterVolumeSpecName: "kube-api-access-zgdk5") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "kube-api-access-zgdk5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.438970 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" (OuterVolumeSpecName: "kube-api-access-dbsvg") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "kube-api-access-dbsvg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.439099 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" (OuterVolumeSpecName: "mcc-auth-proxy-config") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "mcc-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.439651 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" (OuterVolumeSpecName: "etcd-service-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-service-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.440404 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.440720 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" (OuterVolumeSpecName: "kube-api-access-qg5z5") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "kube-api-access-qg5z5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.440971 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.441200 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.441424 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" (OuterVolumeSpecName: "node-bootstrap-token") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "node-bootstrap-token". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.435916 4867 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.441504 4867 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.441518 4867 reconciler_common.go:293] "Volume detached for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") on node \"crc\" DevicePath \"\"" Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.441530 4867 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") on node \"crc\" DevicePath \"\"" Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.441544 4867 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.441556 4867 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: 
\"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.441579 4867 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.441577 4867 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.441625 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" (OuterVolumeSpecName: "kube-api-access-lzf88") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "kube-api-access-lzf88". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.441633 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" (OuterVolumeSpecName: "kube-api-access-w4xd4") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "kube-api-access-w4xd4". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.441589 4867 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") on node \"crc\" DevicePath \"\"" Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.441872 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.441891 4867 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") on node \"crc\" DevicePath \"\"" Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.442117 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.443819 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" (OuterVolumeSpecName: "kube-api-access-x4zgh") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "kube-api-access-x4zgh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.446248 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.447048 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 26 11:17:50 crc kubenswrapper[4867]: W0126 11:17:50.452110 4867 reflector.go:484] k8s.io/client-go/informers/factory.go:160: watch of *v1.Service ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.452570 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" (OuterVolumeSpecName: "webhook-certs") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "webhook-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.452677 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). 
InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 11:17:50 crc kubenswrapper[4867]: E0126 11:17:50.452809 4867 event.go:368] "Unable to write event (may retry after sleeping)" err="Patch \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events/crc.188e43ccd0e444bf\": read tcp 38.102.83.115:33470->38.102.83.115:6443: use of closed network connection" event="&Event{ObjectMeta:{crc.188e43ccd0e444bf default 26188 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-26 11:17:30 +0000 UTC,LastTimestamp:2026-01-26 11:17:30.868716991 +0000 UTC m=+0.567291901,Count:7,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 26 11:17:50 crc kubenswrapper[4867]: E0126 11:17:50.452981 4867 controller.go:145] "Failed to ensure lease exists, will retry" err="Post \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases?timeout=10s\": read tcp 38.102.83.115:33470->38.102.83.115:6443: use of closed network connection" interval="6.4s" Jan 26 11:17:50 crc kubenswrapper[4867]: E0126 11:17:50.452747 4867 projected.go:194] Error preparing data for projected volume kube-api-access-rdwmf for pod openshift-network-operator/network-operator-58b4c7f79c-55gtf: failed to fetch token: Post "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-operator/serviceaccounts/cluster-network-operator/token": read tcp 38.102.83.115:33470->38.102.83.115:6443: use of closed network connection Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.454116 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" 
(UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.452806 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.453363 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.453561 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" (OuterVolumeSpecName: "mcd-auth-proxy-config") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "mcd-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.453611 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" (OuterVolumeSpecName: "kube-api-access-qs4fp") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "kube-api-access-qs4fp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.453774 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" (OuterVolumeSpecName: "service-ca") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.453853 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "ca-trust-extracted". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.458492 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.454957 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" (OuterVolumeSpecName: "kube-api-access-wxkg8") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "kube-api-access-wxkg8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 11:17:50 crc kubenswrapper[4867]: E0126 11:17:50.457564 4867 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 26 11:17:50 crc kubenswrapper[4867]: E0126 11:17:50.458581 4867 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 26 11:17:50 crc kubenswrapper[4867]: E0126 11:17:50.458594 4867 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [failed to fetch token: Post "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-diagnostics/serviceaccounts/default/token": read tcp 38.102.83.115:33470->38.102.83.115:6443: use of closed network connection, object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 11:17:50 crc kubenswrapper[4867]: E0126 11:17:50.457792 4867 projected.go:194] Error preparing data for projected volume kube-api-access-rczfb for pod openshift-network-operator/iptables-alerter-4ln5h: failed to fetch token: Post "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-operator/serviceaccounts/iptables-alerter/token": read tcp 38.102.83.115:33470->38.102.83.115:6443: use of closed network connection Jan 26 11:17:50 crc kubenswrapper[4867]: E0126 11:17:50.458411 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf podName:37a5e44f-9a88-4405-be8a-b645485e7312 nodeName:}" failed. No retries permitted until 2026-01-26 11:17:50.958380153 +0000 UTC m=+20.656955063 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-rdwmf" (UniqueName: "kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf") pod "network-operator-58b4c7f79c-55gtf" (UID: "37a5e44f-9a88-4405-be8a-b645485e7312") : failed to fetch token: Post "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-operator/serviceaccounts/cluster-network-operator/token": read tcp 38.102.83.115:33470->38.102.83.115:6443: use of closed network connection Jan 26 11:17:50 crc kubenswrapper[4867]: E0126 11:17:50.458707 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-26 11:17:50.958689491 +0000 UTC m=+20.657264401 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [failed to fetch token: Post "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-diagnostics/serviceaccounts/default/token": read tcp 38.102.83.115:33470->38.102.83.115:6443: use of closed network connection, object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 11:17:50 crc kubenswrapper[4867]: E0126 11:17:50.458719 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb podName:d75a4c96-2883-4a0b-bab2-0fab2b6c0b49 nodeName:}" failed. No retries permitted until 2026-01-26 11:17:50.958714202 +0000 UTC m=+20.657289112 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-rczfb" (UniqueName: "kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb") pod "iptables-alerter-4ln5h" (UID: "d75a4c96-2883-4a0b-bab2-0fab2b6c0b49") : failed to fetch token: Post "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-operator/serviceaccounts/iptables-alerter/token": read tcp 38.102.83.115:33470->38.102.83.115:6443: use of closed network connection Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.459827 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.460297 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" (OuterVolumeSpecName: "apiservice-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "apiservice-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.460624 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" (OuterVolumeSpecName: "cni-sysctl-allowlist") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-sysctl-allowlist". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.461209 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" (OuterVolumeSpecName: "config") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.463898 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" (OuterVolumeSpecName: "config") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.467011 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" (OuterVolumeSpecName: "kube-api-access-s4n52") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "kube-api-access-s4n52". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.467190 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" (OuterVolumeSpecName: "config") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.467494 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.467648 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.467980 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.470607 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-tls". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.470810 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" (OuterVolumeSpecName: "stats-auth") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "stats-auth". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 11:17:50 crc kubenswrapper[4867]: E0126 11:17:50.471379 4867 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 26 11:17:50 crc kubenswrapper[4867]: E0126 11:17:50.471493 4867 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 26 11:17:50 crc kubenswrapper[4867]: E0126 11:17:50.471579 4867 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 11:17:50 crc kubenswrapper[4867]: E0126 11:17:50.471717 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-26 11:17:50.971688392 +0000 UTC m=+20.670263382 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.471787 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.471372 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.471904 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.472302 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.472400 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" (OuterVolumeSpecName: "kube-api-access-pj782") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "kube-api-access-pj782". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.472445 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.473792 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "cni-binary-copy". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.473935 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.473963 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.474112 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" (OuterVolumeSpecName: "signing-cabundle") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-cabundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.474166 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" (OuterVolumeSpecName: "config") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.474276 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.474280 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.483583 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.483776 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.500835 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.517088 4867 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-02 17:33:27.720733758 +0000 UTC Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.519819 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.520585 4867 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" start-of-body= Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.520664 4867 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.523826 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.543134 4867 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.543194 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.543290 4867 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.543307 4867 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.543318 4867 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") on node \"crc\" DevicePath \"\"" Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.543331 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") on node \"crc\" DevicePath \"\"" Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.543344 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qg5z5\" (UniqueName: 
\"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") on node \"crc\" DevicePath \"\"" Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.543356 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") on node \"crc\" DevicePath \"\"" Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.543368 4867 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") on node \"crc\" DevicePath \"\"" Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.543379 4867 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.543389 4867 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.543400 4867 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") on node \"crc\" DevicePath \"\"" Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.543411 4867 reconciler_common.go:293] "Volume detached for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") on node \"crc\" DevicePath \"\"" Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.543423 4867 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 
26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.543437 4867 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.543447 4867 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") on node \"crc\" DevicePath \"\"" Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.543461 4867 reconciler_common.go:293] "Volume detached for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.543474 4867 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.543486 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") on node \"crc\" DevicePath \"\"" Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.543497 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") on node \"crc\" DevicePath \"\"" Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.543513 4867 reconciler_common.go:293] "Volume detached for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") on node \"crc\" DevicePath \"\"" Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.543524 4867 
reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") on node \"crc\" DevicePath \"\"" Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.543536 4867 reconciler_common.go:293] "Volume detached for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") on node \"crc\" DevicePath \"\"" Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.543549 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") on node \"crc\" DevicePath \"\"" Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.543561 4867 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") on node \"crc\" DevicePath \"\"" Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.543573 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") on node \"crc\" DevicePath \"\"" Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.543585 4867 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.543596 4867 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.543608 4867 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.543619 4867 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") on node \"crc\" DevicePath \"\"" Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.543631 4867 reconciler_common.go:293] "Volume detached for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") on node \"crc\" DevicePath \"\"" Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.543641 4867 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") on node \"crc\" DevicePath \"\"" Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.543652 4867 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.543663 4867 reconciler_common.go:293] "Volume detached for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") on node \"crc\" DevicePath \"\"" Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.543677 4867 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.543688 4867 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") on node \"crc\" DevicePath \"\"" Jan 26 11:17:50 crc 
kubenswrapper[4867]: I0126 11:17:50.543698 4867 reconciler_common.go:293] "Volume detached for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.543710 4867 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") on node \"crc\" DevicePath \"\"" Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.543726 4867 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.543740 4867 reconciler_common.go:293] "Volume detached for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") on node \"crc\" DevicePath \"\"" Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.543755 4867 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.543767 4867 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.543778 4867 reconciler_common.go:293] "Volume detached for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") on node \"crc\" DevicePath \"\"" Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.543790 4867 reconciler_common.go:293] "Volume 
detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") on node \"crc\" DevicePath \"\"" Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.543802 4867 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") on node \"crc\" DevicePath \"\"" Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.543814 4867 reconciler_common.go:293] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") on node \"crc\" DevicePath \"\"" Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.543826 4867 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.543838 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") on node \"crc\" DevicePath \"\"" Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.543849 4867 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") on node \"crc\" DevicePath \"\"" Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.543861 4867 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.543874 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") on node 
\"crc\" DevicePath \"\"" Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.543886 4867 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") on node \"crc\" DevicePath \"\"" Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.543898 4867 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") on node \"crc\" DevicePath \"\"" Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.543911 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") on node \"crc\" DevicePath \"\"" Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.543922 4867 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") on node \"crc\" DevicePath \"\"" Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.544006 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.544060 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.563116 4867 util.go:30] "No sandbox for pod can be 
found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.563296 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 11:17:50 crc kubenswrapper[4867]: E0126 11:17:50.563343 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.563404 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 11:17:50 crc kubenswrapper[4867]: E0126 11:17:50.563541 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 11:17:50 crc kubenswrapper[4867]: E0126 11:17:50.563591 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.567732 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01ab3dd5-8196-46d0-ad33-122e2ca51def" path="/var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes" Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.568628 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" path="/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes" Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.570768 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09efc573-dbb6-4249-bd59-9b87aba8dd28" path="/var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes" Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.571805 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b574797-001e-440a-8f4e-c0be86edad0f" path="/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes" Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.573390 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b78653f-4ff9-4508-8672-245ed9b561e3" path="/var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes" Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.574264 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1386a44e-36a2-460c-96d0-0359d2b6f0f5" path="/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes" Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.575243 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1bf7eb37-55a3-4c65-b768-a94c82151e69" path="/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes" Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.576945 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes 
dir" podUID="1d611f23-29be-4491-8495-bee1670e935f" path="/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes" Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.577672 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20b0d48f-5fd6-431c-a545-e3c800c7b866" path="/var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/volumes" Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.579123 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" path="/var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes" Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.579712 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="22c825df-677d-4ca6-82db-3454ed06e783" path="/var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes" Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.580956 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="25e176fe-21b4-4974-b1ed-c8b94f112a7f" path="/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes" Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.581521 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" path="/var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes" Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.582095 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31d8b7a1-420e-4252-a5b7-eebe8a111292" path="/var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes" Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.583268 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3ab1a177-2de0-46d9-b765-d0d0649bb42e" path="/var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/volumes" Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.583983 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes 
dir" podUID="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" path="/var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes" Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.585251 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="43509403-f426-496e-be36-56cef71462f5" path="/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes" Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.585735 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="44663579-783b-4372-86d6-acf235a62d72" path="/var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/volumes" Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.586366 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="496e6271-fb68-4057-954e-a0d97a4afa3f" path="/var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes" Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.587408 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" path="/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes" Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.587869 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49ef4625-1d3a-4a9f-b595-c2433d32326d" path="/var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/volumes" Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.588969 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4bb40260-dbaa-4fb0-84df-5e680505d512" path="/var/lib/kubelet/pods/4bb40260-dbaa-4fb0-84df-5e680505d512/volumes" Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.589456 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5225d0e4-402f-4861-b410-819f433b1803" path="/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes" Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.590527 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes 
dir" podUID="5441d097-087c-4d9a-baa8-b210afa90fc9" path="/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes" Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.590995 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="57a731c4-ef35-47a8-b875-bfb08a7f8011" path="/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes" Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.591599 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5b88f790-22fa-440e-b583-365168c0b23d" path="/var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/volumes" Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.592694 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5fe579f8-e8a6-4643-bce5-a661393c4dde" path="/var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/volumes" Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.593171 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6402fda4-df10-493c-b4e5-d0569419652d" path="/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes" Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.594131 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6509e943-70c6-444c-bc41-48a544e36fbd" path="/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes" Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.594675 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6731426b-95fe-49ff-bb5f-40441049fde2" path="/var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/volumes" Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.595750 4867 kubelet_volumes.go:152] "Cleaned up orphaned volume subpath from pod" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volume-subpaths/run-systemd/ovnkube-controller/6" Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.595876 4867 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volumes" Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.597648 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7539238d-5fe0-46ed-884e-1c3b566537ec" path="/var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes" Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.598807 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7583ce53-e0fe-4a16-9e4d-50516596a136" path="/var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes" Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.599290 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7bb08738-c794-4ee8-9972-3a62ca171029" path="/var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes" Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.601361 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="87cf06ed-a83f-41a7-828d-70653580a8cb" path="/var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes" Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.602199 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" path="/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes" Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.603418 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="925f1c65-6136-48ba-85aa-3a3b50560753" path="/var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes" Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.604085 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" path="/var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/volumes" Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.605187 4867 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d4552c7-cd75-42dd-8880-30dd377c49a4" path="/var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes" Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.605745 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" path="/var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/volumes" Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.607370 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a31745f5-9847-4afe-82a5-3161cc66ca93" path="/var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes" Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.608241 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" path="/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes" Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.609648 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6312bbd-5731-4ea0-a20f-81d5a57df44a" path="/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/volumes" Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.610371 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" path="/var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes" Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.611639 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" path="/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes" Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.612291 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" path="/var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/volumes" Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.614312 4867 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bf126b07-da06-4140-9a57-dfd54fc6b486" path="/var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes" Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.614926 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c03ee662-fb2f-4fc4-a2c1-af487c19d254" path="/var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes" Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.616374 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" path="/var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/volumes" Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.616947 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e7e6199b-1264-4501-8953-767f51328d08" path="/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes" Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.618367 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="efdd0498-1daa-4136-9a4a-3b948c2293fc" path="/var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/volumes" Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.618986 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" path="/var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/volumes" Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.619525 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fda69060-fa79-4696-b1a6-7980f124bf7c" path="/var/lib/kubelet/pods/fda69060-fa79-4696-b1a6-7980f124bf7c/volumes" Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.620479 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.640973 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 26 11:17:50 crc kubenswrapper[4867]: W0126 11:17:50.664581 4867 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podef543e1b_8068_4ea3_b32a_61027b32e95d.slice/crio-92fbc70ca68e734fd8ea3b199808cea54884aba5bd9fc36dc5932464ff3664dc WatchSource:0}: Error finding container 92fbc70ca68e734fd8ea3b199808cea54884aba5bd9fc36dc5932464ff3664dc: Status 404 returned error can't find the container with id 92fbc70ca68e734fd8ea3b199808cea54884aba5bd9fc36dc5932464ff3664dc Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.702192 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"92fbc70ca68e734fd8ea3b199808cea54884aba5bd9fc36dc5932464ff3664dc"} Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.704199 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.707127 4867 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="4d1617eeb2f19df288b984f761116ae902263d7ccf8406998d0e323fb7d52390" exitCode=255 Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.707264 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"4d1617eeb2f19df288b984f761116ae902263d7ccf8406998d0e323fb7d52390"} Jan 26 11:17:50 crc kubenswrapper[4867]: E0126 11:17:50.718328 4867 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"kube-apiserver-crc\" already exists" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 
26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.718644 4867 scope.go:117] "RemoveContainer" containerID="4d1617eeb2f19df288b984f761116ae902263d7ccf8406998d0e323fb7d52390" Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.885594 4867 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.947614 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.947743 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 11:17:50 crc kubenswrapper[4867]: I0126 11:17:50.947833 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 11:17:50 crc kubenswrapper[4867]: E0126 11:17:50.947923 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-01-26 11:17:51.947881722 +0000 UTC m=+21.646456642 (durationBeforeRetry 1s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 11:17:50 crc kubenswrapper[4867]: E0126 11:17:50.947962 4867 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 26 11:17:50 crc kubenswrapper[4867]: E0126 11:17:50.948104 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-26 11:17:51.948078288 +0000 UTC m=+21.646653198 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 26 11:17:50 crc kubenswrapper[4867]: E0126 11:17:50.947992 4867 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 26 11:17:50 crc kubenswrapper[4867]: E0126 11:17:50.948235 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-26 11:17:51.948196701 +0000 UTC m=+21.646771791 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 26 11:17:51 crc kubenswrapper[4867]: I0126 11:17:51.048799 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 26 11:17:51 crc kubenswrapper[4867]: I0126 11:17:51.048872 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod 
\"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 26 11:17:51 crc kubenswrapper[4867]: I0126 11:17:51.048896 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 11:17:51 crc kubenswrapper[4867]: I0126 11:17:51.048943 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 11:17:51 crc kubenswrapper[4867]: E0126 11:17:51.049110 4867 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 26 11:17:51 crc kubenswrapper[4867]: E0126 11:17:51.049161 4867 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 26 11:17:51 crc kubenswrapper[4867]: E0126 11:17:51.049182 4867 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 11:17:51 crc kubenswrapper[4867]: E0126 11:17:51.049271 4867 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-26 11:17:52.049247363 +0000 UTC m=+21.747822263 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 11:17:51 crc kubenswrapper[4867]: E0126 11:17:51.064043 4867 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 26 11:17:51 crc kubenswrapper[4867]: E0126 11:17:51.064095 4867 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 26 11:17:51 crc kubenswrapper[4867]: E0126 11:17:51.064111 4867 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 11:17:51 crc kubenswrapper[4867]: E0126 11:17:51.064195 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-26 11:17:52.064168225 +0000 UTC m=+21.762743135 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 11:17:51 crc kubenswrapper[4867]: I0126 11:17:51.069514 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 26 11:17:51 crc kubenswrapper[4867]: I0126 11:17:51.071989 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 26 11:17:51 crc kubenswrapper[4867]: I0126 11:17:51.224124 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 26 11:17:51 crc kubenswrapper[4867]: I0126 11:17:51.233667 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 26 11:17:51 crc kubenswrapper[4867]: W0126 11:17:51.237923 4867 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod37a5e44f_9a88_4405_be8a_b645485e7312.slice/crio-5e74fa668fd767b7b49998b1e7658fb11f291354398fa9b9e26290ccb1faf60b WatchSource:0}: Error finding container 5e74fa668fd767b7b49998b1e7658fb11f291354398fa9b9e26290ccb1faf60b: Status 404 returned error can't find the container with id 5e74fa668fd767b7b49998b1e7658fb11f291354398fa9b9e26290ccb1faf60b Jan 26 11:17:51 crc kubenswrapper[4867]: W0126 11:17:51.254600 4867 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd75a4c96_2883_4a0b_bab2_0fab2b6c0b49.slice/crio-2636d652b8395ca4cc6c6d4cb7eda35dc242dbbc5b7c18aced1ca209d2f5cee3 WatchSource:0}: Error finding container 2636d652b8395ca4cc6c6d4cb7eda35dc242dbbc5b7c18aced1ca209d2f5cee3: Status 404 returned error can't find the container with id 2636d652b8395ca4cc6c6d4cb7eda35dc242dbbc5b7c18aced1ca209d2f5cee3 Jan 26 11:17:51 crc kubenswrapper[4867]: I0126 11:17:51.299915 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-etcd/etcd-crc" Jan 26 11:17:51 crc kubenswrapper[4867]: I0126 11:17:51.317491 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-etcd/etcd-crc" Jan 26 11:17:51 crc kubenswrapper[4867]: I0126 11:17:51.321122 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd/etcd-crc"] Jan 26 11:17:51 crc kubenswrapper[4867]: I0126 11:17:51.327351 4867 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2027-01-26 11:12:50 +0000 UTC, rotation deadline is 2026-11-27 07:35:48.260667194 +0000 UTC Jan 26 11:17:51 crc kubenswrapper[4867]: I0126 11:17:51.327404 4867 certificate_manager.go:356] 
kubernetes.io/kube-apiserver-client-kubelet: Waiting 7316h17m56.933264766s for next certificate rotation Jan 26 11:17:51 crc kubenswrapper[4867]: I0126 11:17:51.472911 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:48Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:17:51Z is after 2025-08-24T17:21:41Z" Jan 26 11:17:51 crc kubenswrapper[4867]: I0126 11:17:51.495891 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:48Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:17:51Z is after 2025-08-24T17:21:41Z" Jan 26 11:17:51 crc kubenswrapper[4867]: I0126 11:17:51.512293 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:48Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:17:51Z is after 2025-08-24T17:21:41Z" Jan 26 11:17:51 crc kubenswrapper[4867]: I0126 11:17:51.517716 4867 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-12 23:52:59.299684423 +0000 UTC Jan 26 11:17:51 crc kubenswrapper[4867]: I0126 11:17:51.524299 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:48Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:17:51Z is after 2025-08-24T17:21:41Z" Jan 26 11:17:51 crc kubenswrapper[4867]: I0126 11:17:51.545643 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:48Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:17:51Z is after 2025-08-24T17:21:41Z" Jan 26 11:17:51 crc kubenswrapper[4867]: I0126 11:17:51.563253 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:48Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:17:51Z is after 2025-08-24T17:21:41Z" Jan 26 11:17:51 crc kubenswrapper[4867]: I0126 11:17:51.588726 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:48Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:17:51Z is after 2025-08-24T17:21:41Z" Jan 26 11:17:51 crc kubenswrapper[4867]: I0126 11:17:51.634069 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/node-resolver-wmdmh"] Jan 26 11:17:51 crc kubenswrapper[4867]: I0126 11:17:51.634512 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/node-resolver-wmdmh" Jan 26 11:17:51 crc kubenswrapper[4867]: I0126 11:17:51.637142 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Jan 26 11:17:51 crc kubenswrapper[4867]: I0126 11:17:51.638361 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Jan 26 11:17:51 crc kubenswrapper[4867]: I0126 11:17:51.639460 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Jan 26 11:17:51 crc kubenswrapper[4867]: I0126 11:17:51.652899 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:48Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when 
the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:17:51Z is after 2025-08-24T17:21:41Z" Jan 26 11:17:51 crc kubenswrapper[4867]: I0126 11:17:51.712421 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Jan 26 11:17:51 crc kubenswrapper[4867]: I0126 11:17:51.714397 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"08f8529b91df2d7dcbf1fac6018657124b3922dfd4ad68bd7cf315594d6e7f81"} Jan 26 11:17:51 crc kubenswrapper[4867]: I0126 11:17:51.714715 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 11:17:51 crc kubenswrapper[4867]: I0126 11:17:51.716627 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"613d61cfb62d18a99623d3e4f89763c04f15484b764f8bb0c7a5e4457a3505d4"} Jan 26 11:17:51 crc kubenswrapper[4867]: I0126 11:17:51.716713 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"fa8d05ed772e4273e8962df2230bc8d45db5cfed967165716438d58c54b9e24b"} Jan 26 11:17:51 crc kubenswrapper[4867]: I0126 11:17:51.717702 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"2636d652b8395ca4cc6c6d4cb7eda35dc242dbbc5b7c18aced1ca209d2f5cee3"} Jan 26 11:17:51 crc kubenswrapper[4867]: I0126 11:17:51.719196 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"5c572224f36b5a0fc8a86b3e61aa97b96f5bd189d245b31804a11aaa75d04774"} Jan 26 11:17:51 crc kubenswrapper[4867]: I0126 11:17:51.719298 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"5e74fa668fd767b7b49998b1e7658fb11f291354398fa9b9e26290ccb1faf60b"} Jan 26 11:17:51 crc kubenswrapper[4867]: I0126 11:17:51.723067 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:48Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:17:51Z is after 2025-08-24T17:21:41Z" Jan 26 11:17:51 crc kubenswrapper[4867]: I0126 11:17:51.755537 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b6hvh\" (UniqueName: \"kubernetes.io/projected/6ad862fa-4af9-49f7-a629-ebf54a83ca45-kube-api-access-b6hvh\") pod \"node-resolver-wmdmh\" (UID: \"6ad862fa-4af9-49f7-a629-ebf54a83ca45\") " pod="openshift-dns/node-resolver-wmdmh" Jan 26 11:17:51 crc kubenswrapper[4867]: I0126 11:17:51.755721 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/6ad862fa-4af9-49f7-a629-ebf54a83ca45-hosts-file\") pod \"node-resolver-wmdmh\" (UID: \"6ad862fa-4af9-49f7-a629-ebf54a83ca45\") " pod="openshift-dns/node-resolver-wmdmh" Jan 26 11:17:51 crc kubenswrapper[4867]: I0126 11:17:51.771263 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4fbb2033-c5c9-48d1-971f-252b8982e64c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b260f2517962ffaecebf86c2b72c91486b825ca898f907378e8f8cea16d1db92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6843bd212ce4f17d2ae4d53441d56f7be28f8f49c8994458d611159101e193a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2aba77c7c6a2f5450fa60356c3eccf816726f0074d65a1d5805c4278d31157c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e941db67a8021f5eb4eb6e395c2369d052a4cde25d67eb1885768a064185d8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc9e90f1b0e6c09e4595a7e450325550012dc52c6a6fc4a95d14b37c0f939ce7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e22739a3c06bddfda493f25aca23cf47d71c8bde27a6b4bb505592b42350752\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0e22739a3c06bddfda493f25aca23cf47d71c8bde27a6b4bb505592b42350752\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-26T11:17:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T11:17:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://80b44caec3547caebca126742ca337ad1851edc3e9474127528a9e5184b5ec23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://80b44caec3547caebca126742ca337ad1851edc3e9474127528a9e5184b5ec23\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T11:17:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T11:17:32Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://83f7cf77acd26e7af095c4a5432efde85eef6dd5e434141cb5d6abd12770968e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://83f7cf77acd26e7af095c4a5432efde85eef6dd5e434141cb5d6abd12770968e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T11:17:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T11:17:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T11:17:30Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:17:51Z is after 2025-08-24T17:21:41Z" Jan 26 11:17:51 crc kubenswrapper[4867]: I0126 11:17:51.804806 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e36e94ce-bdbb-4b65-b38a-d591d99ec132\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://60d611d38d0a8a4fafcb8c6a4598878531015a3c9a31bee179693c997be0f5ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e7960050fb97fd034b58d207bdcc7aa794be098102ebc9b50f3b011c2079a292\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://f8e9df6d5bbbaa36f6ffe9e36239da17b2a7d5d85edebc9f108ede9bf7b7c0a2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d1617eeb2f19df288b984f761116ae902263d7ccf8406998d0e323fb7d52390\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4d1617eeb2f19df288b984f761116ae902263d7ccf8406998d0e323fb7d52390\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-26T11:17:50Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0126 11:17:44.107498 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0126 11:17:44.109064 1 dynamic_serving_content.go:116] 
\\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1825304579/tls.crt::/tmp/serving-cert-1825304579/tls.key\\\\\\\"\\\\nI0126 11:17:50.303460 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0126 11:17:50.308764 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0126 11:17:50.308805 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0126 11:17:50.308906 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0126 11:17:50.308924 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0126 11:17:50.322907 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0126 11:17:50.322946 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 11:17:50.322953 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 11:17:50.322957 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0126 11:17:50.322960 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0126 11:17:50.322963 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0126 11:17:50.322966 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0126 11:17:50.323261 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0126 11:17:50.330430 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T11:17:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d41fab03911c1d3b84f4c34da0d1c8e82c8f2691450b5f9663e31ce329bf04b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:33Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b69d7e45a6059b1e01f80cc4efb536069c91cae05c8912d18fd48f25c6ebf51e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b69d7e45a6059b1e01f80cc4efb536069c91cae05c8912d18fd48f25c6ebf51e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T11:17:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T11:17:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\
\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T11:17:30Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:17:51Z is after 2025-08-24T17:21:41Z" Jan 26 11:17:51 crc kubenswrapper[4867]: I0126 11:17:51.845798 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0d7b1047-65d1-4695-b5a5-95ea6a8c3b54\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://237a29b411629c3650250f62fdec7d2c5412763d6cabb2ee0fe8e5e19e320e15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"l
astState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ef0db1149c70e41b7f44d00d43f883d14fcf635d6f34462dd37c8a550a15a4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://45d3dc673a8d20531ae5077a1196a368cac64f6afd15c4eb8f3910ee9bf51317\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir
\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc2ddda5781094e59bb54ddf724fa4f948a047b17b01f0185ef651d90a66f36f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T11:17:30Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:17:51Z is after 2025-08-24T17:21:41Z" Jan 26 11:17:51 crc kubenswrapper[4867]: I0126 11:17:51.856363 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b6hvh\" (UniqueName: \"kubernetes.io/projected/6ad862fa-4af9-49f7-a629-ebf54a83ca45-kube-api-access-b6hvh\") pod \"node-resolver-wmdmh\" (UID: \"6ad862fa-4af9-49f7-a629-ebf54a83ca45\") " 
pod="openshift-dns/node-resolver-wmdmh" Jan 26 11:17:51 crc kubenswrapper[4867]: I0126 11:17:51.856479 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/6ad862fa-4af9-49f7-a629-ebf54a83ca45-hosts-file\") pod \"node-resolver-wmdmh\" (UID: \"6ad862fa-4af9-49f7-a629-ebf54a83ca45\") " pod="openshift-dns/node-resolver-wmdmh" Jan 26 11:17:51 crc kubenswrapper[4867]: I0126 11:17:51.856594 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/6ad862fa-4af9-49f7-a629-ebf54a83ca45-hosts-file\") pod \"node-resolver-wmdmh\" (UID: \"6ad862fa-4af9-49f7-a629-ebf54a83ca45\") " pod="openshift-dns/node-resolver-wmdmh" Jan 26 11:17:51 crc kubenswrapper[4867]: I0126 11:17:51.870949 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:48Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:17:51Z is after 2025-08-24T17:21:41Z" Jan 26 11:17:51 crc kubenswrapper[4867]: I0126 11:17:51.880047 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b6hvh\" (UniqueName: \"kubernetes.io/projected/6ad862fa-4af9-49f7-a629-ebf54a83ca45-kube-api-access-b6hvh\") pod \"node-resolver-wmdmh\" (UID: \"6ad862fa-4af9-49f7-a629-ebf54a83ca45\") " pod="openshift-dns/node-resolver-wmdmh" 
Jan 26 11:17:51 crc kubenswrapper[4867]: I0126 11:17:51.898844 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4fbb2033-c5c9-48d1-971f-252b8982e64c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b260f2517962ffaecebf86c2b72c91486b825ca898f907378e8f8cea16d1db92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\
\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6843bd212ce4f17d2ae4d53441d56f7be28f8f49c8994458d611159101e193a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2aba77c7c6a2f5450fa60356c3eccf816726f0074d65a1d5805c4278d31157c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e941db67a8021f5eb4eb6e395c2369d052a4cde25d67eb1885768a064185d8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee
1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc9e90f1b0e6c09e4595a7e450325550012dc52c6a6fc4a95d14b37c0f939ce7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e22739a3c06bddfda493f25aca23cf47d71c8bde27a6b4bb505592b42350752\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"start
ed\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0e22739a3c06bddfda493f25aca23cf47d71c8bde27a6b4bb505592b42350752\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T11:17:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T11:17:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://80b44caec3547caebca126742ca337ad1851edc3e9474127528a9e5184b5ec23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://80b44caec3547caebca126742ca337ad1851edc3e9474127528a9e5184b5ec23\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T11:17:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T11:17:32Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://83f7cf77acd26e7af095c4a5432efde85eef6dd5e434141cb5d6abd12770968e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://83f7cf77acd26e7af095c4a5432efde85eef6dd5e434141cb5d6abd12770968e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T11:17:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T11:17:33Z\\\"}},\\\"volumeM
ounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T11:17:30Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:17:51Z is after 2025-08-24T17:21:41Z" Jan 26 11:17:51 crc kubenswrapper[4867]: I0126 11:17:51.916315 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e36e94ce-bdbb-4b65-b38a-d591d99ec132\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://60d611d38d0a8a4fafcb8c6a4598878531015a3c9a31bee179693c997be0f5ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e7960050fb97fd034b58d207bdcc7aa794be098102ebc9b50f3b011c2079a292\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://f8e9df6d5bbbaa36f6ffe9e36239da17b2a7d5d85edebc9f108ede9bf7b7c0a2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://08f8529b91df2d7dcbf1fac6018657124b3922dfd4ad68bd7cf315594d6e7f81\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4d1617eeb2f19df288b984f761116ae902263d7ccf8406998d0e323fb7d52390\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-26T11:17:50Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0126 11:17:44.107498 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0126 11:17:44.109064 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1825304579/tls.crt::/tmp/serving-cert-1825304579/tls.key\\\\\\\"\\\\nI0126 11:17:50.303460 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0126 11:17:50.308764 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0126 11:17:50.308805 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0126 11:17:50.308906 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0126 11:17:50.308924 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0126 11:17:50.322907 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0126 11:17:50.322946 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 11:17:50.322953 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 11:17:50.322957 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0126 11:17:50.322960 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0126 11:17:50.322963 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0126 11:17:50.322966 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0126 11:17:50.323261 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0126 11:17:50.330430 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T11:17:33Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d41fab03911c1d3b84f4c34da0d1c8e82c8f2691450b5f9663e31ce329bf04b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:33Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b69d7e45a6059b1e01f80cc4efb536069c91cae05c8912d18fd48f25c6ebf51e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b69d7e45a6059b1e01f80cc4efb536069c91cae05c8912d18fd48f25c6ebf51e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T11:17:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-26T11:17:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T11:17:30Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:17:51Z is after 2025-08-24T17:21:41Z" Jan 26 11:17:51 crc kubenswrapper[4867]: I0126 11:17:51.933602 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0d7b1047-65d1-4695-b5a5-95ea6a8c3b54\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://237a29b411629c3650250f62fdec7d2c5412763d6cabb2ee0fe8e5e19e320e15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9a
d6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ef0db1149c70e41b7f44d00d43f883d14fcf635d6f34462dd37c8a550a15a4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://45d3dc673a8d20531ae5077a1196a368cac64f6afd15c4eb8f3910ee9bf51317\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true
,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc2ddda5781094e59bb54ddf724fa4f948a047b17b01f0185ef651d90a66f36f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T11:17:30Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:17:51Z is after 2025-08-24T17:21:41Z" Jan 26 11:17:51 crc kubenswrapper[4867]: I0126 11:17:51.950294 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/node-resolver-wmdmh" Jan 26 11:17:51 crc kubenswrapper[4867]: I0126 11:17:51.957807 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 11:17:51 crc kubenswrapper[4867]: I0126 11:17:51.957911 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 11:17:51 crc kubenswrapper[4867]: I0126 11:17:51.957959 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 11:17:51 crc kubenswrapper[4867]: E0126 11:17:51.958097 4867 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 26 11:17:51 crc kubenswrapper[4867]: E0126 11:17:51.958105 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 11:17:53.958055469 +0000 UTC m=+23.656630389 (durationBeforeRetry 2s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 11:17:51 crc kubenswrapper[4867]: E0126 11:17:51.958178 4867 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 26 11:17:51 crc kubenswrapper[4867]: E0126 11:17:51.958205 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-26 11:17:53.958183463 +0000 UTC m=+23.656758443 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 26 11:17:51 crc kubenswrapper[4867]: E0126 11:17:51.958394 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-26 11:17:53.958359907 +0000 UTC m=+23.656934867 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 26 11:17:51 crc kubenswrapper[4867]: I0126 11:17:51.973717 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c572224f36b5a0fc8a86b3e61aa97b96f5bd189d245b31804a11aaa75d04774\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:17:51Z is after 2025-08-24T17:21:41Z" Jan 26 11:17:51 crc kubenswrapper[4867]: I0126 11:17:51.992851 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://613d61cfb62d18a99623d3e4f89763c04f15484b764f8bb0c7a5e4457a3505d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\
":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa8d05ed772e4273e8962df2230bc8d45db5cfed967165716438d58c54b9e24b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:17:51Z is after 2025-08-24T17:21:41Z" Jan 26 11:17:52 crc kubenswrapper[4867]: I0126 
11:17:52.020563 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:48Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:17:52Z is after 2025-08-24T17:21:41Z" Jan 26 11:17:52 crc kubenswrapper[4867]: I0126 11:17:52.035062 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:48Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:17:52Z is after 2025-08-24T17:21:41Z" Jan 26 11:17:52 crc kubenswrapper[4867]: I0126 11:17:52.058208 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:48Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:17:52Z is after 2025-08-24T17:21:41Z" Jan 26 11:17:52 crc kubenswrapper[4867]: I0126 11:17:52.058428 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 11:17:52 crc kubenswrapper[4867]: E0126 11:17:52.058792 4867 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 26 11:17:52 crc kubenswrapper[4867]: E0126 11:17:52.061541 4867 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 26 11:17:52 
crc kubenswrapper[4867]: E0126 11:17:52.061559 4867 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 11:17:52 crc kubenswrapper[4867]: E0126 11:17:52.061630 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-26 11:17:54.061607189 +0000 UTC m=+23.760182109 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 11:17:52 crc kubenswrapper[4867]: I0126 11:17:52.077840 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:48Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:17:52Z is after 2025-08-24T17:21:41Z" Jan 26 11:17:52 crc kubenswrapper[4867]: I0126 11:17:52.092386 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-wmdmh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6ad862fa-4af9-49f7-a629-ebf54a83ca45\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:51Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:51Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6hvh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T11:17:51Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-wmdmh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:17:52Z is after 2025-08-24T17:21:41Z" Jan 26 11:17:52 crc kubenswrapper[4867]: I0126 11:17:52.162621 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 11:17:52 crc kubenswrapper[4867]: E0126 11:17:52.162896 4867 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object 
"openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 26 11:17:52 crc kubenswrapper[4867]: E0126 11:17:52.162950 4867 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 26 11:17:52 crc kubenswrapper[4867]: E0126 11:17:52.162970 4867 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 11:17:52 crc kubenswrapper[4867]: E0126 11:17:52.163069 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-26 11:17:54.163037693 +0000 UTC m=+23.861612683 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 11:17:52 crc kubenswrapper[4867]: I0126 11:17:52.518213 4867 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-22 19:21:02.845849009 +0000 UTC Jan 26 11:17:52 crc kubenswrapper[4867]: I0126 11:17:52.563866 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 11:17:52 crc kubenswrapper[4867]: I0126 11:17:52.563905 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 11:17:52 crc kubenswrapper[4867]: I0126 11:17:52.563954 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 11:17:52 crc kubenswrapper[4867]: E0126 11:17:52.564059 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 11:17:52 crc kubenswrapper[4867]: E0126 11:17:52.564257 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 11:17:52 crc kubenswrapper[4867]: E0126 11:17:52.564456 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 11:17:52 crc kubenswrapper[4867]: I0126 11:17:52.582860 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-daemon-g6cth"] Jan 26 11:17:52 crc kubenswrapper[4867]: I0126 11:17:52.583317 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-g6cth" Jan 26 11:17:52 crc kubenswrapper[4867]: I0126 11:17:52.588309 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Jan 26 11:17:52 crc kubenswrapper[4867]: I0126 11:17:52.588616 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-p8ngn"] Jan 26 11:17:52 crc kubenswrapper[4867]: I0126 11:17:52.588675 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Jan 26 11:17:52 crc kubenswrapper[4867]: I0126 11:17:52.589370 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-hn8xr"] Jan 26 11:17:52 crc kubenswrapper[4867]: I0126 11:17:52.589601 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-p8ngn" Jan 26 11:17:52 crc kubenswrapper[4867]: I0126 11:17:52.589619 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-hn8xr" Jan 26 11:17:52 crc kubenswrapper[4867]: I0126 11:17:52.590135 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-additional-cni-plugins-9fjlf"] Jan 26 11:17:52 crc kubenswrapper[4867]: I0126 11:17:52.593770 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-9fjlf" Jan 26 11:17:52 crc kubenswrapper[4867]: I0126 11:17:52.594856 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Jan 26 11:17:52 crc kubenswrapper[4867]: I0126 11:17:52.595404 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Jan 26 11:17:52 crc kubenswrapper[4867]: I0126 11:17:52.595566 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Jan 26 11:17:52 crc kubenswrapper[4867]: I0126 11:17:52.595638 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Jan 26 11:17:52 crc kubenswrapper[4867]: I0126 11:17:52.595701 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Jan 26 11:17:52 crc kubenswrapper[4867]: I0126 11:17:52.595562 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Jan 26 11:17:52 crc kubenswrapper[4867]: I0126 11:17:52.595920 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Jan 26 11:17:52 crc kubenswrapper[4867]: I0126 11:17:52.596124 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Jan 26 11:17:52 crc kubenswrapper[4867]: I0126 11:17:52.596142 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Jan 26 11:17:52 crc kubenswrapper[4867]: I0126 11:17:52.596338 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Jan 26 11:17:52 crc kubenswrapper[4867]: I0126 11:17:52.596480 4867 reflector.go:368] 
Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Jan 26 11:17:52 crc kubenswrapper[4867]: I0126 11:17:52.596626 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Jan 26 11:17:52 crc kubenswrapper[4867]: I0126 11:17:52.596645 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Jan 26 11:17:52 crc kubenswrapper[4867]: I0126 11:17:52.597885 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Jan 26 11:17:52 crc kubenswrapper[4867]: I0126 11:17:52.601159 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Jan 26 11:17:52 crc kubenswrapper[4867]: I0126 11:17:52.601302 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Jan 26 11:17:52 crc kubenswrapper[4867]: I0126 11:17:52.601453 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Jan 26 11:17:52 crc kubenswrapper[4867]: I0126 11:17:52.617027 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-wmdmh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6ad862fa-4af9-49f7-a629-ebf54a83ca45\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:51Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:51Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6hvh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T11:17:51Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-wmdmh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:17:52Z is after 2025-08-24T17:21:41Z" Jan 26 11:17:52 crc kubenswrapper[4867]: I0126 11:17:52.632553 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:48Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:17:52Z is after 2025-08-24T17:21:41Z" Jan 26 11:17:52 crc kubenswrapper[4867]: I0126 11:17:52.645905 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:48Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:17:52Z is after 2025-08-24T17:21:41Z" Jan 26 11:17:52 crc kubenswrapper[4867]: I0126 11:17:52.661948 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:48Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:17:52Z is after 2025-08-24T17:21:41Z" Jan 26 11:17:52 crc kubenswrapper[4867]: I0126 11:17:52.666903 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/dc37e5d1-ba44-4a54-ac36-ab7cdef17212-cnibin\") pod \"multus-hn8xr\" (UID: \"dc37e5d1-ba44-4a54-ac36-ab7cdef17212\") " pod="openshift-multus/multus-hn8xr" Jan 26 11:17:52 crc kubenswrapper[4867]: I0126 11:17:52.666952 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/dc37e5d1-ba44-4a54-ac36-ab7cdef17212-host-run-multus-certs\") pod \"multus-hn8xr\" (UID: \"dc37e5d1-ba44-4a54-ac36-ab7cdef17212\") " pod="openshift-multus/multus-hn8xr" Jan 26 11:17:52 crc kubenswrapper[4867]: I0126 11:17:52.666990 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"hostroot\" (UniqueName: \"kubernetes.io/host-path/dc37e5d1-ba44-4a54-ac36-ab7cdef17212-hostroot\") pod \"multus-hn8xr\" (UID: \"dc37e5d1-ba44-4a54-ac36-ab7cdef17212\") " pod="openshift-multus/multus-hn8xr" Jan 26 11:17:52 crc kubenswrapper[4867]: I0126 11:17:52.667009 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/d0cb57c7-fd32-41c2-b873-a3f017b9f1b1-os-release\") pod \"multus-additional-cni-plugins-9fjlf\" (UID: \"d0cb57c7-fd32-41c2-b873-a3f017b9f1b1\") " pod="openshift-multus/multus-additional-cni-plugins-9fjlf" Jan 26 11:17:52 crc kubenswrapper[4867]: I0126 11:17:52.667027 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/d0cb57c7-fd32-41c2-b873-a3f017b9f1b1-cni-binary-copy\") pod \"multus-additional-cni-plugins-9fjlf\" (UID: \"d0cb57c7-fd32-41c2-b873-a3f017b9f1b1\") " pod="openshift-multus/multus-additional-cni-plugins-9fjlf" Jan 26 11:17:52 crc kubenswrapper[4867]: I0126 11:17:52.667042 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/dc37e5d1-ba44-4a54-ac36-ab7cdef17212-host-run-netns\") pod \"multus-hn8xr\" (UID: \"dc37e5d1-ba44-4a54-ac36-ab7cdef17212\") " pod="openshift-multus/multus-hn8xr" Jan 26 11:17:52 crc kubenswrapper[4867]: I0126 11:17:52.667146 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/4a3be637-cf04-4c55-bf72-67fdad83cc44-host-kubelet\") pod \"ovnkube-node-p8ngn\" (UID: \"4a3be637-cf04-4c55-bf72-67fdad83cc44\") " pod="openshift-ovn-kubernetes/ovnkube-node-p8ngn" Jan 26 11:17:52 crc kubenswrapper[4867]: I0126 11:17:52.667213 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/4a3be637-cf04-4c55-bf72-67fdad83cc44-etc-openvswitch\") pod \"ovnkube-node-p8ngn\" (UID: \"4a3be637-cf04-4c55-bf72-67fdad83cc44\") " pod="openshift-ovn-kubernetes/ovnkube-node-p8ngn" Jan 26 11:17:52 crc kubenswrapper[4867]: I0126 11:17:52.667263 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/4a3be637-cf04-4c55-bf72-67fdad83cc44-host-cni-bin\") pod \"ovnkube-node-p8ngn\" (UID: \"4a3be637-cf04-4c55-bf72-67fdad83cc44\") " pod="openshift-ovn-kubernetes/ovnkube-node-p8ngn" Jan 26 11:17:52 crc kubenswrapper[4867]: I0126 11:17:52.667333 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/d0cb57c7-fd32-41c2-b873-a3f017b9f1b1-system-cni-dir\") pod \"multus-additional-cni-plugins-9fjlf\" (UID: \"d0cb57c7-fd32-41c2-b873-a3f017b9f1b1\") " pod="openshift-multus/multus-additional-cni-plugins-9fjlf" Jan 26 11:17:52 crc kubenswrapper[4867]: I0126 11:17:52.667365 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rhrdq\" (UniqueName: \"kubernetes.io/projected/d0cb57c7-fd32-41c2-b873-a3f017b9f1b1-kube-api-access-rhrdq\") pod \"multus-additional-cni-plugins-9fjlf\" (UID: \"d0cb57c7-fd32-41c2-b873-a3f017b9f1b1\") " pod="openshift-multus/multus-additional-cni-plugins-9fjlf" Jan 26 11:17:52 crc kubenswrapper[4867]: I0126 11:17:52.667391 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/dc37e5d1-ba44-4a54-ac36-ab7cdef17212-cni-binary-copy\") pod \"multus-hn8xr\" (UID: \"dc37e5d1-ba44-4a54-ac36-ab7cdef17212\") " pod="openshift-multus/multus-hn8xr" Jan 26 11:17:52 crc kubenswrapper[4867]: I0126 11:17:52.667414 4867 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/4a3be637-cf04-4c55-bf72-67fdad83cc44-var-lib-openvswitch\") pod \"ovnkube-node-p8ngn\" (UID: \"4a3be637-cf04-4c55-bf72-67fdad83cc44\") " pod="openshift-ovn-kubernetes/ovnkube-node-p8ngn" Jan 26 11:17:52 crc kubenswrapper[4867]: I0126 11:17:52.667436 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/4a3be637-cf04-4c55-bf72-67fdad83cc44-node-log\") pod \"ovnkube-node-p8ngn\" (UID: \"4a3be637-cf04-4c55-bf72-67fdad83cc44\") " pod="openshift-ovn-kubernetes/ovnkube-node-p8ngn" Jan 26 11:17:52 crc kubenswrapper[4867]: I0126 11:17:52.667460 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/4a3be637-cf04-4c55-bf72-67fdad83cc44-ovnkube-script-lib\") pod \"ovnkube-node-p8ngn\" (UID: \"4a3be637-cf04-4c55-bf72-67fdad83cc44\") " pod="openshift-ovn-kubernetes/ovnkube-node-p8ngn" Jan 26 11:17:52 crc kubenswrapper[4867]: I0126 11:17:52.667484 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/115cad9f-057f-4e63-b408-8fa7a358a191-mcd-auth-proxy-config\") pod \"machine-config-daemon-g6cth\" (UID: \"115cad9f-057f-4e63-b408-8fa7a358a191\") " pod="openshift-machine-config-operator/machine-config-daemon-g6cth" Jan 26 11:17:52 crc kubenswrapper[4867]: I0126 11:17:52.667501 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/4a3be637-cf04-4c55-bf72-67fdad83cc44-run-ovn\") pod \"ovnkube-node-p8ngn\" (UID: \"4a3be637-cf04-4c55-bf72-67fdad83cc44\") " pod="openshift-ovn-kubernetes/ovnkube-node-p8ngn" Jan 26 11:17:52 crc kubenswrapper[4867]: 
I0126 11:17:52.667536 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/dc37e5d1-ba44-4a54-ac36-ab7cdef17212-host-var-lib-kubelet\") pod \"multus-hn8xr\" (UID: \"dc37e5d1-ba44-4a54-ac36-ab7cdef17212\") " pod="openshift-multus/multus-hn8xr" Jan 26 11:17:52 crc kubenswrapper[4867]: I0126 11:17:52.667560 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/dc37e5d1-ba44-4a54-ac36-ab7cdef17212-host-var-lib-cni-bin\") pod \"multus-hn8xr\" (UID: \"dc37e5d1-ba44-4a54-ac36-ab7cdef17212\") " pod="openshift-multus/multus-hn8xr" Jan 26 11:17:52 crc kubenswrapper[4867]: I0126 11:17:52.667580 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/4a3be637-cf04-4c55-bf72-67fdad83cc44-host-run-ovn-kubernetes\") pod \"ovnkube-node-p8ngn\" (UID: \"4a3be637-cf04-4c55-bf72-67fdad83cc44\") " pod="openshift-ovn-kubernetes/ovnkube-node-p8ngn" Jan 26 11:17:52 crc kubenswrapper[4867]: I0126 11:17:52.667597 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/d0cb57c7-fd32-41c2-b873-a3f017b9f1b1-tuning-conf-dir\") pod \"multus-additional-cni-plugins-9fjlf\" (UID: \"d0cb57c7-fd32-41c2-b873-a3f017b9f1b1\") " pod="openshift-multus/multus-additional-cni-plugins-9fjlf" Jan 26 11:17:52 crc kubenswrapper[4867]: I0126 11:17:52.667614 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/dc37e5d1-ba44-4a54-ac36-ab7cdef17212-multus-cni-dir\") pod \"multus-hn8xr\" (UID: \"dc37e5d1-ba44-4a54-ac36-ab7cdef17212\") " pod="openshift-multus/multus-hn8xr" Jan 26 11:17:52 
crc kubenswrapper[4867]: I0126 11:17:52.667629 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/dc37e5d1-ba44-4a54-ac36-ab7cdef17212-multus-socket-dir-parent\") pod \"multus-hn8xr\" (UID: \"dc37e5d1-ba44-4a54-ac36-ab7cdef17212\") " pod="openshift-multus/multus-hn8xr" Jan 26 11:17:52 crc kubenswrapper[4867]: I0126 11:17:52.667648 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/dc37e5d1-ba44-4a54-ac36-ab7cdef17212-host-run-k8s-cni-cncf-io\") pod \"multus-hn8xr\" (UID: \"dc37e5d1-ba44-4a54-ac36-ab7cdef17212\") " pod="openshift-multus/multus-hn8xr" Jan 26 11:17:52 crc kubenswrapper[4867]: I0126 11:17:52.667669 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/dc37e5d1-ba44-4a54-ac36-ab7cdef17212-system-cni-dir\") pod \"multus-hn8xr\" (UID: \"dc37e5d1-ba44-4a54-ac36-ab7cdef17212\") " pod="openshift-multus/multus-hn8xr" Jan 26 11:17:52 crc kubenswrapper[4867]: I0126 11:17:52.667687 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/115cad9f-057f-4e63-b408-8fa7a358a191-proxy-tls\") pod \"machine-config-daemon-g6cth\" (UID: \"115cad9f-057f-4e63-b408-8fa7a358a191\") " pod="openshift-machine-config-operator/machine-config-daemon-g6cth" Jan 26 11:17:52 crc kubenswrapper[4867]: I0126 11:17:52.667703 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/d0cb57c7-fd32-41c2-b873-a3f017b9f1b1-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-9fjlf\" (UID: \"d0cb57c7-fd32-41c2-b873-a3f017b9f1b1\") " 
pod="openshift-multus/multus-additional-cni-plugins-9fjlf" Jan 26 11:17:52 crc kubenswrapper[4867]: I0126 11:17:52.667721 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/115cad9f-057f-4e63-b408-8fa7a358a191-rootfs\") pod \"machine-config-daemon-g6cth\" (UID: \"115cad9f-057f-4e63-b408-8fa7a358a191\") " pod="openshift-machine-config-operator/machine-config-daemon-g6cth" Jan 26 11:17:52 crc kubenswrapper[4867]: I0126 11:17:52.667741 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/4a3be637-cf04-4c55-bf72-67fdad83cc44-env-overrides\") pod \"ovnkube-node-p8ngn\" (UID: \"4a3be637-cf04-4c55-bf72-67fdad83cc44\") " pod="openshift-ovn-kubernetes/ovnkube-node-p8ngn" Jan 26 11:17:52 crc kubenswrapper[4867]: I0126 11:17:52.667826 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/dc37e5d1-ba44-4a54-ac36-ab7cdef17212-os-release\") pod \"multus-hn8xr\" (UID: \"dc37e5d1-ba44-4a54-ac36-ab7cdef17212\") " pod="openshift-multus/multus-hn8xr" Jan 26 11:17:52 crc kubenswrapper[4867]: I0126 11:17:52.667861 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/dc37e5d1-ba44-4a54-ac36-ab7cdef17212-multus-conf-dir\") pod \"multus-hn8xr\" (UID: \"dc37e5d1-ba44-4a54-ac36-ab7cdef17212\") " pod="openshift-multus/multus-hn8xr" Jan 26 11:17:52 crc kubenswrapper[4867]: I0126 11:17:52.667882 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/4a3be637-cf04-4c55-bf72-67fdad83cc44-run-systemd\") pod \"ovnkube-node-p8ngn\" (UID: \"4a3be637-cf04-4c55-bf72-67fdad83cc44\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-p8ngn" Jan 26 11:17:52 crc kubenswrapper[4867]: I0126 11:17:52.667974 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/4a3be637-cf04-4c55-bf72-67fdad83cc44-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-p8ngn\" (UID: \"4a3be637-cf04-4c55-bf72-67fdad83cc44\") " pod="openshift-ovn-kubernetes/ovnkube-node-p8ngn" Jan 26 11:17:52 crc kubenswrapper[4867]: I0126 11:17:52.668040 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/d0cb57c7-fd32-41c2-b873-a3f017b9f1b1-cnibin\") pod \"multus-additional-cni-plugins-9fjlf\" (UID: \"d0cb57c7-fd32-41c2-b873-a3f017b9f1b1\") " pod="openshift-multus/multus-additional-cni-plugins-9fjlf" Jan 26 11:17:52 crc kubenswrapper[4867]: I0126 11:17:52.668083 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/dc37e5d1-ba44-4a54-ac36-ab7cdef17212-etc-kubernetes\") pod \"multus-hn8xr\" (UID: \"dc37e5d1-ba44-4a54-ac36-ab7cdef17212\") " pod="openshift-multus/multus-hn8xr" Jan 26 11:17:52 crc kubenswrapper[4867]: I0126 11:17:52.668126 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/4a3be637-cf04-4c55-bf72-67fdad83cc44-host-slash\") pod \"ovnkube-node-p8ngn\" (UID: \"4a3be637-cf04-4c55-bf72-67fdad83cc44\") " pod="openshift-ovn-kubernetes/ovnkube-node-p8ngn" Jan 26 11:17:52 crc kubenswrapper[4867]: I0126 11:17:52.668163 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/4a3be637-cf04-4c55-bf72-67fdad83cc44-host-run-netns\") pod \"ovnkube-node-p8ngn\" 
(UID: \"4a3be637-cf04-4c55-bf72-67fdad83cc44\") " pod="openshift-ovn-kubernetes/ovnkube-node-p8ngn" Jan 26 11:17:52 crc kubenswrapper[4867]: I0126 11:17:52.668189 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xrjnj\" (UniqueName: \"kubernetes.io/projected/dc37e5d1-ba44-4a54-ac36-ab7cdef17212-kube-api-access-xrjnj\") pod \"multus-hn8xr\" (UID: \"dc37e5d1-ba44-4a54-ac36-ab7cdef17212\") " pod="openshift-multus/multus-hn8xr" Jan 26 11:17:52 crc kubenswrapper[4867]: I0126 11:17:52.668245 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4wqzf\" (UniqueName: \"kubernetes.io/projected/115cad9f-057f-4e63-b408-8fa7a358a191-kube-api-access-4wqzf\") pod \"machine-config-daemon-g6cth\" (UID: \"115cad9f-057f-4e63-b408-8fa7a358a191\") " pod="openshift-machine-config-operator/machine-config-daemon-g6cth" Jan 26 11:17:52 crc kubenswrapper[4867]: I0126 11:17:52.668274 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/4a3be637-cf04-4c55-bf72-67fdad83cc44-run-openvswitch\") pod \"ovnkube-node-p8ngn\" (UID: \"4a3be637-cf04-4c55-bf72-67fdad83cc44\") " pod="openshift-ovn-kubernetes/ovnkube-node-p8ngn" Jan 26 11:17:52 crc kubenswrapper[4867]: I0126 11:17:52.668299 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/4a3be637-cf04-4c55-bf72-67fdad83cc44-log-socket\") pod \"ovnkube-node-p8ngn\" (UID: \"4a3be637-cf04-4c55-bf72-67fdad83cc44\") " pod="openshift-ovn-kubernetes/ovnkube-node-p8ngn" Jan 26 11:17:52 crc kubenswrapper[4867]: I0126 11:17:52.668327 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: 
\"kubernetes.io/secret/4a3be637-cf04-4c55-bf72-67fdad83cc44-ovn-node-metrics-cert\") pod \"ovnkube-node-p8ngn\" (UID: \"4a3be637-cf04-4c55-bf72-67fdad83cc44\") " pod="openshift-ovn-kubernetes/ovnkube-node-p8ngn" Jan 26 11:17:52 crc kubenswrapper[4867]: I0126 11:17:52.668352 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/dc37e5d1-ba44-4a54-ac36-ab7cdef17212-host-var-lib-cni-multus\") pod \"multus-hn8xr\" (UID: \"dc37e5d1-ba44-4a54-ac36-ab7cdef17212\") " pod="openshift-multus/multus-hn8xr" Jan 26 11:17:52 crc kubenswrapper[4867]: I0126 11:17:52.668378 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/4a3be637-cf04-4c55-bf72-67fdad83cc44-host-cni-netd\") pod \"ovnkube-node-p8ngn\" (UID: \"4a3be637-cf04-4c55-bf72-67fdad83cc44\") " pod="openshift-ovn-kubernetes/ovnkube-node-p8ngn" Jan 26 11:17:52 crc kubenswrapper[4867]: I0126 11:17:52.668421 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cn6f7\" (UniqueName: \"kubernetes.io/projected/4a3be637-cf04-4c55-bf72-67fdad83cc44-kube-api-access-cn6f7\") pod \"ovnkube-node-p8ngn\" (UID: \"4a3be637-cf04-4c55-bf72-67fdad83cc44\") " pod="openshift-ovn-kubernetes/ovnkube-node-p8ngn" Jan 26 11:17:52 crc kubenswrapper[4867]: I0126 11:17:52.668462 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/dc37e5d1-ba44-4a54-ac36-ab7cdef17212-multus-daemon-config\") pod \"multus-hn8xr\" (UID: \"dc37e5d1-ba44-4a54-ac36-ab7cdef17212\") " pod="openshift-multus/multus-hn8xr" Jan 26 11:17:52 crc kubenswrapper[4867]: I0126 11:17:52.668485 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"systemd-units\" (UniqueName: \"kubernetes.io/host-path/4a3be637-cf04-4c55-bf72-67fdad83cc44-systemd-units\") pod \"ovnkube-node-p8ngn\" (UID: \"4a3be637-cf04-4c55-bf72-67fdad83cc44\") " pod="openshift-ovn-kubernetes/ovnkube-node-p8ngn" Jan 26 11:17:52 crc kubenswrapper[4867]: I0126 11:17:52.668506 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/4a3be637-cf04-4c55-bf72-67fdad83cc44-ovnkube-config\") pod \"ovnkube-node-p8ngn\" (UID: \"4a3be637-cf04-4c55-bf72-67fdad83cc44\") " pod="openshift-ovn-kubernetes/ovnkube-node-p8ngn" Jan 26 11:17:52 crc kubenswrapper[4867]: I0126 11:17:52.677550 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:48Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:17:52Z is after 2025-08-24T17:21:41Z" Jan 26 11:17:52 crc kubenswrapper[4867]: I0126 11:17:52.689334 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0d7b1047-65d1-4695-b5a5-95ea6a8c3b54\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://237a29b411629c3650250f62fdec7d2c5412763d6cabb2ee0fe8e5e19e320e15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ef0db1149c70e41b7f44d00d43f883d14fcf635d6f34462dd37c8a550a15a4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://45d3dc673a8d20531ae5077a1196a368cac64f6afd15c4eb8f3910ee9bf51317\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc2ddda5781094e59bb54ddf724fa4f948a047b17b01f0185ef651d90a66f36f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-26T11:17:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T11:17:30Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:17:52Z is after 2025-08-24T17:21:41Z" Jan 26 11:17:52 crc kubenswrapper[4867]: I0126 11:17:52.705364 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c572224f36b5a0fc8a86b3e61aa97b96f5bd189d245b31804a11aaa75d04774\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-26T11:17:52Z is after 2025-08-24T17:21:41Z" Jan 26 11:17:52 crc kubenswrapper[4867]: I0126 11:17:52.719624 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://613d61cfb62d18a99623d3e4f89763c04f15484b764f8bb0c7a5e4457a3505d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://fa8d05ed772e4273e8962df2230bc8d45db5cfed967165716438d58c54b9e24b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:17:52Z is after 2025-08-24T17:21:41Z" Jan 26 11:17:52 crc kubenswrapper[4867]: I0126 11:17:52.723424 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-wmdmh" event={"ID":"6ad862fa-4af9-49f7-a629-ebf54a83ca45","Type":"ContainerStarted","Data":"726513fd6aefd76b58a4c8cf08fa4588aadcdb02e8dbe3bb4f23004f11eb29dc"} Jan 26 11:17:52 crc kubenswrapper[4867]: I0126 11:17:52.723497 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-wmdmh" 
event={"ID":"6ad862fa-4af9-49f7-a629-ebf54a83ca45","Type":"ContainerStarted","Data":"a0901de638708220b151000410e645e7695286d39b83d70f81ba3794a3daecff"} Jan 26 11:17:52 crc kubenswrapper[4867]: I0126 11:17:52.730863 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-g6cth" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"115cad9f-057f-4e63-b408-8fa7a358a191\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:52Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:52Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4wqzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4wqzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T11:17:52Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-g6cth\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:17:52Z is after 2025-08-24T17:21:41Z" Jan 26 11:17:52 crc kubenswrapper[4867]: I0126 11:17:52.754794 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4fbb2033-c5c9-48d1-971f-252b8982e64c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b260f2517962ffaecebf86c2b72c91486b825ca898f907378e8f8cea16d1db92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-p
od-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6843bd212ce4f17d2ae4d53441d56f7be28f8f49c8994458d611159101e193a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2aba77c7c6a2f5450fa60356c3eccf816726f0074d65a1d5805c4278d31157c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e941db67a8021f5eb4eb6e395c2369d052a4cde25d67eb1885768a064185d8a\\\",\\\"image\\\":\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc9e90f1b0e6c09e4595a7e450325550012dc52c6a6fc4a95d14b37c0f939ce7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e22739a3c06bddfda493f25aca23cf47d71c8bde27a6b4bb505592b42350752\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-d
ev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0e22739a3c06bddfda493f25aca23cf47d71c8bde27a6b4bb505592b42350752\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T11:17:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T11:17:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://80b44caec3547caebca126742ca337ad1851edc3e9474127528a9e5184b5ec23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://80b44caec3547caebca126742ca337ad1851edc3e9474127528a9e5184b5ec23\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T11:17:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T11:17:32Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://83f7cf77acd26e7af095c4a5432efde85eef6dd5e434141cb5d6abd12770968e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://83f7cf77acd26e7af095c4a5432efde85e
ef6dd5e434141cb5d6abd12770968e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T11:17:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T11:17:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T11:17:30Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:17:52Z is after 2025-08-24T17:21:41Z" Jan 26 11:17:52 crc kubenswrapper[4867]: I0126 11:17:52.767769 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e36e94ce-bdbb-4b65-b38a-d591d99ec132\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://60d611d38d0a8a4fafcb8c6a4598878531015a3c9a31bee179693c997be0f5ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e7960050fb97fd034b58d207bdcc7aa794be098102ebc9b50f3b011c2079a292\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8e9df6d5bbbaa36f6ffe9e36239da17b2a7d5d85edebc9f108ede9bf7b7c0a2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://08f8529b91df2d7dcbf1fac6018657124b3922dfd4ad68bd7cf315594d6e7f81\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4d1617eeb2f19df288b984f761116ae902263d7ccf8406998d0e323fb7d52390\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-26T11:17:50Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0126 11:17:44.107498 1 builder.go:304] check-endpoints version 
4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0126 11:17:44.109064 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1825304579/tls.crt::/tmp/serving-cert-1825304579/tls.key\\\\\\\"\\\\nI0126 11:17:50.303460 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0126 11:17:50.308764 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0126 11:17:50.308805 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0126 11:17:50.308906 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0126 11:17:50.308924 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0126 11:17:50.322907 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0126 11:17:50.322946 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 11:17:50.322953 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 11:17:50.322957 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0126 11:17:50.322960 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0126 11:17:50.322963 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0126 11:17:50.322966 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0126 11:17:50.323261 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0126 11:17:50.330430 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T11:17:33Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d41fab03911c1d3b84f4c34da0d1c8e82c8f2691450b5f9663e31ce329bf04b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:33Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b69d7e45a6059b1e01f80cc4efb536069c91cae05c8912d18fd48f25c6ebf51e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b69d7e45a6059b1e01f80cc4efb536069c91cae05c8912d18fd48f25c6ebf51e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T11:17:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-26T11:17:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T11:17:30Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:17:52Z is after 2025-08-24T17:21:41Z" Jan 26 11:17:52 crc kubenswrapper[4867]: I0126 11:17:52.768986 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/d0cb57c7-fd32-41c2-b873-a3f017b9f1b1-system-cni-dir\") pod \"multus-additional-cni-plugins-9fjlf\" (UID: \"d0cb57c7-fd32-41c2-b873-a3f017b9f1b1\") " pod="openshift-multus/multus-additional-cni-plugins-9fjlf" Jan 26 11:17:52 crc kubenswrapper[4867]: I0126 11:17:52.769017 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/4a3be637-cf04-4c55-bf72-67fdad83cc44-var-lib-openvswitch\") pod \"ovnkube-node-p8ngn\" (UID: \"4a3be637-cf04-4c55-bf72-67fdad83cc44\") " pod="openshift-ovn-kubernetes/ovnkube-node-p8ngn" Jan 26 11:17:52 crc kubenswrapper[4867]: I0126 11:17:52.769035 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/4a3be637-cf04-4c55-bf72-67fdad83cc44-node-log\") pod \"ovnkube-node-p8ngn\" (UID: \"4a3be637-cf04-4c55-bf72-67fdad83cc44\") " pod="openshift-ovn-kubernetes/ovnkube-node-p8ngn" Jan 26 11:17:52 crc kubenswrapper[4867]: I0126 11:17:52.769054 4867 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/4a3be637-cf04-4c55-bf72-67fdad83cc44-ovnkube-script-lib\") pod \"ovnkube-node-p8ngn\" (UID: \"4a3be637-cf04-4c55-bf72-67fdad83cc44\") " pod="openshift-ovn-kubernetes/ovnkube-node-p8ngn" Jan 26 11:17:52 crc kubenswrapper[4867]: I0126 11:17:52.769071 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rhrdq\" (UniqueName: \"kubernetes.io/projected/d0cb57c7-fd32-41c2-b873-a3f017b9f1b1-kube-api-access-rhrdq\") pod \"multus-additional-cni-plugins-9fjlf\" (UID: \"d0cb57c7-fd32-41c2-b873-a3f017b9f1b1\") " pod="openshift-multus/multus-additional-cni-plugins-9fjlf" Jan 26 11:17:52 crc kubenswrapper[4867]: I0126 11:17:52.769102 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/dc37e5d1-ba44-4a54-ac36-ab7cdef17212-cni-binary-copy\") pod \"multus-hn8xr\" (UID: \"dc37e5d1-ba44-4a54-ac36-ab7cdef17212\") " pod="openshift-multus/multus-hn8xr" Jan 26 11:17:52 crc kubenswrapper[4867]: I0126 11:17:52.769131 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/dc37e5d1-ba44-4a54-ac36-ab7cdef17212-host-var-lib-kubelet\") pod \"multus-hn8xr\" (UID: \"dc37e5d1-ba44-4a54-ac36-ab7cdef17212\") " pod="openshift-multus/multus-hn8xr" Jan 26 11:17:52 crc kubenswrapper[4867]: I0126 11:17:52.769151 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/115cad9f-057f-4e63-b408-8fa7a358a191-mcd-auth-proxy-config\") pod \"machine-config-daemon-g6cth\" (UID: \"115cad9f-057f-4e63-b408-8fa7a358a191\") " pod="openshift-machine-config-operator/machine-config-daemon-g6cth" Jan 26 11:17:52 crc kubenswrapper[4867]: I0126 11:17:52.769165 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/4a3be637-cf04-4c55-bf72-67fdad83cc44-run-ovn\") pod \"ovnkube-node-p8ngn\" (UID: \"4a3be637-cf04-4c55-bf72-67fdad83cc44\") " pod="openshift-ovn-kubernetes/ovnkube-node-p8ngn" Jan 26 11:17:52 crc kubenswrapper[4867]: I0126 11:17:52.769181 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/dc37e5d1-ba44-4a54-ac36-ab7cdef17212-host-var-lib-cni-bin\") pod \"multus-hn8xr\" (UID: \"dc37e5d1-ba44-4a54-ac36-ab7cdef17212\") " pod="openshift-multus/multus-hn8xr" Jan 26 11:17:52 crc kubenswrapper[4867]: I0126 11:17:52.769196 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/4a3be637-cf04-4c55-bf72-67fdad83cc44-host-run-ovn-kubernetes\") pod \"ovnkube-node-p8ngn\" (UID: \"4a3be637-cf04-4c55-bf72-67fdad83cc44\") " pod="openshift-ovn-kubernetes/ovnkube-node-p8ngn" Jan 26 11:17:52 crc kubenswrapper[4867]: I0126 11:17:52.769245 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/dc37e5d1-ba44-4a54-ac36-ab7cdef17212-multus-cni-dir\") pod \"multus-hn8xr\" (UID: \"dc37e5d1-ba44-4a54-ac36-ab7cdef17212\") " pod="openshift-multus/multus-hn8xr" Jan 26 11:17:52 crc kubenswrapper[4867]: I0126 11:17:52.769266 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/dc37e5d1-ba44-4a54-ac36-ab7cdef17212-multus-socket-dir-parent\") pod \"multus-hn8xr\" (UID: \"dc37e5d1-ba44-4a54-ac36-ab7cdef17212\") " pod="openshift-multus/multus-hn8xr" Jan 26 11:17:52 crc kubenswrapper[4867]: I0126 11:17:52.769286 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: 
\"kubernetes.io/host-path/dc37e5d1-ba44-4a54-ac36-ab7cdef17212-host-run-k8s-cni-cncf-io\") pod \"multus-hn8xr\" (UID: \"dc37e5d1-ba44-4a54-ac36-ab7cdef17212\") " pod="openshift-multus/multus-hn8xr" Jan 26 11:17:52 crc kubenswrapper[4867]: I0126 11:17:52.769304 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/d0cb57c7-fd32-41c2-b873-a3f017b9f1b1-tuning-conf-dir\") pod \"multus-additional-cni-plugins-9fjlf\" (UID: \"d0cb57c7-fd32-41c2-b873-a3f017b9f1b1\") " pod="openshift-multus/multus-additional-cni-plugins-9fjlf" Jan 26 11:17:52 crc kubenswrapper[4867]: I0126 11:17:52.769337 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/dc37e5d1-ba44-4a54-ac36-ab7cdef17212-system-cni-dir\") pod \"multus-hn8xr\" (UID: \"dc37e5d1-ba44-4a54-ac36-ab7cdef17212\") " pod="openshift-multus/multus-hn8xr" Jan 26 11:17:52 crc kubenswrapper[4867]: I0126 11:17:52.769357 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/115cad9f-057f-4e63-b408-8fa7a358a191-proxy-tls\") pod \"machine-config-daemon-g6cth\" (UID: \"115cad9f-057f-4e63-b408-8fa7a358a191\") " pod="openshift-machine-config-operator/machine-config-daemon-g6cth" Jan 26 11:17:52 crc kubenswrapper[4867]: I0126 11:17:52.769378 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/d0cb57c7-fd32-41c2-b873-a3f017b9f1b1-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-9fjlf\" (UID: \"d0cb57c7-fd32-41c2-b873-a3f017b9f1b1\") " pod="openshift-multus/multus-additional-cni-plugins-9fjlf" Jan 26 11:17:52 crc kubenswrapper[4867]: I0126 11:17:52.769405 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: 
\"kubernetes.io/host-path/dc37e5d1-ba44-4a54-ac36-ab7cdef17212-os-release\") pod \"multus-hn8xr\" (UID: \"dc37e5d1-ba44-4a54-ac36-ab7cdef17212\") " pod="openshift-multus/multus-hn8xr" Jan 26 11:17:52 crc kubenswrapper[4867]: I0126 11:17:52.769425 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/dc37e5d1-ba44-4a54-ac36-ab7cdef17212-multus-conf-dir\") pod \"multus-hn8xr\" (UID: \"dc37e5d1-ba44-4a54-ac36-ab7cdef17212\") " pod="openshift-multus/multus-hn8xr" Jan 26 11:17:52 crc kubenswrapper[4867]: I0126 11:17:52.769420 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/dc37e5d1-ba44-4a54-ac36-ab7cdef17212-host-var-lib-cni-bin\") pod \"multus-hn8xr\" (UID: \"dc37e5d1-ba44-4a54-ac36-ab7cdef17212\") " pod="openshift-multus/multus-hn8xr" Jan 26 11:17:52 crc kubenswrapper[4867]: I0126 11:17:52.769444 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/4a3be637-cf04-4c55-bf72-67fdad83cc44-node-log\") pod \"ovnkube-node-p8ngn\" (UID: \"4a3be637-cf04-4c55-bf72-67fdad83cc44\") " pod="openshift-ovn-kubernetes/ovnkube-node-p8ngn" Jan 26 11:17:52 crc kubenswrapper[4867]: I0126 11:17:52.769470 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/115cad9f-057f-4e63-b408-8fa7a358a191-rootfs\") pod \"machine-config-daemon-g6cth\" (UID: \"115cad9f-057f-4e63-b408-8fa7a358a191\") " pod="openshift-machine-config-operator/machine-config-daemon-g6cth" Jan 26 11:17:52 crc kubenswrapper[4867]: I0126 11:17:52.769516 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/dc37e5d1-ba44-4a54-ac36-ab7cdef17212-host-run-k8s-cni-cncf-io\") pod \"multus-hn8xr\" (UID: 
\"dc37e5d1-ba44-4a54-ac36-ab7cdef17212\") " pod="openshift-multus/multus-hn8xr" Jan 26 11:17:52 crc kubenswrapper[4867]: I0126 11:17:52.769537 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/d0cb57c7-fd32-41c2-b873-a3f017b9f1b1-system-cni-dir\") pod \"multus-additional-cni-plugins-9fjlf\" (UID: \"d0cb57c7-fd32-41c2-b873-a3f017b9f1b1\") " pod="openshift-multus/multus-additional-cni-plugins-9fjlf" Jan 26 11:17:52 crc kubenswrapper[4867]: I0126 11:17:52.769591 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/4a3be637-cf04-4c55-bf72-67fdad83cc44-var-lib-openvswitch\") pod \"ovnkube-node-p8ngn\" (UID: \"4a3be637-cf04-4c55-bf72-67fdad83cc44\") " pod="openshift-ovn-kubernetes/ovnkube-node-p8ngn" Jan 26 11:17:52 crc kubenswrapper[4867]: I0126 11:17:52.769637 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/dc37e5d1-ba44-4a54-ac36-ab7cdef17212-multus-cni-dir\") pod \"multus-hn8xr\" (UID: \"dc37e5d1-ba44-4a54-ac36-ab7cdef17212\") " pod="openshift-multus/multus-hn8xr" Jan 26 11:17:52 crc kubenswrapper[4867]: I0126 11:17:52.769440 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/115cad9f-057f-4e63-b408-8fa7a358a191-rootfs\") pod \"machine-config-daemon-g6cth\" (UID: \"115cad9f-057f-4e63-b408-8fa7a358a191\") " pod="openshift-machine-config-operator/machine-config-daemon-g6cth" Jan 26 11:17:52 crc kubenswrapper[4867]: I0126 11:17:52.769860 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/4a3be637-cf04-4c55-bf72-67fdad83cc44-host-run-ovn-kubernetes\") pod \"ovnkube-node-p8ngn\" (UID: \"4a3be637-cf04-4c55-bf72-67fdad83cc44\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-p8ngn" Jan 26 11:17:52 crc kubenswrapper[4867]: I0126 11:17:52.769906 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/d0cb57c7-fd32-41c2-b873-a3f017b9f1b1-tuning-conf-dir\") pod \"multus-additional-cni-plugins-9fjlf\" (UID: \"d0cb57c7-fd32-41c2-b873-a3f017b9f1b1\") " pod="openshift-multus/multus-additional-cni-plugins-9fjlf" Jan 26 11:17:52 crc kubenswrapper[4867]: I0126 11:17:52.769947 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/dc37e5d1-ba44-4a54-ac36-ab7cdef17212-host-var-lib-kubelet\") pod \"multus-hn8xr\" (UID: \"dc37e5d1-ba44-4a54-ac36-ab7cdef17212\") " pod="openshift-multus/multus-hn8xr" Jan 26 11:17:52 crc kubenswrapper[4867]: I0126 11:17:52.770002 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/dc37e5d1-ba44-4a54-ac36-ab7cdef17212-multus-socket-dir-parent\") pod \"multus-hn8xr\" (UID: \"dc37e5d1-ba44-4a54-ac36-ab7cdef17212\") " pod="openshift-multus/multus-hn8xr" Jan 26 11:17:52 crc kubenswrapper[4867]: I0126 11:17:52.770170 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/dc37e5d1-ba44-4a54-ac36-ab7cdef17212-os-release\") pod \"multus-hn8xr\" (UID: \"dc37e5d1-ba44-4a54-ac36-ab7cdef17212\") " pod="openshift-multus/multus-hn8xr" Jan 26 11:17:52 crc kubenswrapper[4867]: I0126 11:17:52.770405 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/dc37e5d1-ba44-4a54-ac36-ab7cdef17212-multus-conf-dir\") pod \"multus-hn8xr\" (UID: \"dc37e5d1-ba44-4a54-ac36-ab7cdef17212\") " pod="openshift-multus/multus-hn8xr" Jan 26 11:17:52 crc kubenswrapper[4867]: I0126 11:17:52.770419 4867 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/4a3be637-cf04-4c55-bf72-67fdad83cc44-env-overrides\") pod \"ovnkube-node-p8ngn\" (UID: \"4a3be637-cf04-4c55-bf72-67fdad83cc44\") " pod="openshift-ovn-kubernetes/ovnkube-node-p8ngn" Jan 26 11:17:52 crc kubenswrapper[4867]: I0126 11:17:52.770452 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/dc37e5d1-ba44-4a54-ac36-ab7cdef17212-etc-kubernetes\") pod \"multus-hn8xr\" (UID: \"dc37e5d1-ba44-4a54-ac36-ab7cdef17212\") " pod="openshift-multus/multus-hn8xr" Jan 26 11:17:52 crc kubenswrapper[4867]: I0126 11:17:52.770515 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/4a3be637-cf04-4c55-bf72-67fdad83cc44-host-slash\") pod \"ovnkube-node-p8ngn\" (UID: \"4a3be637-cf04-4c55-bf72-67fdad83cc44\") " pod="openshift-ovn-kubernetes/ovnkube-node-p8ngn" Jan 26 11:17:52 crc kubenswrapper[4867]: I0126 11:17:52.770546 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/4a3be637-cf04-4c55-bf72-67fdad83cc44-host-run-netns\") pod \"ovnkube-node-p8ngn\" (UID: \"4a3be637-cf04-4c55-bf72-67fdad83cc44\") " pod="openshift-ovn-kubernetes/ovnkube-node-p8ngn" Jan 26 11:17:52 crc kubenswrapper[4867]: I0126 11:17:52.770569 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/4a3be637-cf04-4c55-bf72-67fdad83cc44-run-systemd\") pod \"ovnkube-node-p8ngn\" (UID: \"4a3be637-cf04-4c55-bf72-67fdad83cc44\") " pod="openshift-ovn-kubernetes/ovnkube-node-p8ngn" Jan 26 11:17:52 crc kubenswrapper[4867]: I0126 11:17:52.770592 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: 
\"kubernetes.io/host-path/4a3be637-cf04-4c55-bf72-67fdad83cc44-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-p8ngn\" (UID: \"4a3be637-cf04-4c55-bf72-67fdad83cc44\") " pod="openshift-ovn-kubernetes/ovnkube-node-p8ngn" Jan 26 11:17:52 crc kubenswrapper[4867]: I0126 11:17:52.770615 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/dc37e5d1-ba44-4a54-ac36-ab7cdef17212-cni-binary-copy\") pod \"multus-hn8xr\" (UID: \"dc37e5d1-ba44-4a54-ac36-ab7cdef17212\") " pod="openshift-multus/multus-hn8xr" Jan 26 11:17:52 crc kubenswrapper[4867]: I0126 11:17:52.770628 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/d0cb57c7-fd32-41c2-b873-a3f017b9f1b1-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-9fjlf\" (UID: \"d0cb57c7-fd32-41c2-b873-a3f017b9f1b1\") " pod="openshift-multus/multus-additional-cni-plugins-9fjlf" Jan 26 11:17:52 crc kubenswrapper[4867]: I0126 11:17:52.770616 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/d0cb57c7-fd32-41c2-b873-a3f017b9f1b1-cnibin\") pod \"multus-additional-cni-plugins-9fjlf\" (UID: \"d0cb57c7-fd32-41c2-b873-a3f017b9f1b1\") " pod="openshift-multus/multus-additional-cni-plugins-9fjlf" Jan 26 11:17:52 crc kubenswrapper[4867]: I0126 11:17:52.770641 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/4a3be637-cf04-4c55-bf72-67fdad83cc44-run-ovn\") pod \"ovnkube-node-p8ngn\" (UID: \"4a3be637-cf04-4c55-bf72-67fdad83cc44\") " pod="openshift-ovn-kubernetes/ovnkube-node-p8ngn" Jan 26 11:17:52 crc kubenswrapper[4867]: I0126 11:17:52.770704 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/d0cb57c7-fd32-41c2-b873-a3f017b9f1b1-cnibin\") pod 
\"multus-additional-cni-plugins-9fjlf\" (UID: \"d0cb57c7-fd32-41c2-b873-a3f017b9f1b1\") " pod="openshift-multus/multus-additional-cni-plugins-9fjlf" Jan 26 11:17:52 crc kubenswrapper[4867]: I0126 11:17:52.770725 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/dc37e5d1-ba44-4a54-ac36-ab7cdef17212-etc-kubernetes\") pod \"multus-hn8xr\" (UID: \"dc37e5d1-ba44-4a54-ac36-ab7cdef17212\") " pod="openshift-multus/multus-hn8xr" Jan 26 11:17:52 crc kubenswrapper[4867]: I0126 11:17:52.770767 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/4a3be637-cf04-4c55-bf72-67fdad83cc44-run-openvswitch\") pod \"ovnkube-node-p8ngn\" (UID: \"4a3be637-cf04-4c55-bf72-67fdad83cc44\") " pod="openshift-ovn-kubernetes/ovnkube-node-p8ngn" Jan 26 11:17:52 crc kubenswrapper[4867]: I0126 11:17:52.770797 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/4a3be637-cf04-4c55-bf72-67fdad83cc44-log-socket\") pod \"ovnkube-node-p8ngn\" (UID: \"4a3be637-cf04-4c55-bf72-67fdad83cc44\") " pod="openshift-ovn-kubernetes/ovnkube-node-p8ngn" Jan 26 11:17:52 crc kubenswrapper[4867]: I0126 11:17:52.770816 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/4a3be637-cf04-4c55-bf72-67fdad83cc44-ovn-node-metrics-cert\") pod \"ovnkube-node-p8ngn\" (UID: \"4a3be637-cf04-4c55-bf72-67fdad83cc44\") " pod="openshift-ovn-kubernetes/ovnkube-node-p8ngn" Jan 26 11:17:52 crc kubenswrapper[4867]: I0126 11:17:52.770837 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/dc37e5d1-ba44-4a54-ac36-ab7cdef17212-system-cni-dir\") pod \"multus-hn8xr\" (UID: \"dc37e5d1-ba44-4a54-ac36-ab7cdef17212\") " 
pod="openshift-multus/multus-hn8xr" Jan 26 11:17:52 crc kubenswrapper[4867]: I0126 11:17:52.770776 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/4a3be637-cf04-4c55-bf72-67fdad83cc44-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-p8ngn\" (UID: \"4a3be637-cf04-4c55-bf72-67fdad83cc44\") " pod="openshift-ovn-kubernetes/ovnkube-node-p8ngn" Jan 26 11:17:52 crc kubenswrapper[4867]: I0126 11:17:52.770844 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/4a3be637-cf04-4c55-bf72-67fdad83cc44-ovnkube-script-lib\") pod \"ovnkube-node-p8ngn\" (UID: \"4a3be637-cf04-4c55-bf72-67fdad83cc44\") " pod="openshift-ovn-kubernetes/ovnkube-node-p8ngn" Jan 26 11:17:52 crc kubenswrapper[4867]: I0126 11:17:52.770863 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/115cad9f-057f-4e63-b408-8fa7a358a191-mcd-auth-proxy-config\") pod \"machine-config-daemon-g6cth\" (UID: \"115cad9f-057f-4e63-b408-8fa7a358a191\") " pod="openshift-machine-config-operator/machine-config-daemon-g6cth" Jan 26 11:17:52 crc kubenswrapper[4867]: I0126 11:17:52.770872 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/4a3be637-cf04-4c55-bf72-67fdad83cc44-log-socket\") pod \"ovnkube-node-p8ngn\" (UID: \"4a3be637-cf04-4c55-bf72-67fdad83cc44\") " pod="openshift-ovn-kubernetes/ovnkube-node-p8ngn" Jan 26 11:17:52 crc kubenswrapper[4867]: I0126 11:17:52.770888 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xrjnj\" (UniqueName: \"kubernetes.io/projected/dc37e5d1-ba44-4a54-ac36-ab7cdef17212-kube-api-access-xrjnj\") pod \"multus-hn8xr\" (UID: \"dc37e5d1-ba44-4a54-ac36-ab7cdef17212\") " 
pod="openshift-multus/multus-hn8xr" Jan 26 11:17:52 crc kubenswrapper[4867]: I0126 11:17:52.770913 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4wqzf\" (UniqueName: \"kubernetes.io/projected/115cad9f-057f-4e63-b408-8fa7a358a191-kube-api-access-4wqzf\") pod \"machine-config-daemon-g6cth\" (UID: \"115cad9f-057f-4e63-b408-8fa7a358a191\") " pod="openshift-machine-config-operator/machine-config-daemon-g6cth" Jan 26 11:17:52 crc kubenswrapper[4867]: I0126 11:17:52.770919 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/4a3be637-cf04-4c55-bf72-67fdad83cc44-run-openvswitch\") pod \"ovnkube-node-p8ngn\" (UID: \"4a3be637-cf04-4c55-bf72-67fdad83cc44\") " pod="openshift-ovn-kubernetes/ovnkube-node-p8ngn" Jan 26 11:17:52 crc kubenswrapper[4867]: I0126 11:17:52.770892 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/4a3be637-cf04-4c55-bf72-67fdad83cc44-host-slash\") pod \"ovnkube-node-p8ngn\" (UID: \"4a3be637-cf04-4c55-bf72-67fdad83cc44\") " pod="openshift-ovn-kubernetes/ovnkube-node-p8ngn" Jan 26 11:17:52 crc kubenswrapper[4867]: I0126 11:17:52.770940 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/4a3be637-cf04-4c55-bf72-67fdad83cc44-host-cni-netd\") pod \"ovnkube-node-p8ngn\" (UID: \"4a3be637-cf04-4c55-bf72-67fdad83cc44\") " pod="openshift-ovn-kubernetes/ovnkube-node-p8ngn" Jan 26 11:17:52 crc kubenswrapper[4867]: I0126 11:17:52.770978 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cn6f7\" (UniqueName: \"kubernetes.io/projected/4a3be637-cf04-4c55-bf72-67fdad83cc44-kube-api-access-cn6f7\") pod \"ovnkube-node-p8ngn\" (UID: \"4a3be637-cf04-4c55-bf72-67fdad83cc44\") " pod="openshift-ovn-kubernetes/ovnkube-node-p8ngn" Jan 26 
11:17:52 crc kubenswrapper[4867]: I0126 11:17:52.770986 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/4a3be637-cf04-4c55-bf72-67fdad83cc44-host-run-netns\") pod \"ovnkube-node-p8ngn\" (UID: \"4a3be637-cf04-4c55-bf72-67fdad83cc44\") " pod="openshift-ovn-kubernetes/ovnkube-node-p8ngn" Jan 26 11:17:52 crc kubenswrapper[4867]: I0126 11:17:52.771001 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/dc37e5d1-ba44-4a54-ac36-ab7cdef17212-host-var-lib-cni-multus\") pod \"multus-hn8xr\" (UID: \"dc37e5d1-ba44-4a54-ac36-ab7cdef17212\") " pod="openshift-multus/multus-hn8xr" Jan 26 11:17:52 crc kubenswrapper[4867]: I0126 11:17:52.771120 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/dc37e5d1-ba44-4a54-ac36-ab7cdef17212-multus-daemon-config\") pod \"multus-hn8xr\" (UID: \"dc37e5d1-ba44-4a54-ac36-ab7cdef17212\") " pod="openshift-multus/multus-hn8xr" Jan 26 11:17:52 crc kubenswrapper[4867]: I0126 11:17:52.771137 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/4a3be637-cf04-4c55-bf72-67fdad83cc44-systemd-units\") pod \"ovnkube-node-p8ngn\" (UID: \"4a3be637-cf04-4c55-bf72-67fdad83cc44\") " pod="openshift-ovn-kubernetes/ovnkube-node-p8ngn" Jan 26 11:17:52 crc kubenswrapper[4867]: I0126 11:17:52.771153 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/4a3be637-cf04-4c55-bf72-67fdad83cc44-ovnkube-config\") pod \"ovnkube-node-p8ngn\" (UID: \"4a3be637-cf04-4c55-bf72-67fdad83cc44\") " pod="openshift-ovn-kubernetes/ovnkube-node-p8ngn" Jan 26 11:17:52 crc kubenswrapper[4867]: I0126 11:17:52.771184 4867 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/dc37e5d1-ba44-4a54-ac36-ab7cdef17212-host-var-lib-cni-multus\") pod \"multus-hn8xr\" (UID: \"dc37e5d1-ba44-4a54-ac36-ab7cdef17212\") " pod="openshift-multus/multus-hn8xr" Jan 26 11:17:52 crc kubenswrapper[4867]: I0126 11:17:52.771286 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/4a3be637-cf04-4c55-bf72-67fdad83cc44-env-overrides\") pod \"ovnkube-node-p8ngn\" (UID: \"4a3be637-cf04-4c55-bf72-67fdad83cc44\") " pod="openshift-ovn-kubernetes/ovnkube-node-p8ngn" Jan 26 11:17:52 crc kubenswrapper[4867]: I0126 11:17:52.771313 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/4a3be637-cf04-4c55-bf72-67fdad83cc44-systemd-units\") pod \"ovnkube-node-p8ngn\" (UID: \"4a3be637-cf04-4c55-bf72-67fdad83cc44\") " pod="openshift-ovn-kubernetes/ovnkube-node-p8ngn" Jan 26 11:17:52 crc kubenswrapper[4867]: I0126 11:17:52.771322 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/4a3be637-cf04-4c55-bf72-67fdad83cc44-run-systemd\") pod \"ovnkube-node-p8ngn\" (UID: \"4a3be637-cf04-4c55-bf72-67fdad83cc44\") " pod="openshift-ovn-kubernetes/ovnkube-node-p8ngn" Jan 26 11:17:52 crc kubenswrapper[4867]: I0126 11:17:52.771350 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/4a3be637-cf04-4c55-bf72-67fdad83cc44-host-cni-netd\") pod \"ovnkube-node-p8ngn\" (UID: \"4a3be637-cf04-4c55-bf72-67fdad83cc44\") " pod="openshift-ovn-kubernetes/ovnkube-node-p8ngn" Jan 26 11:17:52 crc kubenswrapper[4867]: I0126 11:17:52.771380 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-multus-certs\" (UniqueName: 
\"kubernetes.io/host-path/dc37e5d1-ba44-4a54-ac36-ab7cdef17212-host-run-multus-certs\") pod \"multus-hn8xr\" (UID: \"dc37e5d1-ba44-4a54-ac36-ab7cdef17212\") " pod="openshift-multus/multus-hn8xr" Jan 26 11:17:52 crc kubenswrapper[4867]: I0126 11:17:52.771412 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/dc37e5d1-ba44-4a54-ac36-ab7cdef17212-cnibin\") pod \"multus-hn8xr\" (UID: \"dc37e5d1-ba44-4a54-ac36-ab7cdef17212\") " pod="openshift-multus/multus-hn8xr" Jan 26 11:17:52 crc kubenswrapper[4867]: I0126 11:17:52.771426 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/dc37e5d1-ba44-4a54-ac36-ab7cdef17212-host-run-multus-certs\") pod \"multus-hn8xr\" (UID: \"dc37e5d1-ba44-4a54-ac36-ab7cdef17212\") " pod="openshift-multus/multus-hn8xr" Jan 26 11:17:52 crc kubenswrapper[4867]: I0126 11:17:52.771488 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/dc37e5d1-ba44-4a54-ac36-ab7cdef17212-hostroot\") pod \"multus-hn8xr\" (UID: \"dc37e5d1-ba44-4a54-ac36-ab7cdef17212\") " pod="openshift-multus/multus-hn8xr" Jan 26 11:17:52 crc kubenswrapper[4867]: I0126 11:17:52.771518 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/d0cb57c7-fd32-41c2-b873-a3f017b9f1b1-os-release\") pod \"multus-additional-cni-plugins-9fjlf\" (UID: \"d0cb57c7-fd32-41c2-b873-a3f017b9f1b1\") " pod="openshift-multus/multus-additional-cni-plugins-9fjlf" Jan 26 11:17:52 crc kubenswrapper[4867]: I0126 11:17:52.771651 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/d0cb57c7-fd32-41c2-b873-a3f017b9f1b1-os-release\") pod \"multus-additional-cni-plugins-9fjlf\" (UID: \"d0cb57c7-fd32-41c2-b873-a3f017b9f1b1\") 
" pod="openshift-multus/multus-additional-cni-plugins-9fjlf" Jan 26 11:17:52 crc kubenswrapper[4867]: I0126 11:17:52.771720 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/dc37e5d1-ba44-4a54-ac36-ab7cdef17212-cnibin\") pod \"multus-hn8xr\" (UID: \"dc37e5d1-ba44-4a54-ac36-ab7cdef17212\") " pod="openshift-multus/multus-hn8xr" Jan 26 11:17:52 crc kubenswrapper[4867]: I0126 11:17:52.771743 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/dc37e5d1-ba44-4a54-ac36-ab7cdef17212-hostroot\") pod \"multus-hn8xr\" (UID: \"dc37e5d1-ba44-4a54-ac36-ab7cdef17212\") " pod="openshift-multus/multus-hn8xr" Jan 26 11:17:52 crc kubenswrapper[4867]: I0126 11:17:52.771886 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/4a3be637-cf04-4c55-bf72-67fdad83cc44-ovnkube-config\") pod \"ovnkube-node-p8ngn\" (UID: \"4a3be637-cf04-4c55-bf72-67fdad83cc44\") " pod="openshift-ovn-kubernetes/ovnkube-node-p8ngn" Jan 26 11:17:52 crc kubenswrapper[4867]: I0126 11:17:52.771883 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/dc37e5d1-ba44-4a54-ac36-ab7cdef17212-multus-daemon-config\") pod \"multus-hn8xr\" (UID: \"dc37e5d1-ba44-4a54-ac36-ab7cdef17212\") " pod="openshift-multus/multus-hn8xr" Jan 26 11:17:52 crc kubenswrapper[4867]: I0126 11:17:52.772018 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/4a3be637-cf04-4c55-bf72-67fdad83cc44-host-kubelet\") pod \"ovnkube-node-p8ngn\" (UID: \"4a3be637-cf04-4c55-bf72-67fdad83cc44\") " pod="openshift-ovn-kubernetes/ovnkube-node-p8ngn" Jan 26 11:17:52 crc kubenswrapper[4867]: I0126 11:17:52.772045 4867 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/4a3be637-cf04-4c55-bf72-67fdad83cc44-etc-openvswitch\") pod \"ovnkube-node-p8ngn\" (UID: \"4a3be637-cf04-4c55-bf72-67fdad83cc44\") " pod="openshift-ovn-kubernetes/ovnkube-node-p8ngn" Jan 26 11:17:52 crc kubenswrapper[4867]: I0126 11:17:52.772092 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/4a3be637-cf04-4c55-bf72-67fdad83cc44-host-cni-bin\") pod \"ovnkube-node-p8ngn\" (UID: \"4a3be637-cf04-4c55-bf72-67fdad83cc44\") " pod="openshift-ovn-kubernetes/ovnkube-node-p8ngn" Jan 26 11:17:52 crc kubenswrapper[4867]: I0126 11:17:52.772119 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/d0cb57c7-fd32-41c2-b873-a3f017b9f1b1-cni-binary-copy\") pod \"multus-additional-cni-plugins-9fjlf\" (UID: \"d0cb57c7-fd32-41c2-b873-a3f017b9f1b1\") " pod="openshift-multus/multus-additional-cni-plugins-9fjlf" Jan 26 11:17:52 crc kubenswrapper[4867]: I0126 11:17:52.772145 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/dc37e5d1-ba44-4a54-ac36-ab7cdef17212-host-run-netns\") pod \"multus-hn8xr\" (UID: \"dc37e5d1-ba44-4a54-ac36-ab7cdef17212\") " pod="openshift-multus/multus-hn8xr" Jan 26 11:17:52 crc kubenswrapper[4867]: I0126 11:17:52.772271 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/dc37e5d1-ba44-4a54-ac36-ab7cdef17212-host-run-netns\") pod \"multus-hn8xr\" (UID: \"dc37e5d1-ba44-4a54-ac36-ab7cdef17212\") " pod="openshift-multus/multus-hn8xr" Jan 26 11:17:52 crc kubenswrapper[4867]: I0126 11:17:52.772399 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: 
\"kubernetes.io/host-path/4a3be637-cf04-4c55-bf72-67fdad83cc44-etc-openvswitch\") pod \"ovnkube-node-p8ngn\" (UID: \"4a3be637-cf04-4c55-bf72-67fdad83cc44\") " pod="openshift-ovn-kubernetes/ovnkube-node-p8ngn" Jan 26 11:17:52 crc kubenswrapper[4867]: I0126 11:17:52.772545 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/4a3be637-cf04-4c55-bf72-67fdad83cc44-host-cni-bin\") pod \"ovnkube-node-p8ngn\" (UID: \"4a3be637-cf04-4c55-bf72-67fdad83cc44\") " pod="openshift-ovn-kubernetes/ovnkube-node-p8ngn" Jan 26 11:17:52 crc kubenswrapper[4867]: I0126 11:17:52.772589 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/4a3be637-cf04-4c55-bf72-67fdad83cc44-host-kubelet\") pod \"ovnkube-node-p8ngn\" (UID: \"4a3be637-cf04-4c55-bf72-67fdad83cc44\") " pod="openshift-ovn-kubernetes/ovnkube-node-p8ngn" Jan 26 11:17:52 crc kubenswrapper[4867]: I0126 11:17:52.773167 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/d0cb57c7-fd32-41c2-b873-a3f017b9f1b1-cni-binary-copy\") pod \"multus-additional-cni-plugins-9fjlf\" (UID: \"d0cb57c7-fd32-41c2-b873-a3f017b9f1b1\") " pod="openshift-multus/multus-additional-cni-plugins-9fjlf" Jan 26 11:17:52 crc kubenswrapper[4867]: I0126 11:17:52.776525 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/4a3be637-cf04-4c55-bf72-67fdad83cc44-ovn-node-metrics-cert\") pod \"ovnkube-node-p8ngn\" (UID: \"4a3be637-cf04-4c55-bf72-67fdad83cc44\") " pod="openshift-ovn-kubernetes/ovnkube-node-p8ngn" Jan 26 11:17:52 crc kubenswrapper[4867]: I0126 11:17:52.776535 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/115cad9f-057f-4e63-b408-8fa7a358a191-proxy-tls\") pod 
\"machine-config-daemon-g6cth\" (UID: \"115cad9f-057f-4e63-b408-8fa7a358a191\") " pod="openshift-machine-config-operator/machine-config-daemon-g6cth" Jan 26 11:17:52 crc kubenswrapper[4867]: I0126 11:17:52.787398 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4wqzf\" (UniqueName: \"kubernetes.io/projected/115cad9f-057f-4e63-b408-8fa7a358a191-kube-api-access-4wqzf\") pod \"machine-config-daemon-g6cth\" (UID: \"115cad9f-057f-4e63-b408-8fa7a358a191\") " pod="openshift-machine-config-operator/machine-config-daemon-g6cth" Jan 26 11:17:52 crc kubenswrapper[4867]: I0126 11:17:52.788694 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e36e94ce-bdbb-4b65-b38a-d591d99ec132\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://60d611d38d0a8a4fafcb8c6a4598878531015a3c9a31bee179693c997be0f5ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e7960050fb97fd034b58d207bdcc7aa794be098102ebc9b50f3b011c2079a292\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://f8e9df6d5bbbaa36f6ffe9e36239da17b2a7d5d85edebc9f108ede9bf7b7c0a2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://08f8529b91df2d7dcbf1fac6018657124b3922dfd4ad68bd7cf315594d6e7f81\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4d1617eeb2f19df288b984f761116ae902263d7ccf8406998d0e323fb7d52390\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-26T11:17:50Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0126 11:17:44.107498 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0126 11:17:44.109064 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1825304579/tls.crt::/tmp/serving-cert-1825304579/tls.key\\\\\\\"\\\\nI0126 11:17:50.303460 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0126 11:17:50.308764 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0126 11:17:50.308805 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0126 11:17:50.308906 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0126 11:17:50.308924 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0126 11:17:50.322907 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0126 11:17:50.322946 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 11:17:50.322953 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 11:17:50.322957 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0126 11:17:50.322960 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0126 11:17:50.322963 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0126 11:17:50.322966 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0126 11:17:50.323261 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0126 11:17:50.330430 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T11:17:33Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d41fab03911c1d3b84f4c34da0d1c8e82c8f2691450b5f9663e31ce329bf04b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:33Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b69d7e45a6059b1e01f80cc4efb536069c91cae05c8912d18fd48f25c6ebf51e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b69d7e45a6059b1e01f80cc4efb536069c91cae05c8912d18fd48f25c6ebf51e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T11:17:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-26T11:17:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T11:17:30Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:17:52Z is after 2025-08-24T17:21:41Z" Jan 26 11:17:52 crc kubenswrapper[4867]: I0126 11:17:52.789100 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xrjnj\" (UniqueName: \"kubernetes.io/projected/dc37e5d1-ba44-4a54-ac36-ab7cdef17212-kube-api-access-xrjnj\") pod \"multus-hn8xr\" (UID: \"dc37e5d1-ba44-4a54-ac36-ab7cdef17212\") " pod="openshift-multus/multus-hn8xr" Jan 26 11:17:52 crc kubenswrapper[4867]: I0126 11:17:52.790781 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cn6f7\" (UniqueName: \"kubernetes.io/projected/4a3be637-cf04-4c55-bf72-67fdad83cc44-kube-api-access-cn6f7\") pod \"ovnkube-node-p8ngn\" (UID: \"4a3be637-cf04-4c55-bf72-67fdad83cc44\") " pod="openshift-ovn-kubernetes/ovnkube-node-p8ngn" Jan 26 11:17:52 crc kubenswrapper[4867]: I0126 11:17:52.790801 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rhrdq\" (UniqueName: \"kubernetes.io/projected/d0cb57c7-fd32-41c2-b873-a3f017b9f1b1-kube-api-access-rhrdq\") pod \"multus-additional-cni-plugins-9fjlf\" (UID: \"d0cb57c7-fd32-41c2-b873-a3f017b9f1b1\") " pod="openshift-multus/multus-additional-cni-plugins-9fjlf" Jan 26 11:17:52 crc kubenswrapper[4867]: I0126 11:17:52.803290 4867 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0d7b1047-65d1-4695-b5a5-95ea6a8c3b54\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://237a29b411629c3650250f62fdec7d2c5412763d6cabb2ee0fe8e5e19e320e15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ef0db1149c70e41b7f44d00d43f883d14fcf635d6f34462dd37c8a550a15a4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c
4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://45d3dc673a8d20531ae5077a1196a368cac64f6afd15c4eb8f3910ee9bf51317\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc2ddda5781094e59bb54ddf724fa4f948a047b17b01f0185ef651d90a66f36f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\
\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T11:17:30Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:17:52Z is after 2025-08-24T17:21:41Z" Jan 26 11:17:52 crc kubenswrapper[4867]: I0126 11:17:52.818958 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c572224f36b5a0fc8a86b3e61aa97b96f5bd189d245b31804a11aaa75d04774\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-26T11:17:52Z is after 2025-08-24T17:21:41Z" Jan 26 11:17:52 crc kubenswrapper[4867]: I0126 11:17:52.833011 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://613d61cfb62d18a99623d3e4f89763c04f15484b764f8bb0c7a5e4457a3505d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://fa8d05ed772e4273e8962df2230bc8d45db5cfed967165716438d58c54b9e24b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:17:52Z is after 2025-08-24T17:21:41Z" Jan 26 11:17:52 crc kubenswrapper[4867]: I0126 11:17:52.846370 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-hn8xr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dc37e5d1-ba44-4a54-ac36-ab7cdef17212\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:52Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:52Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xrjnj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T11:17:52Z\\\"}}\" for pod \"openshift-multus\"/\"multus-hn8xr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:17:52Z is after 2025-08-24T17:21:41Z" Jan 26 11:17:52 crc kubenswrapper[4867]: I0126 11:17:52.864605 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-9fjlf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d0cb57c7-fd32-41c2-b873-a3f017b9f1b1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:52Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:52Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:52Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhrdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhrdq\\\",\\\"readOnly\\\":true,\\\"
recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhrdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhrdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-de
v@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhrdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhrdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{
\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhrdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T11:17:52Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-9fjlf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:17:52Z is after 2025-08-24T17:21:41Z" Jan 26 11:17:52 crc kubenswrapper[4867]: I0126 11:17:52.880620 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:48Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:17:52Z is after 2025-08-24T17:21:41Z" Jan 26 11:17:52 crc kubenswrapper[4867]: I0126 11:17:52.893071 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:48Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:17:52Z is after 2025-08-24T17:21:41Z" Jan 26 11:17:52 crc kubenswrapper[4867]: I0126 11:17:52.901513 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-g6cth" Jan 26 11:17:52 crc kubenswrapper[4867]: I0126 11:17:52.918736 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-hn8xr" Jan 26 11:17:52 crc kubenswrapper[4867]: I0126 11:17:52.924313 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4fbb2033-c5c9-48d1-971f-252b8982e64c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b260f2517962ffaecebf86c2b72c91486b825ca898f907378e8f8cea16d1db92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-
dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6843bd212ce4f17d2ae4d53441d56f7be28f8f49c8994458d611159101e193a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2aba77c7c6a2f5450fa60356c3eccf816726f0074d65a1d5805c4278d31157c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e941db67a8021f5eb4eb6e395c2369d052a4cde25d67eb1885768a064185d8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/op
enshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc9e90f1b0e6c09e4595a7e450325550012dc52c6a6fc4a95d14b37c0f939ce7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e22739a3c06bddfda493f25aca23cf47d71c8bde27a6b4bb505592b42350752\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"
setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0e22739a3c06bddfda493f25aca23cf47d71c8bde27a6b4bb505592b42350752\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T11:17:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T11:17:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://80b44caec3547caebca126742ca337ad1851edc3e9474127528a9e5184b5ec23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://80b44caec3547caebca126742ca337ad1851edc3e9474127528a9e5184b5ec23\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T11:17:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T11:17:32Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://83f7cf77acd26e7af095c4a5432efde85eef6dd5e434141cb5d6abd12770968e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://83f7cf77acd26e7af095c4a5432efde85eef6dd5e434141cb5d6abd12770968e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T11:17:33Z\\\",\\\"reason\\\":\\\"Completed\\\"
,\\\"startedAt\\\":\\\"2026-01-26T11:17:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T11:17:30Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:17:52Z is after 2025-08-24T17:21:41Z" Jan 26 11:17:52 crc kubenswrapper[4867]: I0126 11:17:52.928062 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-p8ngn" Jan 26 11:17:52 crc kubenswrapper[4867]: W0126 11:17:52.935608 4867 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poddc37e5d1_ba44_4a54_ac36_ab7cdef17212.slice/crio-557ea1b2b26b4edfa13a10db907b9ace93d262f0f1bf1cefbefc3f040ae85e6f WatchSource:0}: Error finding container 557ea1b2b26b4edfa13a10db907b9ace93d262f0f1bf1cefbefc3f040ae85e6f: Status 404 returned error can't find the container with id 557ea1b2b26b4edfa13a10db907b9ace93d262f0f1bf1cefbefc3f040ae85e6f Jan 26 11:17:52 crc kubenswrapper[4867]: I0126 11:17:52.939123 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-g6cth" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"115cad9f-057f-4e63-b408-8fa7a358a191\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:52Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:52Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4wqzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4wqzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T11:17:52Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-g6cth\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:17:52Z is after 2025-08-24T17:21:41Z" Jan 26 11:17:52 crc kubenswrapper[4867]: I0126 11:17:52.940630 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-9fjlf" Jan 26 11:17:52 crc kubenswrapper[4867]: W0126 11:17:52.944120 4867 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4a3be637_cf04_4c55_bf72_67fdad83cc44.slice/crio-affdb9c7b34311edbd1f42a36193128c48cd80d81f59cdb5272ed04455ca22e4 WatchSource:0}: Error finding container affdb9c7b34311edbd1f42a36193128c48cd80d81f59cdb5272ed04455ca22e4: Status 404 returned error can't find the container with id affdb9c7b34311edbd1f42a36193128c48cd80d81f59cdb5272ed04455ca22e4 Jan 26 11:17:52 crc kubenswrapper[4867]: I0126 11:17:52.951433 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-wmdmh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6ad862fa-4af9-49f7-a629-ebf54a83ca45\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://726513fd6aefd76b58a4c8cf08fa4588aadcdb02e8dbe3bb4f23004f11eb29dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6hvh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T11:17:51Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-wmdmh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:17:52Z is after 2025-08-24T17:21:41Z" Jan 26 11:17:52 crc kubenswrapper[4867]: I0126 11:17:52.974672 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-p8ngn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4a3be637-cf04-4c55-bf72-67fdad83cc44\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:52Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:52Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:52Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node 
kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cn6f7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cn6f7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID
\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cn6f7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cn6f7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wai
ting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cn6f7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cn6f7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":
0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"na
me\\\":\\\"kube-api-access-cn6f7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cn6f7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cn6f7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T11:17:52Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-p8ngn\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:17:52Z is after 2025-08-24T17:21:41Z" Jan 26 11:17:52 crc kubenswrapper[4867]: I0126 11:17:52.991703 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:48Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:17:52Z is after 2025-08-24T17:21:41Z" Jan 26 11:17:53 crc kubenswrapper[4867]: I0126 11:17:53.007935 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:48Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:17:53Z is after 2025-08-24T17:21:41Z" Jan 26 11:17:53 crc kubenswrapper[4867]: I0126 11:17:53.395067 4867 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 11:17:53 crc kubenswrapper[4867]: I0126 11:17:53.397404 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:17:53 crc kubenswrapper[4867]: I0126 11:17:53.397522 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:17:53 crc kubenswrapper[4867]: I0126 11:17:53.397539 
4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:17:53 crc kubenswrapper[4867]: I0126 11:17:53.397676 4867 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 26 11:17:53 crc kubenswrapper[4867]: I0126 11:17:53.412906 4867 kubelet_node_status.go:115] "Node was previously registered" node="crc" Jan 26 11:17:53 crc kubenswrapper[4867]: I0126 11:17:53.413075 4867 kubelet_node_status.go:79] "Successfully registered node" node="crc" Jan 26 11:17:53 crc kubenswrapper[4867]: I0126 11:17:53.414768 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:17:53 crc kubenswrapper[4867]: I0126 11:17:53.414817 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:17:53 crc kubenswrapper[4867]: I0126 11:17:53.414826 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:17:53 crc kubenswrapper[4867]: I0126 11:17:53.414846 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:17:53 crc kubenswrapper[4867]: I0126 11:17:53.414862 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:17:53Z","lastTransitionTime":"2026-01-26T11:17:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:17:53 crc kubenswrapper[4867]: E0126 11:17:53.433875 4867 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404548Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865348Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T11:17:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T11:17:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:53Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T11:17:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T11:17:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:53Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b5a0e2ac-5e06-462a-99e0-d57b8e5cb754\\\",\\\"systemUUID\\\":\\\"db81a289-a49c-46ba-99b1-fd2eecfd5410\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:17:53Z is after 2025-08-24T17:21:41Z" Jan 26 11:17:53 crc kubenswrapper[4867]: I0126 11:17:53.438015 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:17:53 crc kubenswrapper[4867]: I0126 11:17:53.438073 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:17:53 crc kubenswrapper[4867]: I0126 11:17:53.438085 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:17:53 crc kubenswrapper[4867]: I0126 11:17:53.438110 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:17:53 crc kubenswrapper[4867]: I0126 11:17:53.438125 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:17:53Z","lastTransitionTime":"2026-01-26T11:17:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:17:53 crc kubenswrapper[4867]: E0126 11:17:53.455176 4867 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404548Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865348Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T11:17:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T11:17:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:53Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T11:17:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T11:17:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:53Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b5a0e2ac-5e06-462a-99e0-d57b8e5cb754\\\",\\\"systemUUID\\\":\\\"db81a289-a49c-46ba-99b1-fd2eecfd5410\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:17:53Z is after 2025-08-24T17:21:41Z" Jan 26 11:17:53 crc kubenswrapper[4867]: I0126 11:17:53.459046 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:17:53 crc kubenswrapper[4867]: I0126 11:17:53.459083 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:17:53 crc kubenswrapper[4867]: I0126 11:17:53.459095 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:17:53 crc kubenswrapper[4867]: I0126 11:17:53.459116 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:17:53 crc kubenswrapper[4867]: I0126 11:17:53.459128 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:17:53Z","lastTransitionTime":"2026-01-26T11:17:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:17:53 crc kubenswrapper[4867]: E0126 11:17:53.473125 4867 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404548Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865348Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T11:17:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T11:17:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:53Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T11:17:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T11:17:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:53Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b5a0e2ac-5e06-462a-99e0-d57b8e5cb754\\\",\\\"systemUUID\\\":\\\"db81a289-a49c-46ba-99b1-fd2eecfd5410\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:17:53Z is after 2025-08-24T17:21:41Z" Jan 26 11:17:53 crc kubenswrapper[4867]: I0126 11:17:53.477361 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:17:53 crc kubenswrapper[4867]: I0126 11:17:53.477440 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:17:53 crc kubenswrapper[4867]: I0126 11:17:53.477458 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:17:53 crc kubenswrapper[4867]: I0126 11:17:53.477487 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:17:53 crc kubenswrapper[4867]: I0126 11:17:53.477504 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:17:53Z","lastTransitionTime":"2026-01-26T11:17:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:17:53 crc kubenswrapper[4867]: E0126 11:17:53.494700 4867 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404548Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865348Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T11:17:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T11:17:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:53Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T11:17:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T11:17:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:53Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b5a0e2ac-5e06-462a-99e0-d57b8e5cb754\\\",\\\"systemUUID\\\":\\\"db81a289-a49c-46ba-99b1-fd2eecfd5410\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:17:53Z is after 2025-08-24T17:21:41Z" Jan 26 11:17:53 crc kubenswrapper[4867]: I0126 11:17:53.500656 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:17:53 crc kubenswrapper[4867]: I0126 11:17:53.500709 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:17:53 crc kubenswrapper[4867]: I0126 11:17:53.500719 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:17:53 crc kubenswrapper[4867]: I0126 11:17:53.500741 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:17:53 crc kubenswrapper[4867]: I0126 11:17:53.500754 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:17:53Z","lastTransitionTime":"2026-01-26T11:17:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:17:53 crc kubenswrapper[4867]: E0126 11:17:53.515821 4867 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404548Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865348Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T11:17:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T11:17:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:53Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T11:17:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T11:17:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:53Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b5a0e2ac-5e06-462a-99e0-d57b8e5cb754\\\",\\\"systemUUID\\\":\\\"db81a289-a49c-46ba-99b1-fd2eecfd5410\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:17:53Z is after 2025-08-24T17:21:41Z" Jan 26 11:17:53 crc kubenswrapper[4867]: E0126 11:17:53.516003 4867 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 26 11:17:53 crc kubenswrapper[4867]: I0126 11:17:53.518547 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:17:53 crc kubenswrapper[4867]: I0126 11:17:53.518594 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:17:53 crc kubenswrapper[4867]: I0126 11:17:53.518609 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:17:53 crc kubenswrapper[4867]: I0126 11:17:53.518510 4867 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-20 03:16:42.701549368 +0000 UTC Jan 26 11:17:53 crc kubenswrapper[4867]: I0126 11:17:53.518629 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:17:53 crc kubenswrapper[4867]: I0126 11:17:53.518732 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:17:53Z","lastTransitionTime":"2026-01-26T11:17:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:17:53 crc kubenswrapper[4867]: I0126 11:17:53.621873 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:17:53 crc kubenswrapper[4867]: I0126 11:17:53.621927 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:17:53 crc kubenswrapper[4867]: I0126 11:17:53.621939 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:17:53 crc kubenswrapper[4867]: I0126 11:17:53.621958 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:17:53 crc kubenswrapper[4867]: I0126 11:17:53.621971 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:17:53Z","lastTransitionTime":"2026-01-26T11:17:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:17:53 crc kubenswrapper[4867]: I0126 11:17:53.724715 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:17:53 crc kubenswrapper[4867]: I0126 11:17:53.724757 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:17:53 crc kubenswrapper[4867]: I0126 11:17:53.724767 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:17:53 crc kubenswrapper[4867]: I0126 11:17:53.724787 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:17:53 crc kubenswrapper[4867]: I0126 11:17:53.724797 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:17:53Z","lastTransitionTime":"2026-01-26T11:17:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:17:53 crc kubenswrapper[4867]: I0126 11:17:53.726976 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-hn8xr" event={"ID":"dc37e5d1-ba44-4a54-ac36-ab7cdef17212","Type":"ContainerStarted","Data":"557ea1b2b26b4edfa13a10db907b9ace93d262f0f1bf1cefbefc3f040ae85e6f"} Jan 26 11:17:53 crc kubenswrapper[4867]: I0126 11:17:53.727992 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-g6cth" event={"ID":"115cad9f-057f-4e63-b408-8fa7a358a191","Type":"ContainerStarted","Data":"684d825a0f131216cfea89e68d2ecb062d26c0e4d50ff1ba664814bfe6fa1836"} Jan 26 11:17:53 crc kubenswrapper[4867]: I0126 11:17:53.728860 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-p8ngn" event={"ID":"4a3be637-cf04-4c55-bf72-67fdad83cc44","Type":"ContainerStarted","Data":"affdb9c7b34311edbd1f42a36193128c48cd80d81f59cdb5272ed04455ca22e4"} Jan 26 11:17:53 crc kubenswrapper[4867]: I0126 11:17:53.730912 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-9fjlf" event={"ID":"d0cb57c7-fd32-41c2-b873-a3f017b9f1b1","Type":"ContainerStarted","Data":"a370df77f33854b54fb973d12b3ae4214ecd913ad01017b4a0cba5077f76f9ff"} Jan 26 11:17:53 crc kubenswrapper[4867]: I0126 11:17:53.828905 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:17:53 crc kubenswrapper[4867]: I0126 11:17:53.828968 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:17:53 crc kubenswrapper[4867]: I0126 11:17:53.828983 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:17:53 crc kubenswrapper[4867]: I0126 11:17:53.829009 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeNotReady" Jan 26 11:17:53 crc kubenswrapper[4867]: I0126 11:17:53.829025 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:17:53Z","lastTransitionTime":"2026-01-26T11:17:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 11:17:53 crc kubenswrapper[4867]: I0126 11:17:53.932817 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:17:53 crc kubenswrapper[4867]: I0126 11:17:53.932867 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:17:53 crc kubenswrapper[4867]: I0126 11:17:53.932880 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:17:53 crc kubenswrapper[4867]: I0126 11:17:53.932900 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:17:53 crc kubenswrapper[4867]: I0126 11:17:53.932913 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:17:53Z","lastTransitionTime":"2026-01-26T11:17:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:17:53 crc kubenswrapper[4867]: I0126 11:17:53.989276 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 11:17:53 crc kubenswrapper[4867]: I0126 11:17:53.989421 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 11:17:53 crc kubenswrapper[4867]: I0126 11:17:53.989479 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 11:17:53 crc kubenswrapper[4867]: E0126 11:17:53.989557 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 11:17:57.989528195 +0000 UTC m=+27.688103105 (durationBeforeRetry 4s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 11:17:53 crc kubenswrapper[4867]: E0126 11:17:53.989620 4867 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 26 11:17:53 crc kubenswrapper[4867]: E0126 11:17:53.989665 4867 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 26 11:17:53 crc kubenswrapper[4867]: E0126 11:17:53.989706 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-26 11:17:57.989684149 +0000 UTC m=+27.688259059 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 26 11:17:53 crc kubenswrapper[4867]: E0126 11:17:53.989728 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. 
No retries permitted until 2026-01-26 11:17:57.98971844 +0000 UTC m=+27.688293350 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 26 11:17:54 crc kubenswrapper[4867]: I0126 11:17:54.035155 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:17:54 crc kubenswrapper[4867]: I0126 11:17:54.035212 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:17:54 crc kubenswrapper[4867]: I0126 11:17:54.035246 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:17:54 crc kubenswrapper[4867]: I0126 11:17:54.035266 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:17:54 crc kubenswrapper[4867]: I0126 11:17:54.035282 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:17:54Z","lastTransitionTime":"2026-01-26T11:17:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:17:54 crc kubenswrapper[4867]: I0126 11:17:54.090394 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 11:17:54 crc kubenswrapper[4867]: E0126 11:17:54.090611 4867 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 26 11:17:54 crc kubenswrapper[4867]: E0126 11:17:54.090642 4867 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 26 11:17:54 crc kubenswrapper[4867]: E0126 11:17:54.090656 4867 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 11:17:54 crc kubenswrapper[4867]: E0126 11:17:54.090719 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-26 11:17:58.090697011 +0000 UTC m=+27.789271921 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 11:17:54 crc kubenswrapper[4867]: I0126 11:17:54.130760 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/node-ca-nxkwj"] Jan 26 11:17:54 crc kubenswrapper[4867]: I0126 11:17:54.131390 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/node-ca-nxkwj" Jan 26 11:17:54 crc kubenswrapper[4867]: I0126 11:17:54.134543 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Jan 26 11:17:54 crc kubenswrapper[4867]: I0126 11:17:54.134762 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Jan 26 11:17:54 crc kubenswrapper[4867]: I0126 11:17:54.134778 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Jan 26 11:17:54 crc kubenswrapper[4867]: I0126 11:17:54.137388 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Jan 26 11:17:54 crc kubenswrapper[4867]: I0126 11:17:54.140186 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:17:54 crc kubenswrapper[4867]: I0126 11:17:54.140241 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:17:54 crc kubenswrapper[4867]: I0126 11:17:54.140251 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientPID" Jan 26 11:17:54 crc kubenswrapper[4867]: I0126 11:17:54.140268 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:17:54 crc kubenswrapper[4867]: I0126 11:17:54.140279 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:17:54Z","lastTransitionTime":"2026-01-26T11:17:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 11:17:54 crc kubenswrapper[4867]: I0126 11:17:54.149542 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://613d61cfb62d18a99623d3e4f89763c04f15484b764f8bb0c7a5e4457a3505d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\"
:{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa8d05ed772e4273e8962df2230bc8d45db5cfed967165716438d58c54b9e24b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:17:54Z is after 2025-08-24T17:21:41Z" Jan 26 
11:17:54 crc kubenswrapper[4867]: I0126 11:17:54.167502 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-hn8xr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dc37e5d1-ba44-4a54-ac36-ab7cdef17212\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:52Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:52Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xrjnj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T11:17:52Z\\\"}}\" for pod \"openshift-multus\"/\"multus-hn8xr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:17:54Z is after 2025-08-24T17:21:41Z" Jan 26 11:17:54 crc kubenswrapper[4867]: I0126 11:17:54.192084 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 11:17:54 crc kubenswrapper[4867]: I0126 11:17:54.192172 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/53ae46dc-30ef-4dfd-b80e-bacd7542634f-serviceca\") pod \"node-ca-nxkwj\" (UID: \"53ae46dc-30ef-4dfd-b80e-bacd7542634f\") " pod="openshift-image-registry/node-ca-nxkwj" Jan 26 11:17:54 crc kubenswrapper[4867]: I0126 11:17:54.192200 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b5h2x\" (UniqueName: \"kubernetes.io/projected/53ae46dc-30ef-4dfd-b80e-bacd7542634f-kube-api-access-b5h2x\") pod \"node-ca-nxkwj\" (UID: \"53ae46dc-30ef-4dfd-b80e-bacd7542634f\") " pod="openshift-image-registry/node-ca-nxkwj" Jan 26 11:17:54 crc kubenswrapper[4867]: I0126 
11:17:54.192334 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/53ae46dc-30ef-4dfd-b80e-bacd7542634f-host\") pod \"node-ca-nxkwj\" (UID: \"53ae46dc-30ef-4dfd-b80e-bacd7542634f\") " pod="openshift-image-registry/node-ca-nxkwj" Jan 26 11:17:54 crc kubenswrapper[4867]: E0126 11:17:54.192450 4867 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 26 11:17:54 crc kubenswrapper[4867]: E0126 11:17:54.192511 4867 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 26 11:17:54 crc kubenswrapper[4867]: E0126 11:17:54.192535 4867 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 11:17:54 crc kubenswrapper[4867]: E0126 11:17:54.192635 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-26 11:17:58.192606566 +0000 UTC m=+27.891181656 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 11:17:54 crc kubenswrapper[4867]: I0126 11:17:54.195064 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e36e94ce-bdbb-4b65-b38a-d591d99ec132\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://60d611d38d0a8a4fafcb8c6a4598878531015a3c9a31bee179693c997be0f5ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e7960050fb97fd034b58d207bdcc7aa794be098102ebc9b50f3b011c2079a292\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://f8e9df6d5bbbaa36f6ffe9e36239da17b2a7d5d85edebc9f108ede9bf7b7c0a2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://08f8529b91df2d7dcbf1fac6018657124b3922dfd4ad68bd7cf315594d6e7f81\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4d1617eeb2f19df288b984f761116ae902263d7ccf8406998d0e323fb7d52390\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-26T11:17:50Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0126 11:17:44.107498 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0126 11:17:44.109064 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1825304579/tls.crt::/tmp/serving-cert-1825304579/tls.key\\\\\\\"\\\\nI0126 11:17:50.303460 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0126 11:17:50.308764 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0126 11:17:50.308805 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0126 11:17:50.308906 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0126 11:17:50.308924 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0126 11:17:50.322907 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0126 11:17:50.322946 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 11:17:50.322953 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 11:17:50.322957 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0126 11:17:50.322960 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0126 11:17:50.322963 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0126 11:17:50.322966 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0126 11:17:50.323261 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0126 11:17:50.330430 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T11:17:33Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d41fab03911c1d3b84f4c34da0d1c8e82c8f2691450b5f9663e31ce329bf04b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:33Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b69d7e45a6059b1e01f80cc4efb536069c91cae05c8912d18fd48f25c6ebf51e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b69d7e45a6059b1e01f80cc4efb536069c91cae05c8912d18fd48f25c6ebf51e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T11:17:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-26T11:17:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T11:17:30Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:17:54Z is after 2025-08-24T17:21:41Z" Jan 26 11:17:54 crc kubenswrapper[4867]: I0126 11:17:54.211937 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0d7b1047-65d1-4695-b5a5-95ea6a8c3b54\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://237a29b411629c3650250f62fdec7d2c5412763d6cabb2ee0fe8e5e19e320e15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9a
d6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ef0db1149c70e41b7f44d00d43f883d14fcf635d6f34462dd37c8a550a15a4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://45d3dc673a8d20531ae5077a1196a368cac64f6afd15c4eb8f3910ee9bf51317\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true
,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc2ddda5781094e59bb54ddf724fa4f948a047b17b01f0185ef651d90a66f36f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T11:17:30Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:17:54Z is after 2025-08-24T17:21:41Z" Jan 26 11:17:54 crc kubenswrapper[4867]: I0126 11:17:54.227735 4867 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c572224f36b5a0fc8a86b3e61aa97b96f5bd189d245b31804a11aaa75d04774\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call 
webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:17:54Z is after 2025-08-24T17:21:41Z" Jan 26 11:17:54 crc kubenswrapper[4867]: I0126 11:17:54.242731 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:48Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:17:54Z is after 2025-08-24T17:21:41Z" Jan 26 11:17:54 crc kubenswrapper[4867]: I0126 11:17:54.243058 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:17:54 crc kubenswrapper[4867]: I0126 11:17:54.243115 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:17:54 crc kubenswrapper[4867]: I0126 11:17:54.243125 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:17:54 crc kubenswrapper[4867]: I0126 11:17:54.243145 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:17:54 crc kubenswrapper[4867]: I0126 11:17:54.243157 4867 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:17:54Z","lastTransitionTime":"2026-01-26T11:17:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 11:17:54 crc kubenswrapper[4867]: I0126 11:17:54.256658 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:48Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:17:54Z is after 2025-08-24T17:21:41Z" Jan 26 11:17:54 crc kubenswrapper[4867]: I0126 11:17:54.272569 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-9fjlf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d0cb57c7-fd32-41c2-b873-a3f017b9f1b1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:52Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins 
bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:52Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:52Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhrdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni
/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhrdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhrdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-
release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhrdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhrdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhrdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"
}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhrdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T11:17:52Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-9fjlf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:17:54Z is after 2025-08-24T17:21:41Z" Jan 26 11:17:54 crc kubenswrapper[4867]: I0126 11:17:54.287173 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-nxkwj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"53ae46dc-30ef-4dfd-b80e-bacd7542634f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:54Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b5h2x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T11:17:54Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-nxkwj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:17:54Z is after 2025-08-24T17:21:41Z" Jan 26 11:17:54 crc kubenswrapper[4867]: I0126 11:17:54.293729 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/53ae46dc-30ef-4dfd-b80e-bacd7542634f-serviceca\") pod \"node-ca-nxkwj\" (UID: \"53ae46dc-30ef-4dfd-b80e-bacd7542634f\") " pod="openshift-image-registry/node-ca-nxkwj" Jan 26 11:17:54 crc kubenswrapper[4867]: I0126 11:17:54.293770 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-b5h2x\" (UniqueName: \"kubernetes.io/projected/53ae46dc-30ef-4dfd-b80e-bacd7542634f-kube-api-access-b5h2x\") pod \"node-ca-nxkwj\" (UID: \"53ae46dc-30ef-4dfd-b80e-bacd7542634f\") " pod="openshift-image-registry/node-ca-nxkwj" Jan 26 11:17:54 crc kubenswrapper[4867]: I0126 11:17:54.293816 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/53ae46dc-30ef-4dfd-b80e-bacd7542634f-host\") pod \"node-ca-nxkwj\" (UID: \"53ae46dc-30ef-4dfd-b80e-bacd7542634f\") " pod="openshift-image-registry/node-ca-nxkwj" Jan 26 11:17:54 crc kubenswrapper[4867]: I0126 11:17:54.293884 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/53ae46dc-30ef-4dfd-b80e-bacd7542634f-host\") pod \"node-ca-nxkwj\" (UID: \"53ae46dc-30ef-4dfd-b80e-bacd7542634f\") " pod="openshift-image-registry/node-ca-nxkwj" Jan 26 11:17:54 crc kubenswrapper[4867]: I0126 11:17:54.294903 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/53ae46dc-30ef-4dfd-b80e-bacd7542634f-serviceca\") pod \"node-ca-nxkwj\" (UID: \"53ae46dc-30ef-4dfd-b80e-bacd7542634f\") " pod="openshift-image-registry/node-ca-nxkwj" Jan 26 11:17:54 crc kubenswrapper[4867]: I0126 11:17:54.307684 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4fbb2033-c5c9-48d1-971f-252b8982e64c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b260f2517962ffaecebf86c2b72c91486b825ca898f907378e8f8cea16d1db92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6843bd212ce4f17d2ae4d53441d56f7be28f8f49c8994458d611159101e193a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2aba77c7c6a2f5450fa60356c3eccf816726f0074d65a1d5805c4278d31157c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e941db67a8021f5eb4eb6e395c2369d052a4cde25d67eb1885768a064185d8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc9e90f1b0e6c09e4595a7e450325550012dc52c6a6fc4a95d14b37c0f939ce7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e22739a3c06bddfda493f25aca23cf47d71c8bde27a6b4bb505592b42350752\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0e22739a3c06bddfda493f25aca23cf47d71c8bde27a6b4bb505592b42350752\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-26T11:17:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T11:17:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://80b44caec3547caebca126742ca337ad1851edc3e9474127528a9e5184b5ec23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://80b44caec3547caebca126742ca337ad1851edc3e9474127528a9e5184b5ec23\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T11:17:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T11:17:32Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://83f7cf77acd26e7af095c4a5432efde85eef6dd5e434141cb5d6abd12770968e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://83f7cf77acd26e7af095c4a5432efde85eef6dd5e434141cb5d6abd12770968e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T11:17:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T11:17:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T11:17:30Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:17:54Z is after 2025-08-24T17:21:41Z" Jan 26 11:17:54 crc kubenswrapper[4867]: I0126 11:17:54.321784 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b5h2x\" (UniqueName: \"kubernetes.io/projected/53ae46dc-30ef-4dfd-b80e-bacd7542634f-kube-api-access-b5h2x\") pod \"node-ca-nxkwj\" (UID: \"53ae46dc-30ef-4dfd-b80e-bacd7542634f\") " pod="openshift-image-registry/node-ca-nxkwj" Jan 26 11:17:54 crc kubenswrapper[4867]: I0126 11:17:54.329895 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-g6cth" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"115cad9f-057f-4e63-b408-8fa7a358a191\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:52Z\\\",\\\"message\\\":\\\"containers with 
unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:52Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4wqzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4wqzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\"
:[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T11:17:52Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-g6cth\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:17:54Z is after 2025-08-24T17:21:41Z" Jan 26 11:17:54 crc kubenswrapper[4867]: I0126 11:17:54.346446 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:17:54 crc kubenswrapper[4867]: I0126 11:17:54.346510 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:17:54 crc kubenswrapper[4867]: I0126 11:17:54.346524 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:17:54 crc kubenswrapper[4867]: I0126 11:17:54.346449 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:48Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:17:54Z is after 2025-08-24T17:21:41Z" Jan 26 11:17:54 crc kubenswrapper[4867]: I0126 11:17:54.346568 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:17:54 crc kubenswrapper[4867]: I0126 11:17:54.346902 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:17:54Z","lastTransitionTime":"2026-01-26T11:17:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network 
plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 11:17:54 crc kubenswrapper[4867]: I0126 11:17:54.360151 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:48Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:17:54Z is after 2025-08-24T17:21:41Z" Jan 26 11:17:54 crc kubenswrapper[4867]: I0126 11:17:54.375373 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-wmdmh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6ad862fa-4af9-49f7-a629-ebf54a83ca45\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://726513fd6aefd76b58a4c8cf08fa4588aadcdb02e8dbe3bb4f23004f11eb29dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6hvh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T11:17:51Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-wmdmh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:17:54Z is after 2025-08-24T17:21:41Z" Jan 26 11:17:54 crc kubenswrapper[4867]: I0126 11:17:54.397992 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-p8ngn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4a3be637-cf04-4c55-bf72-67fdad83cc44\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:52Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:52Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:52Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node 
kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cn6f7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cn6f7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID
\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cn6f7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cn6f7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wai
ting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cn6f7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cn6f7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":
0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"na
me\\\":\\\"kube-api-access-cn6f7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cn6f7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cn6f7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T11:17:52Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-p8ngn\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:17:54Z is after 2025-08-24T17:21:41Z" Jan 26 11:17:54 crc kubenswrapper[4867]: I0126 11:17:54.447743 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/node-ca-nxkwj" Jan 26 11:17:54 crc kubenswrapper[4867]: I0126 11:17:54.450527 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:17:54 crc kubenswrapper[4867]: I0126 11:17:54.450562 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:17:54 crc kubenswrapper[4867]: I0126 11:17:54.450571 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:17:54 crc kubenswrapper[4867]: I0126 11:17:54.450589 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:17:54 crc kubenswrapper[4867]: I0126 11:17:54.450601 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:17:54Z","lastTransitionTime":"2026-01-26T11:17:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:17:54 crc kubenswrapper[4867]: I0126 11:17:54.521203 4867 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-09 15:53:23.61813258 +0000 UTC Jan 26 11:17:54 crc kubenswrapper[4867]: I0126 11:17:54.554412 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:17:54 crc kubenswrapper[4867]: I0126 11:17:54.554446 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:17:54 crc kubenswrapper[4867]: I0126 11:17:54.554455 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:17:54 crc kubenswrapper[4867]: I0126 11:17:54.554476 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:17:54 crc kubenswrapper[4867]: I0126 11:17:54.554489 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:17:54Z","lastTransitionTime":"2026-01-26T11:17:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 11:17:54 crc kubenswrapper[4867]: I0126 11:17:54.562959 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 11:17:54 crc kubenswrapper[4867]: I0126 11:17:54.563065 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 11:17:54 crc kubenswrapper[4867]: E0126 11:17:54.563156 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 11:17:54 crc kubenswrapper[4867]: I0126 11:17:54.563250 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 11:17:54 crc kubenswrapper[4867]: E0126 11:17:54.563559 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 11:17:54 crc kubenswrapper[4867]: E0126 11:17:54.563727 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 11:17:54 crc kubenswrapper[4867]: I0126 11:17:54.659428 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:17:54 crc kubenswrapper[4867]: I0126 11:17:54.659473 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:17:54 crc kubenswrapper[4867]: I0126 11:17:54.659484 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:17:54 crc kubenswrapper[4867]: I0126 11:17:54.659504 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:17:54 crc kubenswrapper[4867]: I0126 11:17:54.659516 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:17:54Z","lastTransitionTime":"2026-01-26T11:17:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:17:54 crc kubenswrapper[4867]: I0126 11:17:54.738739 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-hn8xr" event={"ID":"dc37e5d1-ba44-4a54-ac36-ab7cdef17212","Type":"ContainerStarted","Data":"519d5416896aca5923540078a8bd13f39a190ad4acda3d0bfb3375a5dbfe6b80"} Jan 26 11:17:54 crc kubenswrapper[4867]: I0126 11:17:54.743453 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-nxkwj" event={"ID":"53ae46dc-30ef-4dfd-b80e-bacd7542634f","Type":"ContainerStarted","Data":"0c0d4f3a814beeb265539223dc53bf0dc9c38ccd2897e9ca9e4fb309f87b78b3"} Jan 26 11:17:54 crc kubenswrapper[4867]: I0126 11:17:54.747156 4867 generic.go:334] "Generic (PLEG): container finished" podID="4a3be637-cf04-4c55-bf72-67fdad83cc44" containerID="388ea9d9185ed0fed9cbb0da08ab9aa6fcf1d81192fe2646cd51b54245c4e34b" exitCode=0 Jan 26 11:17:54 crc kubenswrapper[4867]: I0126 11:17:54.748033 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-p8ngn" event={"ID":"4a3be637-cf04-4c55-bf72-67fdad83cc44","Type":"ContainerDied","Data":"388ea9d9185ed0fed9cbb0da08ab9aa6fcf1d81192fe2646cd51b54245c4e34b"} Jan 26 11:17:54 crc kubenswrapper[4867]: I0126 11:17:54.752046 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"f8b0139861b48347f9916ca780499486eda856ef7c08cfe6c57201daf085bf61"} Jan 26 11:17:54 crc kubenswrapper[4867]: I0126 11:17:54.755827 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e36e94ce-bdbb-4b65-b38a-d591d99ec132\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://60d611d38d0a8a4fafcb8c6a4598878531015a3c9a31bee179693c997be0f5ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e7960050fb97fd034b58d207bdcc7aa794be098102ebc9b50f3b011c2079a292\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8e9df6d5bbbaa36f6ffe9e36239da17b2a7d5d85edebc9f108ede9bf7b7c0a2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://08f8529b91df2d7dcbf1fac6018657124b3922dfd4ad68bd7cf315594d6e7f81\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4d1617eeb2f19df288b984f761116ae902263d7ccf8406998d0e323fb7d52390\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-26T11:17:50Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0126 11:17:44.107498 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0126 11:17:44.109064 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1825304579/tls.crt::/tmp/serving-cert-1825304579/tls.key\\\\\\\"\\\\nI0126 11:17:50.303460 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0126 11:17:50.308764 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0126 11:17:50.308805 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0126 11:17:50.308906 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0126 11:17:50.308924 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0126 11:17:50.322907 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0126 11:17:50.322946 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 11:17:50.322953 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 11:17:50.322957 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0126 11:17:50.322960 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0126 11:17:50.322963 1 
secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0126 11:17:50.322966 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0126 11:17:50.323261 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0126 11:17:50.330430 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T11:17:33Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d41fab03911c1d3b84f4c34da0d1c8e82c8f2691450b5f9663e31ce329bf04b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:33Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b69d7e45a6059b1e01f80cc4efb536069c91cae05c8912d18fd48f25c6ebf51e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sh
a256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b69d7e45a6059b1e01f80cc4efb536069c91cae05c8912d18fd48f25c6ebf51e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T11:17:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T11:17:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T11:17:30Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:17:54Z is after 2025-08-24T17:21:41Z" Jan 26 11:17:54 crc kubenswrapper[4867]: I0126 11:17:54.756062 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-g6cth" event={"ID":"115cad9f-057f-4e63-b408-8fa7a358a191","Type":"ContainerStarted","Data":"ff5997c5ecb7b6a4257e45de25c7b8f3cbe39cc5efe49cf2e2baff5447a947b0"} Jan 26 11:17:54 crc kubenswrapper[4867]: I0126 11:17:54.756106 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-g6cth" event={"ID":"115cad9f-057f-4e63-b408-8fa7a358a191","Type":"ContainerStarted","Data":"a97a794991491a7b7ac8c0d20d0fa2f4f36c8ba882962b5c203933431d324105"} Jan 26 11:17:54 crc kubenswrapper[4867]: I0126 11:17:54.758354 4867 generic.go:334] "Generic (PLEG): container finished" podID="d0cb57c7-fd32-41c2-b873-a3f017b9f1b1" 
containerID="49541a51fced16e579b4cd10875fe7e660d7674508aba3ea44011642241e6556" exitCode=0 Jan 26 11:17:54 crc kubenswrapper[4867]: I0126 11:17:54.758422 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-9fjlf" event={"ID":"d0cb57c7-fd32-41c2-b873-a3f017b9f1b1","Type":"ContainerDied","Data":"49541a51fced16e579b4cd10875fe7e660d7674508aba3ea44011642241e6556"} Jan 26 11:17:54 crc kubenswrapper[4867]: I0126 11:17:54.761435 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:17:54 crc kubenswrapper[4867]: I0126 11:17:54.761485 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:17:54 crc kubenswrapper[4867]: I0126 11:17:54.761501 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:17:54 crc kubenswrapper[4867]: I0126 11:17:54.761520 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:17:54 crc kubenswrapper[4867]: I0126 11:17:54.761535 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:17:54Z","lastTransitionTime":"2026-01-26T11:17:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:17:54 crc kubenswrapper[4867]: I0126 11:17:54.773468 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0d7b1047-65d1-4695-b5a5-95ea6a8c3b54\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://237a29b411629c3650250f62fdec7d2c5412763d6cabb2ee0fe8e5e19e320e15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ef0db1149c
70e41b7f44d00d43f883d14fcf635d6f34462dd37c8a550a15a4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://45d3dc673a8d20531ae5077a1196a368cac64f6afd15c4eb8f3910ee9bf51317\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc2ddda5781094e59bb54ddf724fa4f948a047b17b01f0185ef651d90a66f36f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:850
6ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T11:17:30Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:17:54Z is after 2025-08-24T17:21:41Z" Jan 26 11:17:54 crc kubenswrapper[4867]: I0126 11:17:54.795408 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c572224f36b5a0fc8a86b3e61aa97b96f5bd189d245b31804a11aaa75d04774\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-26T11:17:54Z is after 2025-08-24T17:21:41Z" Jan 26 11:17:54 crc kubenswrapper[4867]: I0126 11:17:54.816953 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://613d61cfb62d18a99623d3e4f89763c04f15484b764f8bb0c7a5e4457a3505d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://fa8d05ed772e4273e8962df2230bc8d45db5cfed967165716438d58c54b9e24b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:17:54Z is after 2025-08-24T17:21:41Z" Jan 26 11:17:54 crc kubenswrapper[4867]: I0126 11:17:54.835232 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-hn8xr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dc37e5d1-ba44-4a54-ac36-ab7cdef17212\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://519d5416896aca5923540078a8bd13f39a190ad4acda3d0bfb3375a5dbfe6b80\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xrjnj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T11:17:52Z\\\"}}\" for pod \"openshift-multus\"/\"multus-hn8xr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:17:54Z is after 2025-08-24T17:21:41Z" Jan 26 11:17:54 crc kubenswrapper[4867]: I0126 11:17:54.849849 4867 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:48Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:17:54Z is after 2025-08-24T17:21:41Z" Jan 26 11:17:54 crc kubenswrapper[4867]: I0126 11:17:54.863938 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:48Z\\\",\\\"message\\\":\\\"containers with unready 
status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:17:54Z is after 2025-08-24T17:21:41Z" Jan 26 11:17:54 crc kubenswrapper[4867]: I0126 11:17:54.868466 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:17:54 crc kubenswrapper[4867]: I0126 11:17:54.868559 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:17:54 crc kubenswrapper[4867]: I0126 
11:17:54.868576 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:17:54 crc kubenswrapper[4867]: I0126 11:17:54.868635 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:17:54 crc kubenswrapper[4867]: I0126 11:17:54.868652 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:17:54Z","lastTransitionTime":"2026-01-26T11:17:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 11:17:54 crc kubenswrapper[4867]: I0126 11:17:54.878328 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-9fjlf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d0cb57c7-fd32-41c2-b873-a3f017b9f1b1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:52Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:52Z\\\",\\\"message\\\":\\\"containers with 
unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:52Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhrdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhrdq\\\",\\\"readOn
ly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhrdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhrdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev
/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhrdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhrdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\
\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhrdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T11:17:52Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-9fjlf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:17:54Z is after 2025-08-24T17:21:41Z" Jan 26 11:17:54 crc kubenswrapper[4867]: I0126 11:17:54.890023 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-nxkwj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"53ae46dc-30ef-4dfd-b80e-bacd7542634f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:54Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b5h2x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T11:17:54Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-nxkwj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:17:54Z is after 2025-08-24T17:21:41Z" Jan 26 11:17:54 crc kubenswrapper[4867]: I0126 11:17:54.977602 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:17:54 crc kubenswrapper[4867]: I0126 11:17:54.977654 4867 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:17:54 crc kubenswrapper[4867]: I0126 11:17:54.977663 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:17:54 crc kubenswrapper[4867]: I0126 11:17:54.977684 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:17:54 crc kubenswrapper[4867]: I0126 11:17:54.977697 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:17:54Z","lastTransitionTime":"2026-01-26T11:17:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 11:17:54 crc kubenswrapper[4867]: I0126 11:17:54.992692 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4fbb2033-c5c9-48d1-971f-252b8982e64c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b260f2517962ffaecebf86c2b72c91486b825ca898f907378e8f8cea16d1db92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6843bd212ce4f17d2ae4d53441d56f7be28f8f49c8994458d611159101e193a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2aba77c7c6a2f5450fa60356c3eccf816726f0074d65a1d5805c4278d31157c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e941db67a8021f5eb4eb6e395c2369d052a4cde25d67eb1885768a064185d8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc9e90f1b0e6c09e4595a7e450325550012dc52c6a6fc4a95d14b37c0f939ce7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e22739a3c06bddfda493f25aca23cf47d71c8bde27a6b4bb505592b42350752\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0e22739a3c06bddfda493f25aca23cf47d71c8bde27a6b4bb505592b42350752\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-26T11:17:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T11:17:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://80b44caec3547caebca126742ca337ad1851edc3e9474127528a9e5184b5ec23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://80b44caec3547caebca126742ca337ad1851edc3e9474127528a9e5184b5ec23\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T11:17:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T11:17:32Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://83f7cf77acd26e7af095c4a5432efde85eef6dd5e434141cb5d6abd12770968e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://83f7cf77acd26e7af095c4a5432efde85eef6dd5e434141cb5d6abd12770968e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T11:17:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T11:17:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T11:17:30Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:17:54Z is after 2025-08-24T17:21:41Z" Jan 26 11:17:55 crc kubenswrapper[4867]: I0126 11:17:55.009314 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-g6cth" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"115cad9f-057f-4e63-b408-8fa7a358a191\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:52Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:52Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4wqzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4wqzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T11:17:52Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-g6cth\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:17:55Z is after 2025-08-24T17:21:41Z" Jan 26 11:17:55 crc kubenswrapper[4867]: I0126 11:17:55.025598 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:48Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:17:55Z is after 2025-08-24T17:21:41Z" Jan 26 11:17:55 crc kubenswrapper[4867]: I0126 11:17:55.041315 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:48Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:17:55Z is after 2025-08-24T17:21:41Z" Jan 26 11:17:55 crc kubenswrapper[4867]: I0126 11:17:55.055739 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-wmdmh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6ad862fa-4af9-49f7-a629-ebf54a83ca45\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://726513fd6aefd76b58a4c8cf08fa4588aadcdb02e8dbe3bb4f23004f11eb29dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6hvh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T11:17:51Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-wmdmh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:17:55Z is after 2025-08-24T17:21:41Z" Jan 26 11:17:55 crc kubenswrapper[4867]: I0126 11:17:55.076958 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-p8ngn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4a3be637-cf04-4c55-bf72-67fdad83cc44\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:52Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:52Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:52Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node 
kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cn6f7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cn6f7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID
\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cn6f7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cn6f7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wai
ting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cn6f7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cn6f7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":
0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"na
me\\\":\\\"kube-api-access-cn6f7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cn6f7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cn6f7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T11:17:52Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-p8ngn\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:17:55Z is after 2025-08-24T17:21:41Z" Jan 26 11:17:55 crc kubenswrapper[4867]: I0126 11:17:55.082262 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:17:55 crc kubenswrapper[4867]: I0126 11:17:55.082305 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:17:55 crc kubenswrapper[4867]: I0126 11:17:55.082315 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:17:55 crc kubenswrapper[4867]: I0126 11:17:55.082332 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:17:55 crc kubenswrapper[4867]: I0126 11:17:55.082346 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:17:55Z","lastTransitionTime":"2026-01-26T11:17:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:17:55 crc kubenswrapper[4867]: I0126 11:17:55.091954 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:48Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:17:55Z is after 2025-08-24T17:21:41Z" Jan 26 11:17:55 crc kubenswrapper[4867]: I0126 11:17:55.103172 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-wmdmh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6ad862fa-4af9-49f7-a629-ebf54a83ca45\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://726513fd6aefd76b58a4c8cf08fa4588aadcdb02e8dbe3bb4f23004f11eb29dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6hvh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T11:17:51Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-wmdmh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:17:55Z is after 2025-08-24T17:21:41Z" Jan 26 11:17:55 crc kubenswrapper[4867]: I0126 11:17:55.121301 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-p8ngn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4a3be637-cf04-4c55-bf72-67fdad83cc44\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:52Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:52Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cn6f7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cn6f7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cn6f7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cn6f7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cn6f7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cn6f7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cn6f7\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cn6f7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://388ea9d9185ed0fed9cbb0da08ab9aa6fcf1d81192fe2646cd51b54245c4e34b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://388ea9d9185ed0fed9cbb0da08ab9aa6fcf1d81192fe2646cd51b54245c4e34b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T11:17:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T11:17:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cn6f7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T11:17:52Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-p8ngn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:17:55Z is after 2025-08-24T17:21:41Z" Jan 26 11:17:55 crc kubenswrapper[4867]: I0126 11:17:55.137893 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:48Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:17:55Z is after 2025-08-24T17:21:41Z" Jan 26 11:17:55 crc kubenswrapper[4867]: I0126 11:17:55.159494 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e36e94ce-bdbb-4b65-b38a-d591d99ec132\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://60d611d38d0a8a4fafcb8c6a4598878531015a3c9a31bee179693c997be0f5ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e7960050fb97fd034b58d207bdcc7aa794be098102ebc9b50f3b011c2079a292\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8e9df6d5bbbaa36f6ffe9e36239da17b2a7d5d85edebc9f108ede9bf7b7c0a2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://08f8529b91df2d7dcbf1fac6018657124b3922dfd4ad68bd7cf315594d6e7f81\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4d1617eeb2f19df288b984f761116ae902263d7ccf8406998d0e323fb7d52390\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-26T11:17:50Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0126 11:17:44.107498 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0126 11:17:44.109064 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1825304579/tls.crt::/tmp/serving-cert-1825304579/tls.key\\\\\\\"\\\\nI0126 11:17:50.303460 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0126 11:17:50.308764 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0126 11:17:50.308805 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0126 11:17:50.308906 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0126 11:17:50.308924 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0126 11:17:50.322907 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0126 11:17:50.322946 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 11:17:50.322953 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 11:17:50.322957 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0126 11:17:50.322960 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0126 11:17:50.322963 1 
secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0126 11:17:50.322966 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0126 11:17:50.323261 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0126 11:17:50.330430 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T11:17:33Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d41fab03911c1d3b84f4c34da0d1c8e82c8f2691450b5f9663e31ce329bf04b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:33Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b69d7e45a6059b1e01f80cc4efb536069c91cae05c8912d18fd48f25c6ebf51e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sh
a256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b69d7e45a6059b1e01f80cc4efb536069c91cae05c8912d18fd48f25c6ebf51e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T11:17:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T11:17:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T11:17:30Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:17:55Z is after 2025-08-24T17:21:41Z" Jan 26 11:17:55 crc kubenswrapper[4867]: I0126 11:17:55.181084 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0d7b1047-65d1-4695-b5a5-95ea6a8c3b54\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://237a29b411629c3650250f62fdec7d2c5412763d6cabb2ee0fe8e5e19e320e15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ef0db1149c70e41b7f44d00d43f883d14fcf635d6f34462dd37c8a550a15a4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://45d3dc673a8d20531ae5077a1196a368cac64f6afd15c4eb8f3910ee9bf51317\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc2ddda5781094e59bb54ddf724fa4f948a047b17b01f0185ef651d90a66f36f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-26T11:17:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T11:17:30Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:17:55Z is after 2025-08-24T17:21:41Z" Jan 26 11:17:55 crc kubenswrapper[4867]: I0126 11:17:55.187924 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:17:55 crc kubenswrapper[4867]: I0126 11:17:55.187983 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:17:55 crc kubenswrapper[4867]: I0126 11:17:55.187995 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:17:55 crc kubenswrapper[4867]: I0126 11:17:55.188016 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:17:55 crc kubenswrapper[4867]: I0126 11:17:55.188029 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:17:55Z","lastTransitionTime":"2026-01-26T11:17:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns 
error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 11:17:55 crc kubenswrapper[4867]: I0126 11:17:55.198739 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c572224f36b5a0fc8a86b3e61aa97b96f5bd189d245b31804a11aaa75d04774\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursive
ReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:17:55Z is after 2025-08-24T17:21:41Z" Jan 26 11:17:55 crc kubenswrapper[4867]: I0126 11:17:55.217185 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://613d61cfb62d18a99623d3e4f89763c04f15484b764f8bb0c7a5e4457a3505d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\
\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa8d05ed772e4273e8962df2230bc8d45db5cfed967165716438d58c54b9e24b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:17:55Z is after 2025-08-24T17:21:41Z" Jan 26 11:17:55 crc kubenswrapper[4867]: I0126 11:17:55.232700 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-hn8xr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dc37e5d1-ba44-4a54-ac36-ab7cdef17212\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://519d5416896aca5923540078a8bd13f39a190ad4acda3d0bfb3375a5dbfe6b80\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xrjnj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T11:17:52Z\\\"}}\" for pod \"openshift-multus\"/\"multus-hn8xr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:17:55Z is after 2025-08-24T17:21:41Z" Jan 26 11:17:55 crc kubenswrapper[4867]: I0126 11:17:55.252890 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to 
patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:54Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:54Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8b0139861b48347f9916ca780499486eda856ef7c08cfe6c57201daf085bf61\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: 
certificate has expired or is not yet valid: current time 2026-01-26T11:17:55Z is after 2025-08-24T17:21:41Z" Jan 26 11:17:55 crc kubenswrapper[4867]: I0126 11:17:55.272951 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-9fjlf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d0cb57c7-fd32-41c2-b873-a3f017b9f1b1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:52Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:52Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:52Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhrdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://49541a51fced16e579b4cd10875fe7e660d7674508aba3ea44011642241e6556\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://49541a51fced16e579b4cd10875fe7e660d7674508aba3ea44011642241e6556\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T11:17:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T11:17:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhrdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhrdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube
-api-access-rhrdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhrdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhrdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\
\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhrdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T11:17:52Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-9fjlf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:17:55Z is after 2025-08-24T17:21:41Z" Jan 26 11:17:55 crc kubenswrapper[4867]: I0126 11:17:55.287457 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-nxkwj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"53ae46dc-30ef-4dfd-b80e-bacd7542634f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:54Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b5h2x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T11:17:54Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-nxkwj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:17:55Z is after 2025-08-24T17:21:41Z" Jan 26 11:17:55 crc kubenswrapper[4867]: I0126 11:17:55.290547 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:17:55 crc kubenswrapper[4867]: I0126 11:17:55.290579 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:17:55 crc kubenswrapper[4867]: I0126 11:17:55.290590 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 
11:17:55 crc kubenswrapper[4867]: I0126 11:17:55.290609 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:17:55 crc kubenswrapper[4867]: I0126 11:17:55.290624 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:17:55Z","lastTransitionTime":"2026-01-26T11:17:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 11:17:55 crc kubenswrapper[4867]: I0126 11:17:55.303648 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:48Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:17:55Z is after 2025-08-24T17:21:41Z" Jan 26 11:17:55 crc kubenswrapper[4867]: I0126 11:17:55.318084 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-g6cth" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"115cad9f-057f-4e63-b408-8fa7a358a191\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ff5997c5ecb7b6a4257e45de25c7b8f3cbe39cc5efe49cf2e2baff5447a947b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4wqzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a97a794991491a7b7ac8c0d20d0fa2f4f36c8ba8
82962b5c203933431d324105\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4wqzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T11:17:52Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-g6cth\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:17:55Z is after 2025-08-24T17:21:41Z" Jan 26 11:17:55 crc kubenswrapper[4867]: I0126 11:17:55.336781 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4fbb2033-c5c9-48d1-971f-252b8982e64c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b260f2517962ffaecebf86c2b72c91486b825ca898f907378e8f8cea16d1db92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6843bd212ce4f17d2ae4d53441d56f7be28f8f49c8994458d611159101e193a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2aba77c7c6a2f5450fa60356c3eccf816726f0074d65a1d5805c4278d31157c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e941db67a8021f5eb4eb6e395c2369d052a4cde25d67eb1885768a064185d8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc9e90f1b0e6c09e4595a7e450325550012dc52c6a6fc4a95d14b37c0f939ce7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e22739a3c06bddfda493f25aca23cf47d71c8bde27a6b4bb505592b42350752\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0e22739a3c06bddfda493f25aca23cf47d71c8bde27a6b4bb505592b42350752\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-26T11:17:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T11:17:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://80b44caec3547caebca126742ca337ad1851edc3e9474127528a9e5184b5ec23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://80b44caec3547caebca126742ca337ad1851edc3e9474127528a9e5184b5ec23\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T11:17:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T11:17:32Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://83f7cf77acd26e7af095c4a5432efde85eef6dd5e434141cb5d6abd12770968e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://83f7cf77acd26e7af095c4a5432efde85eef6dd5e434141cb5d6abd12770968e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T11:17:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T11:17:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T11:17:30Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:17:55Z is after 2025-08-24T17:21:41Z" Jan 26 11:17:55 crc kubenswrapper[4867]: I0126 11:17:55.396016 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:17:55 crc kubenswrapper[4867]: I0126 11:17:55.396075 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:17:55 crc kubenswrapper[4867]: I0126 11:17:55.396103 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:17:55 crc kubenswrapper[4867]: I0126 11:17:55.396134 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:17:55 crc kubenswrapper[4867]: I0126 11:17:55.396147 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:17:55Z","lastTransitionTime":"2026-01-26T11:17:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:17:55 crc kubenswrapper[4867]: I0126 11:17:55.500019 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:17:55 crc kubenswrapper[4867]: I0126 11:17:55.500278 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:17:55 crc kubenswrapper[4867]: I0126 11:17:55.500360 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:17:55 crc kubenswrapper[4867]: I0126 11:17:55.500449 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:17:55 crc kubenswrapper[4867]: I0126 11:17:55.500511 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:17:55Z","lastTransitionTime":"2026-01-26T11:17:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:17:55 crc kubenswrapper[4867]: I0126 11:17:55.521945 4867 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-09 22:23:42.48003809 +0000 UTC Jan 26 11:17:55 crc kubenswrapper[4867]: I0126 11:17:55.602714 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:17:55 crc kubenswrapper[4867]: I0126 11:17:55.602774 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:17:55 crc kubenswrapper[4867]: I0126 11:17:55.602789 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:17:55 crc kubenswrapper[4867]: I0126 11:17:55.602810 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:17:55 crc kubenswrapper[4867]: I0126 11:17:55.602825 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:17:55Z","lastTransitionTime":"2026-01-26T11:17:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:17:55 crc kubenswrapper[4867]: I0126 11:17:55.706397 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:17:55 crc kubenswrapper[4867]: I0126 11:17:55.706452 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:17:55 crc kubenswrapper[4867]: I0126 11:17:55.706466 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:17:55 crc kubenswrapper[4867]: I0126 11:17:55.706517 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:17:55 crc kubenswrapper[4867]: I0126 11:17:55.706534 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:17:55Z","lastTransitionTime":"2026-01-26T11:17:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:17:55 crc kubenswrapper[4867]: I0126 11:17:55.764707 4867 generic.go:334] "Generic (PLEG): container finished" podID="d0cb57c7-fd32-41c2-b873-a3f017b9f1b1" containerID="de0898b43ff7857dd40569f0552c8802c9bb5ecdf3cbd23f552dfb402a02044c" exitCode=0 Jan 26 11:17:55 crc kubenswrapper[4867]: I0126 11:17:55.764776 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-9fjlf" event={"ID":"d0cb57c7-fd32-41c2-b873-a3f017b9f1b1","Type":"ContainerDied","Data":"de0898b43ff7857dd40569f0552c8802c9bb5ecdf3cbd23f552dfb402a02044c"} Jan 26 11:17:55 crc kubenswrapper[4867]: I0126 11:17:55.766312 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-nxkwj" event={"ID":"53ae46dc-30ef-4dfd-b80e-bacd7542634f","Type":"ContainerStarted","Data":"a6b647bab472836bbf6aebd01d20d186c5a3fb95f20cc9f44ec837d93c7df617"} Jan 26 11:17:55 crc kubenswrapper[4867]: I0126 11:17:55.770557 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-p8ngn" event={"ID":"4a3be637-cf04-4c55-bf72-67fdad83cc44","Type":"ContainerStarted","Data":"adb68b135bd42f78556598af17aa21dc7daff1246e821ca51cb683e46974a85a"} Jan 26 11:17:55 crc kubenswrapper[4867]: I0126 11:17:55.770596 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-p8ngn" event={"ID":"4a3be637-cf04-4c55-bf72-67fdad83cc44","Type":"ContainerStarted","Data":"30fe31f1d314dace6e6ee0bc31b2127ce6e6db198975c78505b0f6f07aa6f30c"} Jan 26 11:17:55 crc kubenswrapper[4867]: I0126 11:17:55.770611 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-p8ngn" event={"ID":"4a3be637-cf04-4c55-bf72-67fdad83cc44","Type":"ContainerStarted","Data":"d9025121ad416b7b6ff2f2eb7ecb3961eccbbae7b5ce4b394161ad99c5901199"} Jan 26 11:17:55 crc kubenswrapper[4867]: I0126 11:17:55.770621 4867 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-p8ngn" event={"ID":"4a3be637-cf04-4c55-bf72-67fdad83cc44","Type":"ContainerStarted","Data":"c0a7bea27cb1d32235548b7f7e457dbecaf0ac88bdd9866302c758680990d4af"} Jan 26 11:17:55 crc kubenswrapper[4867]: I0126 11:17:55.770632 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-p8ngn" event={"ID":"4a3be637-cf04-4c55-bf72-67fdad83cc44","Type":"ContainerStarted","Data":"99df8b0b5fa7178489716f40f806caff969c5045ab573ae343b222ed80bf0799"} Jan 26 11:17:55 crc kubenswrapper[4867]: I0126 11:17:55.770643 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-p8ngn" event={"ID":"4a3be637-cf04-4c55-bf72-67fdad83cc44","Type":"ContainerStarted","Data":"192f11212d63cdabfda1649589946ca65e5c4a38805c85b94e4277f2e8c7bf57"} Jan 26 11:17:55 crc kubenswrapper[4867]: I0126 11:17:55.784781 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:48Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:17:55Z is after 2025-08-24T17:21:41Z" Jan 26 11:17:55 crc kubenswrapper[4867]: I0126 11:17:55.796551 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-wmdmh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6ad862fa-4af9-49f7-a629-ebf54a83ca45\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://726513fd6aefd76b58a4c8cf08fa4588aadcdb02e8dbe3bb4f23004f11eb29dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6hvh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T11:17:51Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-wmdmh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:17:55Z is after 2025-08-24T17:21:41Z" Jan 26 11:17:55 crc kubenswrapper[4867]: I0126 11:17:55.809524 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:17:55 crc kubenswrapper[4867]: I0126 11:17:55.809558 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:17:55 crc kubenswrapper[4867]: I0126 11:17:55.809570 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:17:55 crc kubenswrapper[4867]: I0126 11:17:55.809587 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:17:55 crc kubenswrapper[4867]: I0126 11:17:55.809599 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:17:55Z","lastTransitionTime":"2026-01-26T11:17:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:17:55 crc kubenswrapper[4867]: I0126 11:17:55.816924 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-p8ngn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4a3be637-cf04-4c55-bf72-67fdad83cc44\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:52Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:52Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cn6f7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cn6f7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cn6f7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cn6f7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cn6f7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cn6f7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cn6f7\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cn6f7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://388ea9d9185ed0fed9cbb0da08ab9aa6fcf1d81192fe2646cd51b54245c4e34b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://388ea9d9185ed0fed9cbb0da08ab9aa6fcf1d81192fe2646cd51b54245c4e34b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T11:17:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T11:17:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cn6f7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T11:17:52Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-p8ngn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:17:55Z is after 2025-08-24T17:21:41Z" Jan 26 11:17:55 crc kubenswrapper[4867]: I0126 11:17:55.837600 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:48Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:17:55Z is after 2025-08-24T17:21:41Z" Jan 26 11:17:55 crc kubenswrapper[4867]: I0126 11:17:55.853612 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e36e94ce-bdbb-4b65-b38a-d591d99ec132\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://60d611d38d0a8a4fafcb8c6a4598878531015a3c9a31bee179693c997be0f5ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e7960050fb97fd034b58d207bdcc7aa794be098102ebc9b50f3b011c2079a292\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8e9df6d5bbbaa36f6ffe9e36239da17b2a7d5d85edebc9f108ede9bf7b7c0a2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://08f8529b91df2d7dcbf1fac6018657124b3922dfd4ad68bd7cf315594d6e7f81\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4d1617eeb2f19df288b984f761116ae902263d7ccf8406998d0e323fb7d52390\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-26T11:17:50Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0126 11:17:44.107498 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0126 11:17:44.109064 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1825304579/tls.crt::/tmp/serving-cert-1825304579/tls.key\\\\\\\"\\\\nI0126 11:17:50.303460 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0126 11:17:50.308764 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0126 11:17:50.308805 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0126 11:17:50.308906 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0126 11:17:50.308924 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0126 11:17:50.322907 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0126 11:17:50.322946 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 11:17:50.322953 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 11:17:50.322957 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0126 11:17:50.322960 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0126 11:17:50.322963 1 
secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0126 11:17:50.322966 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0126 11:17:50.323261 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0126 11:17:50.330430 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T11:17:33Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d41fab03911c1d3b84f4c34da0d1c8e82c8f2691450b5f9663e31ce329bf04b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:33Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b69d7e45a6059b1e01f80cc4efb536069c91cae05c8912d18fd48f25c6ebf51e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sh
a256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b69d7e45a6059b1e01f80cc4efb536069c91cae05c8912d18fd48f25c6ebf51e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T11:17:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T11:17:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T11:17:30Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:17:55Z is after 2025-08-24T17:21:41Z" Jan 26 11:17:55 crc kubenswrapper[4867]: I0126 11:17:55.866691 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0d7b1047-65d1-4695-b5a5-95ea6a8c3b54\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://237a29b411629c3650250f62fdec7d2c5412763d6cabb2ee0fe8e5e19e320e15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ef0db1149c70e41b7f44d00d43f883d14fcf635d6f34462dd37c8a550a15a4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://45d3dc673a8d20531ae5077a1196a368cac64f6afd15c4eb8f3910ee9bf51317\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc2ddda5781094e59bb54ddf724fa4f948a047b17b01f0185ef651d90a66f36f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-26T11:17:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T11:17:30Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:17:55Z is after 2025-08-24T17:21:41Z" Jan 26 11:17:55 crc kubenswrapper[4867]: I0126 11:17:55.883111 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c572224f36b5a0fc8a86b3e61aa97b96f5bd189d245b31804a11aaa75d04774\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-26T11:17:55Z is after 2025-08-24T17:21:41Z" Jan 26 11:17:55 crc kubenswrapper[4867]: I0126 11:17:55.897567 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://613d61cfb62d18a99623d3e4f89763c04f15484b764f8bb0c7a5e4457a3505d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://fa8d05ed772e4273e8962df2230bc8d45db5cfed967165716438d58c54b9e24b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:17:55Z is after 2025-08-24T17:21:41Z" Jan 26 11:17:55 crc kubenswrapper[4867]: I0126 11:17:55.911487 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-hn8xr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dc37e5d1-ba44-4a54-ac36-ab7cdef17212\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://519d5416896aca5923540078a8bd13f39a190ad4acda3d0bfb3375a5dbfe6b80\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xrjnj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T11:17:52Z\\\"}}\" for pod \"openshift-multus\"/\"multus-hn8xr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:17:55Z is after 2025-08-24T17:21:41Z" Jan 26 11:17:55 crc kubenswrapper[4867]: I0126 11:17:55.912417 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:17:55 crc 
kubenswrapper[4867]: I0126 11:17:55.912462 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:17:55 crc kubenswrapper[4867]: I0126 11:17:55.912475 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:17:55 crc kubenswrapper[4867]: I0126 11:17:55.912496 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:17:55 crc kubenswrapper[4867]: I0126 11:17:55.912509 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:17:55Z","lastTransitionTime":"2026-01-26T11:17:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 11:17:55 crc kubenswrapper[4867]: I0126 11:17:55.924942 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:54Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:54Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8b0139861b48347f9916ca780499486eda856ef7c08cfe6c57201daf085bf61\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-26T11:17:55Z is after 2025-08-24T17:21:41Z" Jan 26 11:17:55 crc kubenswrapper[4867]: I0126 11:17:55.941046 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-9fjlf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d0cb57c7-fd32-41c2-b873-a3f017b9f1b1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:52Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:52Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:52Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhrdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://49541a51fced16e579b4cd10875fe7e660d7674508aba3ea44011642241e6556\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://49541a51fced16e579b4cd10875fe7e660d7674508aba3ea44011642241e6556\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T11:17:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T11:17:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhrdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://de0898b43ff7857dd40569f0552c8802c9bb5ecdf3cbd23f552dfb402a02044c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://de0898b43ff7857dd40569f0552c8802c9bb5ecdf3cbd23f552dfb402a02044c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T11:17:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T11:17:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhrdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodIn
itializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhrdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhrdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-
release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhrdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhrdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T11:17:52Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-9fjlf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:17:55Z is after 2025-08-24T17:21:41Z" Jan 26 11:17:55 crc kubenswrapper[4867]: I0126 11:17:55.952134 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-nxkwj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"53ae46dc-30ef-4dfd-b80e-bacd7542634f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:54Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b5h2x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T11:17:54Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-nxkwj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:17:55Z is after 2025-08-24T17:21:41Z" Jan 26 11:17:55 crc kubenswrapper[4867]: I0126 11:17:55.966327 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:48Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:17:55Z is after 2025-08-24T17:21:41Z" Jan 26 11:17:55 crc kubenswrapper[4867]: I0126 11:17:55.978127 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-g6cth" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"115cad9f-057f-4e63-b408-8fa7a358a191\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ff5997c5ecb7b6a4257e45de25c7b8f3cbe39cc5efe49cf2e2baff5447a947b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4wqzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a97a794991491a7b7ac8c0d20d0fa2f4f36c8ba8
82962b5c203933431d324105\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4wqzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T11:17:52Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-g6cth\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:17:55Z is after 2025-08-24T17:21:41Z" Jan 26 11:17:55 crc kubenswrapper[4867]: I0126 11:17:55.997290 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4fbb2033-c5c9-48d1-971f-252b8982e64c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b260f2517962ffaecebf86c2b72c91486b825ca898f907378e8f8cea16d1db92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6843bd212ce4f17d2ae4d53441d56f7be28f8f49c8994458d611159101e193a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2aba77c7c6a2f5450fa60356c3eccf816726f0074d65a1d5805c4278d31157c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e941db67a8021f5eb4eb6e395c2369d052a4cde25d67eb1885768a064185d8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc9e90f1b0e6c09e4595a7e450325550012dc52c6a6fc4a95d14b37c0f939ce7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e22739a3c06bddfda493f25aca23cf47d71c8bde27a6b4bb505592b42350752\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0e22739a3c06bddfda493f25aca23cf47d71c8bde27a6b4bb505592b42350752\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-26T11:17:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T11:17:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://80b44caec3547caebca126742ca337ad1851edc3e9474127528a9e5184b5ec23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://80b44caec3547caebca126742ca337ad1851edc3e9474127528a9e5184b5ec23\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T11:17:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T11:17:32Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://83f7cf77acd26e7af095c4a5432efde85eef6dd5e434141cb5d6abd12770968e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://83f7cf77acd26e7af095c4a5432efde85eef6dd5e434141cb5d6abd12770968e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T11:17:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T11:17:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T11:17:30Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:17:55Z is after 2025-08-24T17:21:41Z" Jan 26 11:17:56 crc kubenswrapper[4867]: I0126 11:17:56.011297 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e36e94ce-bdbb-4b65-b38a-d591d99ec132\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://60d611d38d0a8a4fafcb8c6a4598878531015a3c9a31bee179693c997be0f5ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e7960050fb97fd034b58d207bdcc7aa794be098102ebc9b50f3b011c2079a292\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://f8e9df6d5bbbaa36f6ffe9e36239da17b2a7d5d85edebc9f108ede9bf7b7c0a2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://08f8529b91df2d7dcbf1fac6018657124b3922dfd4ad68bd7cf315594d6e7f81\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4d1617eeb2f19df288b984f761116ae902263d7ccf8406998d0e323fb7d52390\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-26T11:17:50Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0126 11:17:44.107498 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0126 11:17:44.109064 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1825304579/tls.crt::/tmp/serving-cert-1825304579/tls.key\\\\\\\"\\\\nI0126 11:17:50.303460 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0126 11:17:50.308764 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0126 11:17:50.308805 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0126 11:17:50.308906 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0126 11:17:50.308924 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0126 11:17:50.322907 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0126 11:17:50.322946 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 11:17:50.322953 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 11:17:50.322957 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0126 11:17:50.322960 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0126 11:17:50.322963 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0126 11:17:50.322966 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0126 11:17:50.323261 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0126 11:17:50.330430 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T11:17:33Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d41fab03911c1d3b84f4c34da0d1c8e82c8f2691450b5f9663e31ce329bf04b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:33Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b69d7e45a6059b1e01f80cc4efb536069c91cae05c8912d18fd48f25c6ebf51e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b69d7e45a6059b1e01f80cc4efb536069c91cae05c8912d18fd48f25c6ebf51e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T11:17:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-26T11:17:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T11:17:30Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:17:56Z is after 2025-08-24T17:21:41Z" Jan 26 11:17:56 crc kubenswrapper[4867]: I0126 11:17:56.022175 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:17:56 crc kubenswrapper[4867]: I0126 11:17:56.022256 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:17:56 crc kubenswrapper[4867]: I0126 11:17:56.022272 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:17:56 crc kubenswrapper[4867]: I0126 11:17:56.022294 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:17:56 crc kubenswrapper[4867]: I0126 11:17:56.022306 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:17:56Z","lastTransitionTime":"2026-01-26T11:17:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:17:56 crc kubenswrapper[4867]: I0126 11:17:56.027541 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0d7b1047-65d1-4695-b5a5-95ea6a8c3b54\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://237a29b411629c3650250f62fdec7d2c5412763d6cabb2ee0fe8e5e19e320e15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ef0db1149c
70e41b7f44d00d43f883d14fcf635d6f34462dd37c8a550a15a4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://45d3dc673a8d20531ae5077a1196a368cac64f6afd15c4eb8f3910ee9bf51317\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc2ddda5781094e59bb54ddf724fa4f948a047b17b01f0185ef651d90a66f36f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:850
6ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T11:17:30Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:17:56Z is after 2025-08-24T17:21:41Z" Jan 26 11:17:56 crc kubenswrapper[4867]: I0126 11:17:56.040445 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c572224f36b5a0fc8a86b3e61aa97b96f5bd189d245b31804a11aaa75d04774\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-26T11:17:56Z is after 2025-08-24T17:21:41Z" Jan 26 11:17:56 crc kubenswrapper[4867]: I0126 11:17:56.053543 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://613d61cfb62d18a99623d3e4f89763c04f15484b764f8bb0c7a5e4457a3505d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://fa8d05ed772e4273e8962df2230bc8d45db5cfed967165716438d58c54b9e24b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:17:56Z is after 2025-08-24T17:21:41Z" Jan 26 11:17:56 crc kubenswrapper[4867]: I0126 11:17:56.066307 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-hn8xr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dc37e5d1-ba44-4a54-ac36-ab7cdef17212\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://519d5416896aca5923540078a8bd13f39a190ad4acda3d0bfb3375a5dbfe6b80\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xrjnj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T11:17:52Z\\\"}}\" for pod \"openshift-multus\"/\"multus-hn8xr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:17:56Z is after 2025-08-24T17:21:41Z" Jan 26 11:17:56 crc kubenswrapper[4867]: I0126 11:17:56.076436 4867 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:48Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:17:56Z is after 2025-08-24T17:21:41Z" Jan 26 11:17:56 crc kubenswrapper[4867]: I0126 11:17:56.087468 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:54Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:54Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8b0139861b48347f9916ca780499486eda856ef7c08cfe6c57201daf085bf61\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-26T11:17:56Z is after 2025-08-24T17:21:41Z" Jan 26 11:17:56 crc kubenswrapper[4867]: I0126 11:17:56.102723 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-9fjlf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d0cb57c7-fd32-41c2-b873-a3f017b9f1b1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:52Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:52Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:52Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhrdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://49541a51fced16e579b4cd10875fe7e660d7674508aba3ea44011642241e6556\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://49541a51fced16e579b4cd10875fe7e660d7674508aba3ea44011642241e6556\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T11:17:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T11:17:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhrdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://de0898b43ff7857dd40569f0552c8802c9bb5ecdf3cbd23f552dfb402a02044c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://de0898b43ff7857dd40569f0552c8802c9bb5ecdf3cbd23f552dfb402a02044c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T11:17:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T11:17:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhrdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodIn
itializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhrdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhrdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-
release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhrdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhrdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T11:17:52Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-9fjlf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:17:56Z is after 2025-08-24T17:21:41Z" Jan 26 11:17:56 crc kubenswrapper[4867]: I0126 11:17:56.113487 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-nxkwj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"53ae46dc-30ef-4dfd-b80e-bacd7542634f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a6b647bab472836bbf6aebd01d20d186c5a3fb95f20cc9f44ec837d93c7df617\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b5h2x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T11:17:54Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-nxkwj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:17:56Z is after 2025-08-24T17:21:41Z" Jan 26 11:17:56 crc kubenswrapper[4867]: I0126 11:17:56.124683 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:17:56 crc kubenswrapper[4867]: I0126 11:17:56.124721 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:17:56 crc kubenswrapper[4867]: I0126 11:17:56.124729 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:17:56 crc kubenswrapper[4867]: I0126 11:17:56.124744 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:17:56 crc kubenswrapper[4867]: I0126 11:17:56.124755 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:17:56Z","lastTransitionTime":"2026-01-26T11:17:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:17:56 crc kubenswrapper[4867]: I0126 11:17:56.131588 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4fbb2033-c5c9-48d1-971f-252b8982e64c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b260f2517962ffaecebf86c2b72c91486b825ca898f907378e8f8cea16d1db92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\
\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6843bd212ce4f17d2ae4d53441d56f7be28f8f49c8994458d611159101e193a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2aba77c7c6a2f5450fa60356c3eccf816726f0074d65a1d5805c4278d31157c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e941db67a8021f5eb4eb6e395c2369d052a4cde25d67eb1885768a064185d8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-
v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc9e90f1b0e6c09e4595a7e450325550012dc52c6a6fc4a95d14b37c0f939ce7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e22739a3c06bddfda493f25aca23cf47d71c8bde27a6b4bb505592b42350752\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":
true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0e22739a3c06bddfda493f25aca23cf47d71c8bde27a6b4bb505592b42350752\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T11:17:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T11:17:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://80b44caec3547caebca126742ca337ad1851edc3e9474127528a9e5184b5ec23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://80b44caec3547caebca126742ca337ad1851edc3e9474127528a9e5184b5ec23\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T11:17:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T11:17:32Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://83f7cf77acd26e7af095c4a5432efde85eef6dd5e434141cb5d6abd12770968e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://83f7cf77acd26e7af095c4a5432efde85eef6dd5e434141cb5d6abd12770968e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T11:17:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2
026-01-26T11:17:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T11:17:30Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:17:56Z is after 2025-08-24T17:21:41Z" Jan 26 11:17:56 crc kubenswrapper[4867]: I0126 11:17:56.143694 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-g6cth" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"115cad9f-057f-4e63-b408-8fa7a358a191\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ff5997c5ecb7b6a4257e45de25c7b8f3cbe39cc5efe49cf2e2baff5447a947b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4wqzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a97a794991491a7b7ac8c0d20d0fa2f4f36c8ba8
82962b5c203933431d324105\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4wqzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T11:17:52Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-g6cth\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:17:56Z is after 2025-08-24T17:21:41Z" Jan 26 11:17:56 crc kubenswrapper[4867]: I0126 11:17:56.157572 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:48Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:17:56Z is after 2025-08-24T17:21:41Z" Jan 26 11:17:56 crc kubenswrapper[4867]: I0126 11:17:56.169989 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:48Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:17:56Z is after 2025-08-24T17:21:41Z" Jan 26 11:17:56 crc kubenswrapper[4867]: I0126 11:17:56.180005 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-wmdmh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6ad862fa-4af9-49f7-a629-ebf54a83ca45\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://726513fd6aefd76b58a4c8cf08fa4588aadcdb02e8dbe3bb4f23004f11eb29dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6hvh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T11:17:51Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-wmdmh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:17:56Z is after 2025-08-24T17:21:41Z" Jan 26 11:17:56 crc kubenswrapper[4867]: I0126 11:17:56.200161 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-p8ngn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4a3be637-cf04-4c55-bf72-67fdad83cc44\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:52Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:52Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cn6f7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cn6f7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cn6f7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cn6f7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cn6f7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cn6f7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cn6f7\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cn6f7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://388ea9d9185ed0fed9cbb0da08ab9aa6fcf1d81192fe2646cd51b54245c4e34b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://388ea9d9185ed0fed9cbb0da08ab9aa6fcf1d81192fe2646cd51b54245c4e34b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T11:17:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T11:17:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cn6f7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T11:17:52Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-p8ngn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:17:56Z is after 2025-08-24T17:21:41Z" Jan 26 11:17:56 crc kubenswrapper[4867]: I0126 11:17:56.227293 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:17:56 crc kubenswrapper[4867]: I0126 11:17:56.227612 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:17:56 crc kubenswrapper[4867]: I0126 11:17:56.227702 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:17:56 crc kubenswrapper[4867]: I0126 11:17:56.227793 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:17:56 crc kubenswrapper[4867]: I0126 11:17:56.227919 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:17:56Z","lastTransitionTime":"2026-01-26T11:17:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:17:56 crc kubenswrapper[4867]: I0126 11:17:56.331023 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:17:56 crc kubenswrapper[4867]: I0126 11:17:56.331077 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:17:56 crc kubenswrapper[4867]: I0126 11:17:56.331089 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:17:56 crc kubenswrapper[4867]: I0126 11:17:56.331107 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:17:56 crc kubenswrapper[4867]: I0126 11:17:56.331119 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:17:56Z","lastTransitionTime":"2026-01-26T11:17:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:17:56 crc kubenswrapper[4867]: I0126 11:17:56.433862 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:17:56 crc kubenswrapper[4867]: I0126 11:17:56.433951 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:17:56 crc kubenswrapper[4867]: I0126 11:17:56.433979 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:17:56 crc kubenswrapper[4867]: I0126 11:17:56.434013 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:17:56 crc kubenswrapper[4867]: I0126 11:17:56.434038 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:17:56Z","lastTransitionTime":"2026-01-26T11:17:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:17:56 crc kubenswrapper[4867]: I0126 11:17:56.522432 4867 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-15 19:39:01.275850008 +0000 UTC Jan 26 11:17:56 crc kubenswrapper[4867]: I0126 11:17:56.537048 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:17:56 crc kubenswrapper[4867]: I0126 11:17:56.537131 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:17:56 crc kubenswrapper[4867]: I0126 11:17:56.537160 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:17:56 crc kubenswrapper[4867]: I0126 11:17:56.537190 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:17:56 crc kubenswrapper[4867]: I0126 11:17:56.537209 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:17:56Z","lastTransitionTime":"2026-01-26T11:17:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 11:17:56 crc kubenswrapper[4867]: I0126 11:17:56.563560 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 11:17:56 crc kubenswrapper[4867]: I0126 11:17:56.563573 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 11:17:56 crc kubenswrapper[4867]: E0126 11:17:56.563781 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 11:17:56 crc kubenswrapper[4867]: I0126 11:17:56.563602 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 11:17:56 crc kubenswrapper[4867]: E0126 11:17:56.563921 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 11:17:56 crc kubenswrapper[4867]: E0126 11:17:56.564003 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 11:17:56 crc kubenswrapper[4867]: I0126 11:17:56.640737 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:17:56 crc kubenswrapper[4867]: I0126 11:17:56.640821 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:17:56 crc kubenswrapper[4867]: I0126 11:17:56.640851 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:17:56 crc kubenswrapper[4867]: I0126 11:17:56.640887 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:17:56 crc kubenswrapper[4867]: I0126 11:17:56.640917 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:17:56Z","lastTransitionTime":"2026-01-26T11:17:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:17:56 crc kubenswrapper[4867]: I0126 11:17:56.744265 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:17:56 crc kubenswrapper[4867]: I0126 11:17:56.744339 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:17:56 crc kubenswrapper[4867]: I0126 11:17:56.744353 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:17:56 crc kubenswrapper[4867]: I0126 11:17:56.744424 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:17:56 crc kubenswrapper[4867]: I0126 11:17:56.744440 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:17:56Z","lastTransitionTime":"2026-01-26T11:17:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:17:56 crc kubenswrapper[4867]: I0126 11:17:56.776367 4867 generic.go:334] "Generic (PLEG): container finished" podID="d0cb57c7-fd32-41c2-b873-a3f017b9f1b1" containerID="f299de50c4b1a26c4ed82f6f7b17c063e00b8a9de478fa0adafb4abb50195c5f" exitCode=0 Jan 26 11:17:56 crc kubenswrapper[4867]: I0126 11:17:56.776481 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-9fjlf" event={"ID":"d0cb57c7-fd32-41c2-b873-a3f017b9f1b1","Type":"ContainerDied","Data":"f299de50c4b1a26c4ed82f6f7b17c063e00b8a9de478fa0adafb4abb50195c5f"} Jan 26 11:17:56 crc kubenswrapper[4867]: I0126 11:17:56.799201 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e36e94ce-bdbb-4b65-b38a-d591d99ec132\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://60d611d38d0a8a4fafcb8c6a4598878531015a3c9a31bee179693c997be0f5ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e7960050fb97fd034b58d207bdcc7aa794be098102ebc9b50f3b011c2079a292\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://f8e9df6d5bbbaa36f6ffe9e36239da17b2a7d5d85edebc9f108ede9bf7b7c0a2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://08f8529b91df2d7dcbf1fac6018657124b3922dfd4ad68bd7cf315594d6e7f81\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4d1617eeb2f19df288b984f761116ae902263d7ccf8406998d0e323fb7d52390\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-26T11:17:50Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0126 11:17:44.107498 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0126 11:17:44.109064 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1825304579/tls.crt::/tmp/serving-cert-1825304579/tls.key\\\\\\\"\\\\nI0126 11:17:50.303460 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0126 11:17:50.308764 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0126 11:17:50.308805 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0126 11:17:50.308906 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0126 11:17:50.308924 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0126 11:17:50.322907 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0126 11:17:50.322946 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 11:17:50.322953 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 11:17:50.322957 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0126 11:17:50.322960 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0126 11:17:50.322963 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0126 11:17:50.322966 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0126 11:17:50.323261 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0126 11:17:50.330430 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T11:17:33Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d41fab03911c1d3b84f4c34da0d1c8e82c8f2691450b5f9663e31ce329bf04b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:33Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b69d7e45a6059b1e01f80cc4efb536069c91cae05c8912d18fd48f25c6ebf51e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b69d7e45a6059b1e01f80cc4efb536069c91cae05c8912d18fd48f25c6ebf51e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T11:17:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-26T11:17:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T11:17:30Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:17:56Z is after 2025-08-24T17:21:41Z" Jan 26 11:17:56 crc kubenswrapper[4867]: I0126 11:17:56.822983 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0d7b1047-65d1-4695-b5a5-95ea6a8c3b54\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://237a29b411629c3650250f62fdec7d2c5412763d6cabb2ee0fe8e5e19e320e15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9a
d6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ef0db1149c70e41b7f44d00d43f883d14fcf635d6f34462dd37c8a550a15a4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://45d3dc673a8d20531ae5077a1196a368cac64f6afd15c4eb8f3910ee9bf51317\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true
,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc2ddda5781094e59bb54ddf724fa4f948a047b17b01f0185ef651d90a66f36f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T11:17:30Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:17:56Z is after 2025-08-24T17:21:41Z" Jan 26 11:17:56 crc kubenswrapper[4867]: I0126 11:17:56.846371 4867 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c572224f36b5a0fc8a86b3e61aa97b96f5bd189d245b31804a11aaa75d04774\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call 
webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:17:56Z is after 2025-08-24T17:21:41Z" Jan 26 11:17:56 crc kubenswrapper[4867]: I0126 11:17:56.848371 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:17:56 crc kubenswrapper[4867]: I0126 11:17:56.848422 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:17:56 crc kubenswrapper[4867]: I0126 11:17:56.848432 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:17:56 crc kubenswrapper[4867]: I0126 11:17:56.848452 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:17:56 crc kubenswrapper[4867]: I0126 11:17:56.848494 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:17:56Z","lastTransitionTime":"2026-01-26T11:17:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:17:56 crc kubenswrapper[4867]: I0126 11:17:56.863721 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://613d61cfb62d18a99623d3e4f89763c04f15484b764f8bb0c7a5e4457a3505d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa8d05ed772e4273e8962df2230bc8d45db5cfed967165716438d58c54b9e24b\\\",\\\
"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:17:56Z is after 2025-08-24T17:21:41Z" Jan 26 11:17:56 crc kubenswrapper[4867]: I0126 11:17:56.880763 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-hn8xr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dc37e5d1-ba44-4a54-ac36-ab7cdef17212\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://519d5416896aca5923540078a8bd13f39a190ad4acda3d0bfb3375a5dbfe6b80\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xrjnj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T11:17:52Z\\\"}}\" for pod \"openshift-multus\"/\"multus-hn8xr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:17:56Z is after 2025-08-24T17:21:41Z" Jan 26 11:17:56 crc kubenswrapper[4867]: I0126 11:17:56.895491 4867 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:48Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:17:56Z is after 2025-08-24T17:21:41Z" Jan 26 11:17:56 crc kubenswrapper[4867]: I0126 11:17:56.908535 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:54Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:54Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8b0139861b48347f9916ca780499486eda856ef7c08cfe6c57201daf085bf61\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-26T11:17:56Z is after 2025-08-24T17:21:41Z" Jan 26 11:17:56 crc kubenswrapper[4867]: I0126 11:17:56.926893 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-9fjlf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d0cb57c7-fd32-41c2-b873-a3f017b9f1b1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:52Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:52Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:52Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhrdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://49541a51fced16e579b4cd10875fe7e660d7674508aba3ea44011642241e6556\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://49541a51fced16e579b4cd10875fe7e660d7674508aba3ea44011642241e6556\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T11:17:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T11:17:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhrdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://de0898b43ff7857dd40569f0552c8802c9bb5ecdf3cbd23f552dfb402a02044c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://de0898b43ff7857dd40569f0552c8802c9bb5ecdf3cbd23f552dfb402a02044c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T11:17:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T11:17:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhrdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f299de50c4b1a26c4ed82f6f7b17c063e00b8a9de478fa0adafb4abb50195c5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f299de50c4b1a26c4ed82f6f7b17c063e00b8a9de478fa0adafb4abb50195c5f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T11:17:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T11:17:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhrdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhrdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\
\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhrdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhrdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T11:17:52Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-9fjlf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:17:56Z is after 2025-08-24T17:21:41Z" Jan 26 11:17:56 crc kubenswrapper[4867]: I0126 
11:17:56.942449 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-nxkwj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"53ae46dc-30ef-4dfd-b80e-bacd7542634f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a6b647bab472836bbf6aebd01d20d186c5a3fb95f20cc9f44ec837d93c7df617\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b5h2x\\\",\\\"readOnly\\\":t
rue,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T11:17:54Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-nxkwj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:17:56Z is after 2025-08-24T17:21:41Z" Jan 26 11:17:56 crc kubenswrapper[4867]: I0126 11:17:56.951037 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:17:56 crc kubenswrapper[4867]: I0126 11:17:56.951065 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:17:56 crc kubenswrapper[4867]: I0126 11:17:56.951078 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:17:56 crc kubenswrapper[4867]: I0126 11:17:56.951096 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:17:56 crc kubenswrapper[4867]: I0126 11:17:56.951108 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:17:56Z","lastTransitionTime":"2026-01-26T11:17:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:17:56 crc kubenswrapper[4867]: I0126 11:17:56.963380 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4fbb2033-c5c9-48d1-971f-252b8982e64c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b260f2517962ffaecebf86c2b72c91486b825ca898f907378e8f8cea16d1db92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\
\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6843bd212ce4f17d2ae4d53441d56f7be28f8f49c8994458d611159101e193a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2aba77c7c6a2f5450fa60356c3eccf816726f0074d65a1d5805c4278d31157c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e941db67a8021f5eb4eb6e395c2369d052a4cde25d67eb1885768a064185d8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-
v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc9e90f1b0e6c09e4595a7e450325550012dc52c6a6fc4a95d14b37c0f939ce7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e22739a3c06bddfda493f25aca23cf47d71c8bde27a6b4bb505592b42350752\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":
true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0e22739a3c06bddfda493f25aca23cf47d71c8bde27a6b4bb505592b42350752\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T11:17:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T11:17:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://80b44caec3547caebca126742ca337ad1851edc3e9474127528a9e5184b5ec23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://80b44caec3547caebca126742ca337ad1851edc3e9474127528a9e5184b5ec23\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T11:17:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T11:17:32Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://83f7cf77acd26e7af095c4a5432efde85eef6dd5e434141cb5d6abd12770968e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://83f7cf77acd26e7af095c4a5432efde85eef6dd5e434141cb5d6abd12770968e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T11:17:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2
026-01-26T11:17:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T11:17:30Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:17:56Z is after 2025-08-24T17:21:41Z" Jan 26 11:17:56 crc kubenswrapper[4867]: I0126 11:17:56.979077 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-g6cth" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"115cad9f-057f-4e63-b408-8fa7a358a191\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ff5997c5ecb7b6a4257e45de25c7b8f3cbe39cc5efe49cf2e2baff5447a947b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4wqzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a97a794991491a7b7ac8c0d20d0fa2f4f36c8ba8
82962b5c203933431d324105\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4wqzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T11:17:52Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-g6cth\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:17:56Z is after 2025-08-24T17:21:41Z" Jan 26 11:17:56 crc kubenswrapper[4867]: I0126 11:17:56.995329 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:48Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:17:56Z is after 2025-08-24T17:21:41Z" Jan 26 11:17:57 crc kubenswrapper[4867]: I0126 11:17:57.008277 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:48Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:17:57Z is after 2025-08-24T17:21:41Z" Jan 26 11:17:57 crc kubenswrapper[4867]: I0126 11:17:57.020292 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-wmdmh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6ad862fa-4af9-49f7-a629-ebf54a83ca45\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://726513fd6aefd76b58a4c8cf08fa4588aadcdb02e8dbe3bb4f23004f11eb29dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6hvh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T11:17:51Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-wmdmh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:17:57Z is after 2025-08-24T17:21:41Z" Jan 26 11:17:57 crc kubenswrapper[4867]: I0126 11:17:57.044340 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-p8ngn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4a3be637-cf04-4c55-bf72-67fdad83cc44\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:52Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:52Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cn6f7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cn6f7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cn6f7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cn6f7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cn6f7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cn6f7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cn6f7\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cn6f7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://388ea9d9185ed0fed9cbb0da08ab9aa6fcf1d81192fe2646cd51b54245c4e34b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://388ea9d9185ed0fed9cbb0da08ab9aa6fcf1d81192fe2646cd51b54245c4e34b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T11:17:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T11:17:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cn6f7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T11:17:52Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-p8ngn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:17:57Z is after 2025-08-24T17:21:41Z" Jan 26 11:17:57 crc kubenswrapper[4867]: I0126 11:17:57.054714 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:17:57 crc kubenswrapper[4867]: I0126 11:17:57.054758 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:17:57 crc kubenswrapper[4867]: I0126 11:17:57.054769 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:17:57 crc kubenswrapper[4867]: I0126 11:17:57.054786 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:17:57 crc kubenswrapper[4867]: I0126 11:17:57.054796 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:17:57Z","lastTransitionTime":"2026-01-26T11:17:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:17:57 crc kubenswrapper[4867]: I0126 11:17:57.158248 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:17:57 crc kubenswrapper[4867]: I0126 11:17:57.159050 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:17:57 crc kubenswrapper[4867]: I0126 11:17:57.159128 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:17:57 crc kubenswrapper[4867]: I0126 11:17:57.159250 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:17:57 crc kubenswrapper[4867]: I0126 11:17:57.159335 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:17:57Z","lastTransitionTime":"2026-01-26T11:17:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:17:57 crc kubenswrapper[4867]: I0126 11:17:57.262795 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:17:57 crc kubenswrapper[4867]: I0126 11:17:57.262858 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:17:57 crc kubenswrapper[4867]: I0126 11:17:57.262875 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:17:57 crc kubenswrapper[4867]: I0126 11:17:57.262903 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:17:57 crc kubenswrapper[4867]: I0126 11:17:57.262921 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:17:57Z","lastTransitionTime":"2026-01-26T11:17:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:17:57 crc kubenswrapper[4867]: I0126 11:17:57.366536 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:17:57 crc kubenswrapper[4867]: I0126 11:17:57.366594 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:17:57 crc kubenswrapper[4867]: I0126 11:17:57.366607 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:17:57 crc kubenswrapper[4867]: I0126 11:17:57.366629 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:17:57 crc kubenswrapper[4867]: I0126 11:17:57.366643 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:17:57Z","lastTransitionTime":"2026-01-26T11:17:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:17:57 crc kubenswrapper[4867]: I0126 11:17:57.470390 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:17:57 crc kubenswrapper[4867]: I0126 11:17:57.470446 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:17:57 crc kubenswrapper[4867]: I0126 11:17:57.470463 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:17:57 crc kubenswrapper[4867]: I0126 11:17:57.470485 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:17:57 crc kubenswrapper[4867]: I0126 11:17:57.470498 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:17:57Z","lastTransitionTime":"2026-01-26T11:17:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:17:57 crc kubenswrapper[4867]: I0126 11:17:57.522631 4867 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-25 01:33:04.468069872 +0000 UTC Jan 26 11:17:57 crc kubenswrapper[4867]: I0126 11:17:57.574366 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:17:57 crc kubenswrapper[4867]: I0126 11:17:57.574426 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:17:57 crc kubenswrapper[4867]: I0126 11:17:57.574439 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:17:57 crc kubenswrapper[4867]: I0126 11:17:57.574454 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:17:57 crc kubenswrapper[4867]: I0126 11:17:57.574465 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:17:57Z","lastTransitionTime":"2026-01-26T11:17:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:17:57 crc kubenswrapper[4867]: I0126 11:17:57.678074 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:17:57 crc kubenswrapper[4867]: I0126 11:17:57.678137 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:17:57 crc kubenswrapper[4867]: I0126 11:17:57.678157 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:17:57 crc kubenswrapper[4867]: I0126 11:17:57.678186 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:17:57 crc kubenswrapper[4867]: I0126 11:17:57.678204 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:17:57Z","lastTransitionTime":"2026-01-26T11:17:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:17:57 crc kubenswrapper[4867]: I0126 11:17:57.781059 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:17:57 crc kubenswrapper[4867]: I0126 11:17:57.781526 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:17:57 crc kubenswrapper[4867]: I0126 11:17:57.781542 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:17:57 crc kubenswrapper[4867]: I0126 11:17:57.781564 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:17:57 crc kubenswrapper[4867]: I0126 11:17:57.781577 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:17:57Z","lastTransitionTime":"2026-01-26T11:17:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:17:57 crc kubenswrapper[4867]: I0126 11:17:57.784064 4867 generic.go:334] "Generic (PLEG): container finished" podID="d0cb57c7-fd32-41c2-b873-a3f017b9f1b1" containerID="5bf54cdee66861e0419af5e4cabd6e8410b771df717f2e958f5eda8d14c47862" exitCode=0 Jan 26 11:17:57 crc kubenswrapper[4867]: I0126 11:17:57.784135 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-9fjlf" event={"ID":"d0cb57c7-fd32-41c2-b873-a3f017b9f1b1","Type":"ContainerDied","Data":"5bf54cdee66861e0419af5e4cabd6e8410b771df717f2e958f5eda8d14c47862"} Jan 26 11:17:57 crc kubenswrapper[4867]: I0126 11:17:57.802654 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-hn8xr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dc37e5d1-ba44-4a54-ac36-ab7cdef17212\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://519d5416896aca5923540078a8bd13f39a190ad4acda3d0bfb3375a5dbfe6b80\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba
93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xrjnj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\"
:\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T11:17:52Z\\\"}}\" for pod \"openshift-multus\"/\"multus-hn8xr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:17:57Z is after 2025-08-24T17:21:41Z" Jan 26 11:17:57 crc kubenswrapper[4867]: I0126 11:17:57.824843 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e36e94ce-bdbb-4b65-b38a-d591d99ec132\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://60d611d38d0a8a4fafcb8c6a4598878531015a3c9a31bee179693c997be0f5ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e7960050fb97fd034b58d207bdcc7aa794be098102ebc9b50f3b011c2079a292\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://f8e9df6d5bbbaa36f6ffe9e36239da17b2a7d5d85edebc9f108ede9bf7b7c0a2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://08f8529b91df2d7dcbf1fac6018657124b3922dfd4ad68bd7cf315594d6e7f81\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4d1617eeb2f19df288b984f761116ae902263d7ccf8406998d0e323fb7d52390\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-26T11:17:50Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0126 11:17:44.107498 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0126 11:17:44.109064 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1825304579/tls.crt::/tmp/serving-cert-1825304579/tls.key\\\\\\\"\\\\nI0126 11:17:50.303460 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0126 11:17:50.308764 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0126 11:17:50.308805 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0126 11:17:50.308906 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0126 11:17:50.308924 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0126 11:17:50.322907 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0126 11:17:50.322946 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 11:17:50.322953 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 11:17:50.322957 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0126 11:17:50.322960 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0126 11:17:50.322963 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0126 11:17:50.322966 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0126 11:17:50.323261 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0126 11:17:50.330430 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T11:17:33Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d41fab03911c1d3b84f4c34da0d1c8e82c8f2691450b5f9663e31ce329bf04b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:33Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b69d7e45a6059b1e01f80cc4efb536069c91cae05c8912d18fd48f25c6ebf51e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b69d7e45a6059b1e01f80cc4efb536069c91cae05c8912d18fd48f25c6ebf51e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T11:17:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-26T11:17:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T11:17:30Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:17:57Z is after 2025-08-24T17:21:41Z" Jan 26 11:17:57 crc kubenswrapper[4867]: I0126 11:17:57.843782 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0d7b1047-65d1-4695-b5a5-95ea6a8c3b54\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://237a29b411629c3650250f62fdec7d2c5412763d6cabb2ee0fe8e5e19e320e15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9a
d6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ef0db1149c70e41b7f44d00d43f883d14fcf635d6f34462dd37c8a550a15a4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://45d3dc673a8d20531ae5077a1196a368cac64f6afd15c4eb8f3910ee9bf51317\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true
,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc2ddda5781094e59bb54ddf724fa4f948a047b17b01f0185ef651d90a66f36f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T11:17:30Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:17:57Z is after 2025-08-24T17:21:41Z" Jan 26 11:17:57 crc kubenswrapper[4867]: I0126 11:17:57.858089 4867 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c572224f36b5a0fc8a86b3e61aa97b96f5bd189d245b31804a11aaa75d04774\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call 
webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:17:57Z is after 2025-08-24T17:21:41Z" Jan 26 11:17:57 crc kubenswrapper[4867]: I0126 11:17:57.874830 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://613d61cfb62d18a99623d3e4f89763c04f15484b764f8bb0c7a5e4457a3505d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\
\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa8d05ed772e4273e8962df2230bc8d45db5cfed967165716438d58c54b9e24b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:17:57Z is after 2025-08-24T17:21:41Z" Jan 26 11:17:57 crc kubenswrapper[4867]: I0126 11:17:57.883810 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:17:57 crc kubenswrapper[4867]: I0126 11:17:57.883839 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:17:57 crc kubenswrapper[4867]: I0126 11:17:57.883849 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientPID" Jan 26 11:17:57 crc kubenswrapper[4867]: I0126 11:17:57.883873 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:17:57 crc kubenswrapper[4867]: I0126 11:17:57.883885 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:17:57Z","lastTransitionTime":"2026-01-26T11:17:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 11:17:57 crc kubenswrapper[4867]: I0126 11:17:57.889550 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:48Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:17:57Z is after 2025-08-24T17:21:41Z" Jan 26 11:17:57 crc kubenswrapper[4867]: I0126 11:17:57.902162 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:54Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:54Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8b0139861b48347f9916ca780499486eda856ef7c08cfe6c57201daf085bf61\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-26T11:17:57Z is after 2025-08-24T17:21:41Z" Jan 26 11:17:57 crc kubenswrapper[4867]: I0126 11:17:57.918669 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-9fjlf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d0cb57c7-fd32-41c2-b873-a3f017b9f1b1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:52Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:52Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:52Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhrdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://49541a51fced16e579b4cd10875fe7e660d7674508aba3ea44011642241e6556\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://49541a51fced16e579b4cd10875fe7e660d7674508aba3ea44011642241e6556\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T11:17:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T11:17:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhrdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://de0898b43ff7857dd40569f0552c8802c9bb5ecdf3cbd23f552dfb402a02044c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://de0898b43ff7857dd40569f0552c8802c9bb5ecdf3cbd23f552dfb402a02044c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T11:17:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T11:17:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhrdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f299de50c4b1a26c4ed82f6f7b17c063e00b8a9de478fa0adafb4abb50195c5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f299de50c4b1a26c4ed82f6f7b17c063e00b8a9de478fa0adafb4abb50195c5f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T11:17:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T11:17:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhrdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5bf54cdee66861e0419af5e4cabd6e8410b771df717f2e958f5eda8d14c47862\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5bf54cdee66861e0419af5e4cabd6e8410b771df717f2e958f5eda8d14c47862\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T11:17:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T11:17:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhrdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhrdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhrdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T11:17:52Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-additional-cni-plugins-9fjlf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:17:57Z is after 2025-08-24T17:21:41Z" Jan 26 11:17:57 crc kubenswrapper[4867]: I0126 11:17:57.934662 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-nxkwj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"53ae46dc-30ef-4dfd-b80e-bacd7542634f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a6b647bab472836bbf6aebd01d20d186c5a3fb95f20cc9f44ec837d93c7df617\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b5h2x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T11:17:54Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-nxkwj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:17:57Z is after 2025-08-24T17:21:41Z" Jan 26 11:17:57 crc kubenswrapper[4867]: I0126 11:17:57.987367 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:17:57 crc kubenswrapper[4867]: I0126 11:17:57.987417 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:17:57 crc kubenswrapper[4867]: I0126 11:17:57.987432 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:17:57 crc kubenswrapper[4867]: I0126 11:17:57.987457 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:17:57 crc kubenswrapper[4867]: I0126 11:17:57.987473 4867 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:17:57Z","lastTransitionTime":"2026-01-26T11:17:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 11:17:58 crc kubenswrapper[4867]: I0126 11:17:58.007709 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4fbb2033-c5c9-48d1-971f-252b8982e64c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b260f2517962ffaecebf86c2b72c91486b825ca898f907378e8f8cea16d1db92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":
\\\"2026-01-26T11:17:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6843bd212ce4f17d2ae4d53441d56f7be28f8f49c8994458d611159101e193a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2aba77c7c6a2f5450fa60356c3eccf816726f0074d65a1d5805c4278d31157c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/sta
tic-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e941db67a8021f5eb4eb6e395c2369d052a4cde25d67eb1885768a064185d8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc9e90f1b0e6c09e4595a7e450325550012dc52c6a6fc4a95d14b37c0f939ce7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e22739a3c06bddfda493f25aca23cf47d71c8bde27a6b4bb505592b42350752\\\
",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0e22739a3c06bddfda493f25aca23cf47d71c8bde27a6b4bb505592b42350752\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T11:17:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T11:17:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://80b44caec3547caebca126742ca337ad1851edc3e9474127528a9e5184b5ec23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://80b44caec3547caebca126742ca337ad1851edc3e9474127528a9e5184b5ec23\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T11:17:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T11:17:32Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://83f7cf77acd26e7af095c4a5432efde85eef6dd5e434141cb5d6abd12770968e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\
"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://83f7cf77acd26e7af095c4a5432efde85eef6dd5e434141cb5d6abd12770968e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T11:17:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T11:17:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T11:17:30Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:17:58Z is after 2025-08-24T17:21:41Z" Jan 26 11:17:58 crc kubenswrapper[4867]: I0126 11:17:58.022150 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-g6cth" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"115cad9f-057f-4e63-b408-8fa7a358a191\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ff5997c5ecb7b6a4257e45de25c7b8f3cbe39cc5efe49cf2e2baff5447a947b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4wqzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a97a794991491a7b7ac8c0d20d0fa2f4f36c8ba8
82962b5c203933431d324105\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4wqzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T11:17:52Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-g6cth\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:17:58Z is after 2025-08-24T17:21:41Z" Jan 26 11:17:58 crc kubenswrapper[4867]: I0126 11:17:58.035437 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:48Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:17:58Z is after 2025-08-24T17:21:41Z" Jan 26 11:17:58 crc kubenswrapper[4867]: I0126 11:17:58.047426 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 11:17:58 crc kubenswrapper[4867]: I0126 11:17:58.047613 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 11:17:58 crc kubenswrapper[4867]: I0126 11:17:58.047663 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: 
\"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 11:17:58 crc kubenswrapper[4867]: E0126 11:17:58.047698 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 11:18:06.047660515 +0000 UTC m=+35.746235425 (durationBeforeRetry 8s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 11:17:58 crc kubenswrapper[4867]: E0126 11:17:58.047798 4867 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 26 11:17:58 crc kubenswrapper[4867]: E0126 11:17:58.047821 4867 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 26 11:17:58 crc kubenswrapper[4867]: E0126 11:17:58.047862 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-26 11:18:06.0478432 +0000 UTC m=+35.746418110 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 26 11:17:58 crc kubenswrapper[4867]: E0126 11:17:58.047896 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-26 11:18:06.047888221 +0000 UTC m=+35.746463121 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 26 11:17:58 crc kubenswrapper[4867]: I0126 11:17:58.047940 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:48Z\\\",\\\"message\\\":\\\"containers with 
unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:17:58Z is after 2025-08-24T17:21:41Z" Jan 26 11:17:58 crc kubenswrapper[4867]: I0126 11:17:58.059399 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-wmdmh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6ad862fa-4af9-49f7-a629-ebf54a83ca45\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://726513fd6aefd76b58a4c8cf08fa4588aadcdb02e8dbe3bb4f23004f11eb29dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6hvh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T11:17:51Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-wmdmh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:17:58Z is after 2025-08-24T17:21:41Z" Jan 26 11:17:58 crc kubenswrapper[4867]: I0126 11:17:58.078707 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-p8ngn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4a3be637-cf04-4c55-bf72-67fdad83cc44\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:52Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:52Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cn6f7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cn6f7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cn6f7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cn6f7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cn6f7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cn6f7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cn6f7\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cn6f7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://388ea9d9185ed0fed9cbb0da08ab9aa6fcf1d81192fe2646cd51b54245c4e34b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://388ea9d9185ed0fed9cbb0da08ab9aa6fcf1d81192fe2646cd51b54245c4e34b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T11:17:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T11:17:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cn6f7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T11:17:52Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-p8ngn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:17:58Z is after 2025-08-24T17:21:41Z" Jan 26 11:17:58 crc kubenswrapper[4867]: I0126 11:17:58.091596 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:17:58 crc kubenswrapper[4867]: I0126 11:17:58.091666 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:17:58 crc kubenswrapper[4867]: I0126 11:17:58.091681 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:17:58 crc kubenswrapper[4867]: I0126 11:17:58.091704 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:17:58 crc kubenswrapper[4867]: I0126 11:17:58.091718 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:17:58Z","lastTransitionTime":"2026-01-26T11:17:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:17:58 crc kubenswrapper[4867]: I0126 11:17:58.149318 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 11:17:58 crc kubenswrapper[4867]: E0126 11:17:58.149552 4867 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 26 11:17:58 crc kubenswrapper[4867]: E0126 11:17:58.149575 4867 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 26 11:17:58 crc kubenswrapper[4867]: E0126 11:17:58.149589 4867 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 11:17:58 crc kubenswrapper[4867]: E0126 11:17:58.149658 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-26 11:18:06.149638073 +0000 UTC m=+35.848212983 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 11:17:58 crc kubenswrapper[4867]: I0126 11:17:58.195132 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:17:58 crc kubenswrapper[4867]: I0126 11:17:58.195176 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:17:58 crc kubenswrapper[4867]: I0126 11:17:58.195189 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:17:58 crc kubenswrapper[4867]: I0126 11:17:58.195212 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:17:58 crc kubenswrapper[4867]: I0126 11:17:58.195279 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:17:58Z","lastTransitionTime":"2026-01-26T11:17:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:17:58 crc kubenswrapper[4867]: I0126 11:17:58.251666 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 11:17:58 crc kubenswrapper[4867]: E0126 11:17:58.252146 4867 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 26 11:17:58 crc kubenswrapper[4867]: E0126 11:17:58.252179 4867 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 26 11:17:58 crc kubenswrapper[4867]: E0126 11:17:58.252197 4867 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 11:17:58 crc kubenswrapper[4867]: E0126 11:17:58.252301 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-26 11:18:06.252280218 +0000 UTC m=+35.950855138 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 11:17:58 crc kubenswrapper[4867]: I0126 11:17:58.298415 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:17:58 crc kubenswrapper[4867]: I0126 11:17:58.298473 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:17:58 crc kubenswrapper[4867]: I0126 11:17:58.298487 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:17:58 crc kubenswrapper[4867]: I0126 11:17:58.298509 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:17:58 crc kubenswrapper[4867]: I0126 11:17:58.298523 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:17:58Z","lastTransitionTime":"2026-01-26T11:17:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:17:58 crc kubenswrapper[4867]: I0126 11:17:58.402321 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:17:58 crc kubenswrapper[4867]: I0126 11:17:58.402368 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:17:58 crc kubenswrapper[4867]: I0126 11:17:58.402386 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:17:58 crc kubenswrapper[4867]: I0126 11:17:58.402409 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:17:58 crc kubenswrapper[4867]: I0126 11:17:58.402422 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:17:58Z","lastTransitionTime":"2026-01-26T11:17:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:17:58 crc kubenswrapper[4867]: I0126 11:17:58.505234 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:17:58 crc kubenswrapper[4867]: I0126 11:17:58.505285 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:17:58 crc kubenswrapper[4867]: I0126 11:17:58.505301 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:17:58 crc kubenswrapper[4867]: I0126 11:17:58.505321 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:17:58 crc kubenswrapper[4867]: I0126 11:17:58.505333 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:17:58Z","lastTransitionTime":"2026-01-26T11:17:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 11:17:58 crc kubenswrapper[4867]: I0126 11:17:58.522859 4867 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-11 20:34:05.852887857 +0000 UTC Jan 26 11:17:58 crc kubenswrapper[4867]: I0126 11:17:58.566018 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 11:17:58 crc kubenswrapper[4867]: E0126 11:17:58.566174 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 11:17:58 crc kubenswrapper[4867]: I0126 11:17:58.566653 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 11:17:58 crc kubenswrapper[4867]: E0126 11:17:58.566726 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 11:17:58 crc kubenswrapper[4867]: I0126 11:17:58.566833 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 11:17:58 crc kubenswrapper[4867]: E0126 11:17:58.566887 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 11:17:58 crc kubenswrapper[4867]: I0126 11:17:58.607873 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:17:58 crc kubenswrapper[4867]: I0126 11:17:58.607911 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:17:58 crc kubenswrapper[4867]: I0126 11:17:58.607920 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:17:58 crc kubenswrapper[4867]: I0126 11:17:58.607935 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:17:58 crc kubenswrapper[4867]: I0126 11:17:58.607945 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:17:58Z","lastTransitionTime":"2026-01-26T11:17:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:17:58 crc kubenswrapper[4867]: I0126 11:17:58.712388 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:17:58 crc kubenswrapper[4867]: I0126 11:17:58.712447 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:17:58 crc kubenswrapper[4867]: I0126 11:17:58.712464 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:17:58 crc kubenswrapper[4867]: I0126 11:17:58.712487 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:17:58 crc kubenswrapper[4867]: I0126 11:17:58.712503 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:17:58Z","lastTransitionTime":"2026-01-26T11:17:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:17:58 crc kubenswrapper[4867]: I0126 11:17:58.789583 4867 generic.go:334] "Generic (PLEG): container finished" podID="d0cb57c7-fd32-41c2-b873-a3f017b9f1b1" containerID="3d802b7e0dce47eace09ec80bf6299fbe57b588891ebab7f34de835f6a7314ac" exitCode=0 Jan 26 11:17:58 crc kubenswrapper[4867]: I0126 11:17:58.789623 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-9fjlf" event={"ID":"d0cb57c7-fd32-41c2-b873-a3f017b9f1b1","Type":"ContainerDied","Data":"3d802b7e0dce47eace09ec80bf6299fbe57b588891ebab7f34de835f6a7314ac"} Jan 26 11:17:58 crc kubenswrapper[4867]: I0126 11:17:58.795475 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-p8ngn" event={"ID":"4a3be637-cf04-4c55-bf72-67fdad83cc44","Type":"ContainerStarted","Data":"b58f62893fb0fc0bd355ef42fb5f15830912cfd3a03b5266cb3c3171dbb8bcb0"} Jan 26 11:17:58 crc kubenswrapper[4867]: I0126 11:17:58.809541 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:48Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:17:58Z is after 2025-08-24T17:21:41Z" Jan 26 11:17:58 crc kubenswrapper[4867]: I0126 11:17:58.815434 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:17:58 crc kubenswrapper[4867]: I0126 11:17:58.815495 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:17:58 crc kubenswrapper[4867]: I0126 11:17:58.815509 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:17:58 crc 
kubenswrapper[4867]: I0126 11:17:58.815537 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:17:58 crc kubenswrapper[4867]: I0126 11:17:58.815552 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:17:58Z","lastTransitionTime":"2026-01-26T11:17:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 11:17:58 crc kubenswrapper[4867]: I0126 11:17:58.828258 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:48Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:17:58Z is after 2025-08-24T17:21:41Z" Jan 26 11:17:58 crc kubenswrapper[4867]: I0126 11:17:58.842566 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-wmdmh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6ad862fa-4af9-49f7-a629-ebf54a83ca45\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://726513fd6aefd76b58a4c8cf08fa4588aadcdb02e8dbe3bb4f23004f11eb29dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6hvh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T11:17:51Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-wmdmh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:17:58Z is after 2025-08-24T17:21:41Z" Jan 26 11:17:58 crc kubenswrapper[4867]: I0126 11:17:58.867323 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-p8ngn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4a3be637-cf04-4c55-bf72-67fdad83cc44\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:52Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:52Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cn6f7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cn6f7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cn6f7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cn6f7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cn6f7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cn6f7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cn6f7\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cn6f7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://388ea9d9185ed0fed9cbb0da08ab9aa6fcf1d81192fe2646cd51b54245c4e34b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://388ea9d9185ed0fed9cbb0da08ab9aa6fcf1d81192fe2646cd51b54245c4e34b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T11:17:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T11:17:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cn6f7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T11:17:52Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-p8ngn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:17:58Z is after 2025-08-24T17:21:41Z" Jan 26 11:17:58 crc kubenswrapper[4867]: I0126 11:17:58.881788 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-hn8xr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dc37e5d1-ba44-4a54-ac36-ab7cdef17212\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://519d5416896aca5923540078a8bd13f39a190ad4acda3d0bfb3375a5dbfe6b80\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/o
cp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xrjnj\\\",\\\"rea
dOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T11:17:52Z\\\"}}\" for pod \"openshift-multus\"/\"multus-hn8xr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:17:58Z is after 2025-08-24T17:21:41Z" Jan 26 11:17:58 crc kubenswrapper[4867]: I0126 11:17:58.897467 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e36e94ce-bdbb-4b65-b38a-d591d99ec132\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://60d611d38d0a8a4fafcb8c6a4598878531015a3c9a31bee179693c997be0f5ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e7960050fb97fd034b58d207bdcc7aa794be098102ebc9b50f3b011c2079a292\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://f8e9df6d5bbbaa36f6ffe9e36239da17b2a7d5d85edebc9f108ede9bf7b7c0a2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://08f8529b91df2d7dcbf1fac6018657124b3922dfd4ad68bd7cf315594d6e7f81\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4d1617eeb2f19df288b984f761116ae902263d7ccf8406998d0e323fb7d52390\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-26T11:17:50Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0126 11:17:44.107498 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0126 11:17:44.109064 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1825304579/tls.crt::/tmp/serving-cert-1825304579/tls.key\\\\\\\"\\\\nI0126 11:17:50.303460 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0126 11:17:50.308764 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0126 11:17:50.308805 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0126 11:17:50.308906 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0126 11:17:50.308924 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0126 11:17:50.322907 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0126 11:17:50.322946 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 11:17:50.322953 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 11:17:50.322957 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0126 11:17:50.322960 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0126 11:17:50.322963 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0126 11:17:50.322966 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0126 11:17:50.323261 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0126 11:17:50.330430 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T11:17:33Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d41fab03911c1d3b84f4c34da0d1c8e82c8f2691450b5f9663e31ce329bf04b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:33Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b69d7e45a6059b1e01f80cc4efb536069c91cae05c8912d18fd48f25c6ebf51e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b69d7e45a6059b1e01f80cc4efb536069c91cae05c8912d18fd48f25c6ebf51e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T11:17:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-26T11:17:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T11:17:30Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:17:58Z is after 2025-08-24T17:21:41Z" Jan 26 11:17:58 crc kubenswrapper[4867]: I0126 11:17:58.911774 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0d7b1047-65d1-4695-b5a5-95ea6a8c3b54\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://237a29b411629c3650250f62fdec7d2c5412763d6cabb2ee0fe8e5e19e320e15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9a
d6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ef0db1149c70e41b7f44d00d43f883d14fcf635d6f34462dd37c8a550a15a4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://45d3dc673a8d20531ae5077a1196a368cac64f6afd15c4eb8f3910ee9bf51317\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true
,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc2ddda5781094e59bb54ddf724fa4f948a047b17b01f0185ef651d90a66f36f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T11:17:30Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:17:58Z is after 2025-08-24T17:21:41Z" Jan 26 11:17:58 crc kubenswrapper[4867]: I0126 11:17:58.927214 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 26 11:17:58 crc kubenswrapper[4867]: I0126 11:17:58.927290 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:17:58 crc kubenswrapper[4867]: I0126 11:17:58.927308 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:17:58 crc kubenswrapper[4867]: I0126 11:17:58.927331 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:17:58 crc kubenswrapper[4867]: I0126 11:17:58.927347 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:17:58Z","lastTransitionTime":"2026-01-26T11:17:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:17:58 crc kubenswrapper[4867]: I0126 11:17:58.927801 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c572224f36b5a0fc8a86b3e61aa97b96f5bd189d245b31804a11aaa75d04774\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:17:58Z is after 2025-08-24T17:21:41Z" Jan 26 11:17:58 crc kubenswrapper[4867]: I0126 11:17:58.945605 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://613d61cfb62d18a99623d3e4f89763c04f15484b764f8bb0c7a5e4457a3505d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-id
entity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa8d05ed772e4273e8962df2230bc8d45db5cfed967165716438d58c54b9e24b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:17:58Z is after 2025-08-24T17:21:41Z" Jan 26 11:17:58 crc kubenswrapper[4867]: I0126 11:17:58.960347 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:48Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:17:58Z is after 2025-08-24T17:21:41Z" Jan 26 11:17:58 crc kubenswrapper[4867]: I0126 11:17:58.976560 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:54Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:54Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8b0139861b48347f9916ca780499486eda856ef7c08cfe6c57201daf085bf61\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-26T11:17:58Z is after 2025-08-24T17:21:41Z" Jan 26 11:17:58 crc kubenswrapper[4867]: I0126 11:17:58.992561 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-9fjlf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d0cb57c7-fd32-41c2-b873-a3f017b9f1b1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:52Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:52Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:52Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhrdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://49541a51fced16e579b4cd10875fe7e660d7674508aba3ea44011642241e6556\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://49541a51fced16e579b4cd10875fe7e660d7674508aba3ea44011642241e6556\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T11:17:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T11:17:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhrdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://de0898b43ff7857dd40569f0552c8802c9bb5ecdf3cbd23f552dfb402a02044c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://de0898b43ff7857dd40569f0552c8802c9bb5ecdf3cbd23f552dfb402a02044c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T11:17:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T11:17:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhrdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f299de50c4b1a26c4ed82f6f7b17c063e00b8a9de478fa0adafb4abb50195c5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f299de50c4b1a26c4ed82f6f7b17c063e00b8a9de478fa0adafb4abb50195c5f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T11:17:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T11:17:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhrdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5bf54cdee66861e0419af5e4cabd6e8410b771df717f2e958f5eda8d14c47862\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5bf54cdee66861e0419af5e4cabd6e8410b771df717f2e958f5eda8d14c47862\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T11:17:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T11:17:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhrdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d802b7e0dce47eace09ec80bf6299fbe57b588891ebab7f34de835f6a7314ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3d802b7e0dce47eace09ec80bf6299fbe57b588891ebab7f34de835f6a7314ac\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T11:17:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T11:17:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhrdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"c
nibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhrdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T11:17:52Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-9fjlf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:17:58Z is after 2025-08-24T17:21:41Z" Jan 26 11:17:59 crc kubenswrapper[4867]: I0126 11:17:59.006136 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-nxkwj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"53ae46dc-30ef-4dfd-b80e-bacd7542634f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a6b647bab472836bbf6aebd01d20d186c5a3fb95f20cc9f44ec837d93c7df617\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b5h2x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T11:17:54Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-nxkwj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:17:59Z is after 2025-08-24T17:21:41Z" Jan 26 11:17:59 crc kubenswrapper[4867]: I0126 11:17:59.027201 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4fbb2033-c5c9-48d1-971f-252b8982e64c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b260f2517962ffaecebf86c2b72c91486b825ca898f907378e8f8cea16d1db92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastSt
ate\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6843bd212ce4f17d2ae4d53441d56f7be28f8f49c8994458d611159101e193a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2aba77c7c6a2f5450fa60356c3eccf816726f0074d65a1d5805c4278d31157c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11
:17:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e941db67a8021f5eb4eb6e395c2369d052a4cde25d67eb1885768a064185d8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc9e90f1b0e6c09e4595a7e450325550012dc52c6a6fc4a95d14b37c0f939ce7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"19
2.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e22739a3c06bddfda493f25aca23cf47d71c8bde27a6b4bb505592b42350752\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0e22739a3c06bddfda493f25aca23cf47d71c8bde27a6b4bb505592b42350752\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T11:17:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T11:17:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://80b44caec3547caebca126742ca337ad1851edc3e9474127528a9e5184b5ec23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://80b44caec3547caebca126742ca337ad1851edc3e9474127528a9e5184b5ec23\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T11:17:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T11:17:32Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://83f7cf77acd26e7af095c4a5432efde85eef6dd5e434141cb5d6abd12770968e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/op
enshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://83f7cf77acd26e7af095c4a5432efde85eef6dd5e434141cb5d6abd12770968e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T11:17:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T11:17:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T11:17:30Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:17:59Z is after 2025-08-24T17:21:41Z" Jan 26 11:17:59 crc kubenswrapper[4867]: I0126 11:17:59.030160 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:17:59 crc kubenswrapper[4867]: I0126 11:17:59.030203 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:17:59 crc kubenswrapper[4867]: I0126 11:17:59.030214 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:17:59 crc kubenswrapper[4867]: I0126 11:17:59.030252 4867 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeNotReady" Jan 26 11:17:59 crc kubenswrapper[4867]: I0126 11:17:59.030265 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:17:59Z","lastTransitionTime":"2026-01-26T11:17:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 11:17:59 crc kubenswrapper[4867]: I0126 11:17:59.044290 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-g6cth" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"115cad9f-057f-4e63-b408-8fa7a358a191\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ff5997c5ecb7b6a4257e45de25c7b8f3cbe39cc5efe49cf2e2baff5447a947b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshi
ft-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4wqzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a97a794991491a7b7ac8c0d20d0fa2f4f36c8ba882962b5c203933431d324105\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4wqzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T11:17:52Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-g6cth\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call 
webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:17:59Z is after 2025-08-24T17:21:41Z" Jan 26 11:17:59 crc kubenswrapper[4867]: I0126 11:17:59.137802 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:17:59 crc kubenswrapper[4867]: I0126 11:17:59.137858 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:17:59 crc kubenswrapper[4867]: I0126 11:17:59.137871 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:17:59 crc kubenswrapper[4867]: I0126 11:17:59.137896 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:17:59 crc kubenswrapper[4867]: I0126 11:17:59.137911 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:17:59Z","lastTransitionTime":"2026-01-26T11:17:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:17:59 crc kubenswrapper[4867]: I0126 11:17:59.241012 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:17:59 crc kubenswrapper[4867]: I0126 11:17:59.241067 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:17:59 crc kubenswrapper[4867]: I0126 11:17:59.241080 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:17:59 crc kubenswrapper[4867]: I0126 11:17:59.241104 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:17:59 crc kubenswrapper[4867]: I0126 11:17:59.241120 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:17:59Z","lastTransitionTime":"2026-01-26T11:17:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:17:59 crc kubenswrapper[4867]: I0126 11:17:59.345250 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:17:59 crc kubenswrapper[4867]: I0126 11:17:59.345356 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:17:59 crc kubenswrapper[4867]: I0126 11:17:59.345371 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:17:59 crc kubenswrapper[4867]: I0126 11:17:59.345394 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:17:59 crc kubenswrapper[4867]: I0126 11:17:59.345414 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:17:59Z","lastTransitionTime":"2026-01-26T11:17:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:17:59 crc kubenswrapper[4867]: I0126 11:17:59.449194 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:17:59 crc kubenswrapper[4867]: I0126 11:17:59.449305 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:17:59 crc kubenswrapper[4867]: I0126 11:17:59.449321 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:17:59 crc kubenswrapper[4867]: I0126 11:17:59.449344 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:17:59 crc kubenswrapper[4867]: I0126 11:17:59.449360 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:17:59Z","lastTransitionTime":"2026-01-26T11:17:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:17:59 crc kubenswrapper[4867]: I0126 11:17:59.523676 4867 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-11 13:48:23.658195024 +0000 UTC Jan 26 11:17:59 crc kubenswrapper[4867]: I0126 11:17:59.552075 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:17:59 crc kubenswrapper[4867]: I0126 11:17:59.552135 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:17:59 crc kubenswrapper[4867]: I0126 11:17:59.552147 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:17:59 crc kubenswrapper[4867]: I0126 11:17:59.552164 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:17:59 crc kubenswrapper[4867]: I0126 11:17:59.552175 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:17:59Z","lastTransitionTime":"2026-01-26T11:17:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:17:59 crc kubenswrapper[4867]: I0126 11:17:59.654696 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:17:59 crc kubenswrapper[4867]: I0126 11:17:59.654790 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:17:59 crc kubenswrapper[4867]: I0126 11:17:59.654816 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:17:59 crc kubenswrapper[4867]: I0126 11:17:59.654853 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:17:59 crc kubenswrapper[4867]: I0126 11:17:59.654919 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:17:59Z","lastTransitionTime":"2026-01-26T11:17:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:17:59 crc kubenswrapper[4867]: I0126 11:17:59.758675 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:17:59 crc kubenswrapper[4867]: I0126 11:17:59.758743 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:17:59 crc kubenswrapper[4867]: I0126 11:17:59.758763 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:17:59 crc kubenswrapper[4867]: I0126 11:17:59.758786 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:17:59 crc kubenswrapper[4867]: I0126 11:17:59.758806 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:17:59Z","lastTransitionTime":"2026-01-26T11:17:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:17:59 crc kubenswrapper[4867]: I0126 11:17:59.863174 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:17:59 crc kubenswrapper[4867]: I0126 11:17:59.863274 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:17:59 crc kubenswrapper[4867]: I0126 11:17:59.863293 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:17:59 crc kubenswrapper[4867]: I0126 11:17:59.863317 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:17:59 crc kubenswrapper[4867]: I0126 11:17:59.863341 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:17:59Z","lastTransitionTime":"2026-01-26T11:17:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:17:59 crc kubenswrapper[4867]: I0126 11:17:59.967920 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:17:59 crc kubenswrapper[4867]: I0126 11:17:59.967996 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:17:59 crc kubenswrapper[4867]: I0126 11:17:59.968015 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:17:59 crc kubenswrapper[4867]: I0126 11:17:59.968043 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:17:59 crc kubenswrapper[4867]: I0126 11:17:59.968062 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:17:59Z","lastTransitionTime":"2026-01-26T11:17:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:18:00 crc kubenswrapper[4867]: I0126 11:18:00.072157 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:18:00 crc kubenswrapper[4867]: I0126 11:18:00.072266 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:18:00 crc kubenswrapper[4867]: I0126 11:18:00.072296 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:18:00 crc kubenswrapper[4867]: I0126 11:18:00.072328 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:18:00 crc kubenswrapper[4867]: I0126 11:18:00.072354 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:18:00Z","lastTransitionTime":"2026-01-26T11:18:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:18:00 crc kubenswrapper[4867]: I0126 11:18:00.176357 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:18:00 crc kubenswrapper[4867]: I0126 11:18:00.176430 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:18:00 crc kubenswrapper[4867]: I0126 11:18:00.176453 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:18:00 crc kubenswrapper[4867]: I0126 11:18:00.176487 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:18:00 crc kubenswrapper[4867]: I0126 11:18:00.176509 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:18:00Z","lastTransitionTime":"2026-01-26T11:18:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:18:00 crc kubenswrapper[4867]: I0126 11:18:00.280568 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:18:00 crc kubenswrapper[4867]: I0126 11:18:00.280644 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:18:00 crc kubenswrapper[4867]: I0126 11:18:00.280667 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:18:00 crc kubenswrapper[4867]: I0126 11:18:00.280701 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:18:00 crc kubenswrapper[4867]: I0126 11:18:00.280724 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:18:00Z","lastTransitionTime":"2026-01-26T11:18:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:18:00 crc kubenswrapper[4867]: I0126 11:18:00.384821 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:18:00 crc kubenswrapper[4867]: I0126 11:18:00.384897 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:18:00 crc kubenswrapper[4867]: I0126 11:18:00.384922 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:18:00 crc kubenswrapper[4867]: I0126 11:18:00.384957 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:18:00 crc kubenswrapper[4867]: I0126 11:18:00.384984 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:18:00Z","lastTransitionTime":"2026-01-26T11:18:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:18:00 crc kubenswrapper[4867]: I0126 11:18:00.488992 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:18:00 crc kubenswrapper[4867]: I0126 11:18:00.489081 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:18:00 crc kubenswrapper[4867]: I0126 11:18:00.489106 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:18:00 crc kubenswrapper[4867]: I0126 11:18:00.489146 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:18:00 crc kubenswrapper[4867]: I0126 11:18:00.489173 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:18:00Z","lastTransitionTime":"2026-01-26T11:18:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 11:18:00 crc kubenswrapper[4867]: I0126 11:18:00.524502 4867 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-12 04:04:54.579740408 +0000 UTC Jan 26 11:18:00 crc kubenswrapper[4867]: I0126 11:18:00.563532 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 11:18:00 crc kubenswrapper[4867]: I0126 11:18:00.563558 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 11:18:00 crc kubenswrapper[4867]: E0126 11:18:00.563812 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 11:18:00 crc kubenswrapper[4867]: E0126 11:18:00.564273 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 11:18:00 crc kubenswrapper[4867]: I0126 11:18:00.564524 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 11:18:00 crc kubenswrapper[4867]: E0126 11:18:00.564825 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 11:18:00 crc kubenswrapper[4867]: I0126 11:18:00.592074 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:18:00 crc kubenswrapper[4867]: I0126 11:18:00.592126 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:18:00 crc kubenswrapper[4867]: I0126 11:18:00.592141 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:18:00 crc kubenswrapper[4867]: I0126 11:18:00.592164 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:18:00 crc kubenswrapper[4867]: I0126 11:18:00.592183 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:18:00Z","lastTransitionTime":"2026-01-26T11:18:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:18:00 crc kubenswrapper[4867]: I0126 11:18:00.600027 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4fbb2033-c5c9-48d1-971f-252b8982e64c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b260f2517962ffaecebf86c2b72c91486b825ca898f907378e8f8cea16d1db92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\
\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6843bd212ce4f17d2ae4d53441d56f7be28f8f49c8994458d611159101e193a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2aba77c7c6a2f5450fa60356c3eccf816726f0074d65a1d5805c4278d31157c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e941db67a8021f5eb4eb6e395c2369d052a4cde25d67eb1885768a064185d8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-
v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc9e90f1b0e6c09e4595a7e450325550012dc52c6a6fc4a95d14b37c0f939ce7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e22739a3c06bddfda493f25aca23cf47d71c8bde27a6b4bb505592b42350752\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":
true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0e22739a3c06bddfda493f25aca23cf47d71c8bde27a6b4bb505592b42350752\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T11:17:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T11:17:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://80b44caec3547caebca126742ca337ad1851edc3e9474127528a9e5184b5ec23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://80b44caec3547caebca126742ca337ad1851edc3e9474127528a9e5184b5ec23\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T11:17:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T11:17:32Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://83f7cf77acd26e7af095c4a5432efde85eef6dd5e434141cb5d6abd12770968e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://83f7cf77acd26e7af095c4a5432efde85eef6dd5e434141cb5d6abd12770968e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T11:17:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2
026-01-26T11:17:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T11:17:30Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:18:00Z is after 2025-08-24T17:21:41Z" Jan 26 11:18:00 crc kubenswrapper[4867]: I0126 11:18:00.616520 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-g6cth" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"115cad9f-057f-4e63-b408-8fa7a358a191\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ff5997c5ecb7b6a4257e45de25c7b8f3cbe39cc5efe49cf2e2baff5447a947b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4wqzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a97a794991491a7b7ac8c0d20d0fa2f4f36c8ba8
82962b5c203933431d324105\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4wqzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T11:17:52Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-g6cth\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:18:00Z is after 2025-08-24T17:21:41Z" Jan 26 11:18:00 crc kubenswrapper[4867]: I0126 11:18:00.639853 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:48Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:18:00Z is after 2025-08-24T17:21:41Z" Jan 26 11:18:00 crc kubenswrapper[4867]: I0126 11:18:00.660661 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:48Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:18:00Z is after 2025-08-24T17:21:41Z" Jan 26 11:18:00 crc kubenswrapper[4867]: I0126 11:18:00.677093 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-wmdmh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6ad862fa-4af9-49f7-a629-ebf54a83ca45\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://726513fd6aefd76b58a4c8cf08fa4588aadcdb02e8dbe3bb4f23004f11eb29dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6hvh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T11:17:51Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-wmdmh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:18:00Z is after 2025-08-24T17:21:41Z" Jan 26 11:18:00 crc kubenswrapper[4867]: I0126 11:18:00.694069 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:18:00 crc kubenswrapper[4867]: I0126 11:18:00.694113 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:18:00 crc kubenswrapper[4867]: I0126 11:18:00.694125 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:18:00 crc kubenswrapper[4867]: I0126 11:18:00.694149 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:18:00 crc kubenswrapper[4867]: I0126 11:18:00.694166 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:18:00Z","lastTransitionTime":"2026-01-26T11:18:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:18:00 crc kubenswrapper[4867]: I0126 11:18:00.705563 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-p8ngn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4a3be637-cf04-4c55-bf72-67fdad83cc44\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:52Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:52Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cn6f7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cn6f7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cn6f7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cn6f7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cn6f7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cn6f7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cn6f7\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cn6f7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://388ea9d9185ed0fed9cbb0da08ab9aa6fcf1d81192fe2646cd51b54245c4e34b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://388ea9d9185ed0fed9cbb0da08ab9aa6fcf1d81192fe2646cd51b54245c4e34b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T11:17:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T11:17:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cn6f7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T11:17:52Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-p8ngn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:18:00Z is after 2025-08-24T17:21:41Z" Jan 26 11:18:00 crc kubenswrapper[4867]: I0126 11:18:00.722139 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e36e94ce-bdbb-4b65-b38a-d591d99ec132\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://60d611d38d0a8a4fafcb8c6a4598878531015a3c9a31bee179693c997be0f5ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e7960050fb97fd034b58d207bdcc7aa794be098102ebc9b50f3b011c2079a292\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://f8e9df6d5bbbaa36f6ffe9e36239da17b2a7d5d85edebc9f108ede9bf7b7c0a2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://08f8529b91df2d7dcbf1fac6018657124b3922dfd4ad68bd7cf315594d6e7f81\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4d1617eeb2f19df288b984f761116ae902263d7ccf8406998d0e323fb7d52390\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-26T11:17:50Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0126 11:17:44.107498 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0126 11:17:44.109064 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1825304579/tls.crt::/tmp/serving-cert-1825304579/tls.key\\\\\\\"\\\\nI0126 11:17:50.303460 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0126 11:17:50.308764 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0126 11:17:50.308805 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0126 11:17:50.308906 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0126 11:17:50.308924 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0126 11:17:50.322907 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0126 11:17:50.322946 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 11:17:50.322953 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 11:17:50.322957 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0126 11:17:50.322960 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0126 11:17:50.322963 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0126 11:17:50.322966 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0126 11:17:50.323261 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0126 11:17:50.330430 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T11:17:33Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d41fab03911c1d3b84f4c34da0d1c8e82c8f2691450b5f9663e31ce329bf04b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:33Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b69d7e45a6059b1e01f80cc4efb536069c91cae05c8912d18fd48f25c6ebf51e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b69d7e45a6059b1e01f80cc4efb536069c91cae05c8912d18fd48f25c6ebf51e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T11:17:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-26T11:17:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T11:17:30Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:18:00Z is after 2025-08-24T17:21:41Z" Jan 26 11:18:00 crc kubenswrapper[4867]: I0126 11:18:00.736996 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0d7b1047-65d1-4695-b5a5-95ea6a8c3b54\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://237a29b411629c3650250f62fdec7d2c5412763d6cabb2ee0fe8e5e19e320e15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9a
d6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ef0db1149c70e41b7f44d00d43f883d14fcf635d6f34462dd37c8a550a15a4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://45d3dc673a8d20531ae5077a1196a368cac64f6afd15c4eb8f3910ee9bf51317\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true
,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc2ddda5781094e59bb54ddf724fa4f948a047b17b01f0185ef651d90a66f36f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T11:17:30Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:18:00Z is after 2025-08-24T17:21:41Z" Jan 26 11:18:00 crc kubenswrapper[4867]: I0126 11:18:00.749804 4867 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c572224f36b5a0fc8a86b3e61aa97b96f5bd189d245b31804a11aaa75d04774\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call 
webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:18:00Z is after 2025-08-24T17:21:41Z" Jan 26 11:18:00 crc kubenswrapper[4867]: I0126 11:18:00.765182 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://613d61cfb62d18a99623d3e4f89763c04f15484b764f8bb0c7a5e4457a3505d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\
\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa8d05ed772e4273e8962df2230bc8d45db5cfed967165716438d58c54b9e24b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:18:00Z is after 2025-08-24T17:21:41Z" Jan 26 11:18:00 crc kubenswrapper[4867]: I0126 11:18:00.778456 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-hn8xr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dc37e5d1-ba44-4a54-ac36-ab7cdef17212\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://519d5416896aca5923540078a8bd13f39a190ad4acda3d0bfb3375a5dbfe6b80\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xrjnj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T11:17:52Z\\\"}}\" for pod \"openshift-multus\"/\"multus-hn8xr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:18:00Z is after 2025-08-24T17:21:41Z" Jan 26 11:18:00 crc kubenswrapper[4867]: I0126 11:18:00.790491 4867 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:48Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:18:00Z is after 2025-08-24T17:21:41Z" Jan 26 11:18:00 crc kubenswrapper[4867]: I0126 11:18:00.796736 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:18:00 crc kubenswrapper[4867]: I0126 11:18:00.796784 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:18:00 crc kubenswrapper[4867]: I0126 11:18:00.796795 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:18:00 crc kubenswrapper[4867]: I0126 11:18:00.796818 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:18:00 crc kubenswrapper[4867]: I0126 11:18:00.796832 4867 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:18:00Z","lastTransitionTime":"2026-01-26T11:18:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 11:18:00 crc kubenswrapper[4867]: I0126 11:18:00.805422 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:54Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:54Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8b0139861b48347f9916ca780499486eda856ef7c08cfe6c57201daf085bf61\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\
"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:18:00Z is after 2025-08-24T17:21:41Z" Jan 26 11:18:00 crc kubenswrapper[4867]: I0126 11:18:00.827846 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-9fjlf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d0cb57c7-fd32-41c2-b873-a3f017b9f1b1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:52Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:52Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:52Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhrdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://49541a51fced16e579b4cd10875fe7e660d7674508aba3ea44011642241e6556\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://49541a51fced16e579b4cd10875fe7e660d7674508aba3ea44011642241e6556\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T11:17:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T11:17:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhrdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://de0898b43ff7857dd40569f0552c8802c9bb5ecdf3cbd23f552dfb402a02044c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://de0898b43ff7857dd40569f0552c8802c9bb5ecdf3cbd23f552dfb402a02044c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T11:17:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T11:17:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhrdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f299de50c4b1a26c4ed82f6f7b17c063e00b8
a9de478fa0adafb4abb50195c5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f299de50c4b1a26c4ed82f6f7b17c063e00b8a9de478fa0adafb4abb50195c5f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T11:17:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T11:17:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhrdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5bf54cdee66861e0419af5e4cabd6e8410b771df717f2e958f5eda8d14c47862\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5bf54cdee66861e0419af5e4cabd6e8410b771df717f2e958f5eda8d14c47862\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T11:17:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-0
1-26T11:17:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhrdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d802b7e0dce47eace09ec80bf6299fbe57b588891ebab7f34de835f6a7314ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3d802b7e0dce47eace09ec80bf6299fbe57b588891ebab7f34de835f6a7314ac\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T11:17:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T11:17:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhrdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"l
astState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhrdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T11:17:52Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-9fjlf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:18:00Z is after 2025-08-24T17:21:41Z" Jan 26 11:18:00 crc kubenswrapper[4867]: I0126 11:18:00.838758 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-nxkwj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"53ae46dc-30ef-4dfd-b80e-bacd7542634f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a6b647bab472836bbf6aebd01d20d186c5a3fb95f20cc9f44ec837d93c7df617\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b5h2x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T11:17:54Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-nxkwj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:18:00Z is after 2025-08-24T17:21:41Z" Jan 26 11:18:00 crc kubenswrapper[4867]: I0126 11:18:00.904287 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:18:00 crc kubenswrapper[4867]: I0126 11:18:00.904334 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:18:00 crc kubenswrapper[4867]: I0126 11:18:00.904345 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:18:00 crc kubenswrapper[4867]: I0126 11:18:00.904365 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:18:00 crc kubenswrapper[4867]: I0126 11:18:00.904384 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:18:00Z","lastTransitionTime":"2026-01-26T11:18:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:18:01 crc kubenswrapper[4867]: I0126 11:18:01.007518 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:18:01 crc kubenswrapper[4867]: I0126 11:18:01.007589 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:18:01 crc kubenswrapper[4867]: I0126 11:18:01.007609 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:18:01 crc kubenswrapper[4867]: I0126 11:18:01.007637 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:18:01 crc kubenswrapper[4867]: I0126 11:18:01.007661 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:18:01Z","lastTransitionTime":"2026-01-26T11:18:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:18:01 crc kubenswrapper[4867]: I0126 11:18:01.110451 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:18:01 crc kubenswrapper[4867]: I0126 11:18:01.110709 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:18:01 crc kubenswrapper[4867]: I0126 11:18:01.110772 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:18:01 crc kubenswrapper[4867]: I0126 11:18:01.110848 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:18:01 crc kubenswrapper[4867]: I0126 11:18:01.110922 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:18:01Z","lastTransitionTime":"2026-01-26T11:18:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:18:01 crc kubenswrapper[4867]: I0126 11:18:01.214287 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:18:01 crc kubenswrapper[4867]: I0126 11:18:01.214708 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:18:01 crc kubenswrapper[4867]: I0126 11:18:01.214793 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:18:01 crc kubenswrapper[4867]: I0126 11:18:01.214900 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:18:01 crc kubenswrapper[4867]: I0126 11:18:01.215004 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:18:01Z","lastTransitionTime":"2026-01-26T11:18:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:18:01 crc kubenswrapper[4867]: I0126 11:18:01.318297 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:18:01 crc kubenswrapper[4867]: I0126 11:18:01.318359 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:18:01 crc kubenswrapper[4867]: I0126 11:18:01.318378 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:18:01 crc kubenswrapper[4867]: I0126 11:18:01.318411 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:18:01 crc kubenswrapper[4867]: I0126 11:18:01.318435 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:18:01Z","lastTransitionTime":"2026-01-26T11:18:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:18:01 crc kubenswrapper[4867]: I0126 11:18:01.421633 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:18:01 crc kubenswrapper[4867]: I0126 11:18:01.421732 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:18:01 crc kubenswrapper[4867]: I0126 11:18:01.421763 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:18:01 crc kubenswrapper[4867]: I0126 11:18:01.421799 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:18:01 crc kubenswrapper[4867]: I0126 11:18:01.421822 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:18:01Z","lastTransitionTime":"2026-01-26T11:18:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:18:01 crc kubenswrapper[4867]: I0126 11:18:01.524857 4867 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-12 04:45:14.947828164 +0000 UTC Jan 26 11:18:01 crc kubenswrapper[4867]: I0126 11:18:01.525884 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:18:01 crc kubenswrapper[4867]: I0126 11:18:01.525974 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:18:01 crc kubenswrapper[4867]: I0126 11:18:01.525997 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:18:01 crc kubenswrapper[4867]: I0126 11:18:01.526026 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:18:01 crc kubenswrapper[4867]: I0126 11:18:01.526049 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:18:01Z","lastTransitionTime":"2026-01-26T11:18:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:18:01 crc kubenswrapper[4867]: I0126 11:18:01.629571 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:18:01 crc kubenswrapper[4867]: I0126 11:18:01.629987 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:18:01 crc kubenswrapper[4867]: I0126 11:18:01.630006 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:18:01 crc kubenswrapper[4867]: I0126 11:18:01.630029 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:18:01 crc kubenswrapper[4867]: I0126 11:18:01.630042 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:18:01Z","lastTransitionTime":"2026-01-26T11:18:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:18:01 crc kubenswrapper[4867]: I0126 11:18:01.651593 4867 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Jan 26 11:18:01 crc kubenswrapper[4867]: I0126 11:18:01.733884 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:18:01 crc kubenswrapper[4867]: I0126 11:18:01.733921 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:18:01 crc kubenswrapper[4867]: I0126 11:18:01.733933 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:18:01 crc kubenswrapper[4867]: I0126 11:18:01.733952 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:18:01 crc kubenswrapper[4867]: I0126 11:18:01.733965 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:18:01Z","lastTransitionTime":"2026-01-26T11:18:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:18:01 crc kubenswrapper[4867]: I0126 11:18:01.813904 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-9fjlf" event={"ID":"d0cb57c7-fd32-41c2-b873-a3f017b9f1b1","Type":"ContainerStarted","Data":"e77fa1081a2d8d26fe07f92196b85dd5b8eb28583ce5489e2f5cbcfc6fff8780"} Jan 26 11:18:01 crc kubenswrapper[4867]: I0126 11:18:01.837109 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:18:01 crc kubenswrapper[4867]: I0126 11:18:01.837149 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:18:01 crc kubenswrapper[4867]: I0126 11:18:01.837161 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:18:01 crc kubenswrapper[4867]: I0126 11:18:01.837180 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:18:01 crc kubenswrapper[4867]: I0126 11:18:01.837194 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:18:01Z","lastTransitionTime":"2026-01-26T11:18:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:18:01 crc kubenswrapper[4867]: I0126 11:18:01.941279 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:18:01 crc kubenswrapper[4867]: I0126 11:18:01.941339 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:18:01 crc kubenswrapper[4867]: I0126 11:18:01.941357 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:18:01 crc kubenswrapper[4867]: I0126 11:18:01.941377 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:18:01 crc kubenswrapper[4867]: I0126 11:18:01.941391 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:18:01Z","lastTransitionTime":"2026-01-26T11:18:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:18:02 crc kubenswrapper[4867]: I0126 11:18:02.044844 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:18:02 crc kubenswrapper[4867]: I0126 11:18:02.044935 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:18:02 crc kubenswrapper[4867]: I0126 11:18:02.044962 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:18:02 crc kubenswrapper[4867]: I0126 11:18:02.044994 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:18:02 crc kubenswrapper[4867]: I0126 11:18:02.045016 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:18:02Z","lastTransitionTime":"2026-01-26T11:18:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:18:02 crc kubenswrapper[4867]: I0126 11:18:02.148280 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:18:02 crc kubenswrapper[4867]: I0126 11:18:02.148350 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:18:02 crc kubenswrapper[4867]: I0126 11:18:02.148370 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:18:02 crc kubenswrapper[4867]: I0126 11:18:02.148400 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:18:02 crc kubenswrapper[4867]: I0126 11:18:02.148425 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:18:02Z","lastTransitionTime":"2026-01-26T11:18:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:18:02 crc kubenswrapper[4867]: I0126 11:18:02.251702 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:18:02 crc kubenswrapper[4867]: I0126 11:18:02.251763 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:18:02 crc kubenswrapper[4867]: I0126 11:18:02.251780 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:18:02 crc kubenswrapper[4867]: I0126 11:18:02.251808 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:18:02 crc kubenswrapper[4867]: I0126 11:18:02.251849 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:18:02Z","lastTransitionTime":"2026-01-26T11:18:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:18:02 crc kubenswrapper[4867]: I0126 11:18:02.355336 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:18:02 crc kubenswrapper[4867]: I0126 11:18:02.355414 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:18:02 crc kubenswrapper[4867]: I0126 11:18:02.355440 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:18:02 crc kubenswrapper[4867]: I0126 11:18:02.355471 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:18:02 crc kubenswrapper[4867]: I0126 11:18:02.355495 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:18:02Z","lastTransitionTime":"2026-01-26T11:18:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:18:02 crc kubenswrapper[4867]: I0126 11:18:02.459422 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:18:02 crc kubenswrapper[4867]: I0126 11:18:02.459518 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:18:02 crc kubenswrapper[4867]: I0126 11:18:02.459574 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:18:02 crc kubenswrapper[4867]: I0126 11:18:02.459610 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:18:02 crc kubenswrapper[4867]: I0126 11:18:02.459634 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:18:02Z","lastTransitionTime":"2026-01-26T11:18:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 11:18:02 crc kubenswrapper[4867]: I0126 11:18:02.525502 4867 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-29 22:40:47.929374955 +0000 UTC Jan 26 11:18:02 crc kubenswrapper[4867]: I0126 11:18:02.562737 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:18:02 crc kubenswrapper[4867]: I0126 11:18:02.562834 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:18:02 crc kubenswrapper[4867]: I0126 11:18:02.562861 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 11:18:02 crc kubenswrapper[4867]: E0126 11:18:02.563019 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 11:18:02 crc kubenswrapper[4867]: I0126 11:18:02.562872 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 11:18:02 crc kubenswrapper[4867]: I0126 11:18:02.562877 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:18:02 crc kubenswrapper[4867]: I0126 11:18:02.563125 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:18:02 crc kubenswrapper[4867]: I0126 11:18:02.563137 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 11:18:02 crc kubenswrapper[4867]: I0126 11:18:02.563144 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:18:02Z","lastTransitionTime":"2026-01-26T11:18:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:18:02 crc kubenswrapper[4867]: E0126 11:18:02.563345 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 11:18:02 crc kubenswrapper[4867]: E0126 11:18:02.563431 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 11:18:02 crc kubenswrapper[4867]: I0126 11:18:02.666408 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:18:02 crc kubenswrapper[4867]: I0126 11:18:02.666459 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:18:02 crc kubenswrapper[4867]: I0126 11:18:02.666470 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:18:02 crc kubenswrapper[4867]: I0126 11:18:02.666487 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:18:02 crc kubenswrapper[4867]: I0126 11:18:02.666499 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:18:02Z","lastTransitionTime":"2026-01-26T11:18:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 11:18:02 crc kubenswrapper[4867]: I0126 11:18:02.769123 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:18:02 crc kubenswrapper[4867]: I0126 11:18:02.769199 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:18:02 crc kubenswrapper[4867]: I0126 11:18:02.769250 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:18:02 crc kubenswrapper[4867]: I0126 11:18:02.769284 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:18:02 crc kubenswrapper[4867]: I0126 11:18:02.769306 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:18:02Z","lastTransitionTime":"2026-01-26T11:18:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:18:02 crc kubenswrapper[4867]: I0126 11:18:02.873326 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:18:02 crc kubenswrapper[4867]: I0126 11:18:02.873402 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:18:02 crc kubenswrapper[4867]: I0126 11:18:02.873421 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:18:02 crc kubenswrapper[4867]: I0126 11:18:02.873455 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:18:02 crc kubenswrapper[4867]: I0126 11:18:02.873476 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:18:02Z","lastTransitionTime":"2026-01-26T11:18:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:18:02 crc kubenswrapper[4867]: I0126 11:18:02.978005 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:18:02 crc kubenswrapper[4867]: I0126 11:18:02.978098 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:18:02 crc kubenswrapper[4867]: I0126 11:18:02.978120 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:18:02 crc kubenswrapper[4867]: I0126 11:18:02.978151 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:18:02 crc kubenswrapper[4867]: I0126 11:18:02.978172 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:18:02Z","lastTransitionTime":"2026-01-26T11:18:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:18:03 crc kubenswrapper[4867]: I0126 11:18:03.082337 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:18:03 crc kubenswrapper[4867]: I0126 11:18:03.082403 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:18:03 crc kubenswrapper[4867]: I0126 11:18:03.082419 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:18:03 crc kubenswrapper[4867]: I0126 11:18:03.082447 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:18:03 crc kubenswrapper[4867]: I0126 11:18:03.082467 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:18:03Z","lastTransitionTime":"2026-01-26T11:18:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:18:03 crc kubenswrapper[4867]: I0126 11:18:03.186312 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:18:03 crc kubenswrapper[4867]: I0126 11:18:03.186415 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:18:03 crc kubenswrapper[4867]: I0126 11:18:03.186456 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:18:03 crc kubenswrapper[4867]: I0126 11:18:03.186494 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:18:03 crc kubenswrapper[4867]: I0126 11:18:03.186519 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:18:03Z","lastTransitionTime":"2026-01-26T11:18:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:18:03 crc kubenswrapper[4867]: I0126 11:18:03.295297 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:18:03 crc kubenswrapper[4867]: I0126 11:18:03.295365 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:18:03 crc kubenswrapper[4867]: I0126 11:18:03.295388 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:18:03 crc kubenswrapper[4867]: I0126 11:18:03.295413 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:18:03 crc kubenswrapper[4867]: I0126 11:18:03.295435 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:18:03Z","lastTransitionTime":"2026-01-26T11:18:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:18:03 crc kubenswrapper[4867]: I0126 11:18:03.399070 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:18:03 crc kubenswrapper[4867]: I0126 11:18:03.399144 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:18:03 crc kubenswrapper[4867]: I0126 11:18:03.399162 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:18:03 crc kubenswrapper[4867]: I0126 11:18:03.399193 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:18:03 crc kubenswrapper[4867]: I0126 11:18:03.399274 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:18:03Z","lastTransitionTime":"2026-01-26T11:18:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:18:03 crc kubenswrapper[4867]: I0126 11:18:03.503514 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:18:03 crc kubenswrapper[4867]: I0126 11:18:03.503590 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:18:03 crc kubenswrapper[4867]: I0126 11:18:03.503608 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:18:03 crc kubenswrapper[4867]: I0126 11:18:03.503636 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:18:03 crc kubenswrapper[4867]: I0126 11:18:03.503655 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:18:03Z","lastTransitionTime":"2026-01-26T11:18:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:18:03 crc kubenswrapper[4867]: I0126 11:18:03.526546 4867 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-12 20:17:43.383466644 +0000 UTC Jan 26 11:18:03 crc kubenswrapper[4867]: I0126 11:18:03.577209 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:18:03 crc kubenswrapper[4867]: I0126 11:18:03.577297 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:18:03 crc kubenswrapper[4867]: I0126 11:18:03.577311 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:18:03 crc kubenswrapper[4867]: I0126 11:18:03.577339 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:18:03 crc kubenswrapper[4867]: I0126 11:18:03.577354 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:18:03Z","lastTransitionTime":"2026-01-26T11:18:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:18:03 crc kubenswrapper[4867]: E0126 11:18:03.595034 4867 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404548Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865348Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T11:18:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T11:18:03Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T11:18:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T11:18:03Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T11:18:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T11:18:03Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T11:18:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T11:18:03Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b5a0e2ac-5e06-462a-99e0-d57b8e5cb754\\\",\\\"systemUUID\\\":\\\"db81a289-a49c-46ba-99b1-fd2eecfd5410\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:18:03Z is after 2025-08-24T17:21:41Z" Jan 26 11:18:03 crc kubenswrapper[4867]: I0126 11:18:03.600754 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:18:03 crc kubenswrapper[4867]: I0126 11:18:03.600829 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:18:03 crc kubenswrapper[4867]: I0126 11:18:03.600848 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:18:03 crc kubenswrapper[4867]: I0126 11:18:03.600880 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:18:03 crc kubenswrapper[4867]: I0126 11:18:03.600987 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:18:03Z","lastTransitionTime":"2026-01-26T11:18:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:18:03 crc kubenswrapper[4867]: E0126 11:18:03.619674 4867 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404548Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865348Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T11:18:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T11:18:03Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T11:18:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T11:18:03Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T11:18:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T11:18:03Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T11:18:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T11:18:03Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b5a0e2ac-5e06-462a-99e0-d57b8e5cb754\\\",\\\"systemUUID\\\":\\\"db81a289-a49c-46ba-99b1-fd2eecfd5410\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:18:03Z is after 2025-08-24T17:21:41Z" Jan 26 11:18:03 crc kubenswrapper[4867]: I0126 11:18:03.624577 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:18:03 crc kubenswrapper[4867]: I0126 11:18:03.624625 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:18:03 crc kubenswrapper[4867]: I0126 11:18:03.624639 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:18:03 crc kubenswrapper[4867]: I0126 11:18:03.624662 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:18:03 crc kubenswrapper[4867]: I0126 11:18:03.624674 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:18:03Z","lastTransitionTime":"2026-01-26T11:18:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:18:03 crc kubenswrapper[4867]: E0126 11:18:03.643929 4867 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404548Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865348Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T11:18:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T11:18:03Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T11:18:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T11:18:03Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T11:18:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T11:18:03Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T11:18:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T11:18:03Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b5a0e2ac-5e06-462a-99e0-d57b8e5cb754\\\",\\\"systemUUID\\\":\\\"db81a289-a49c-46ba-99b1-fd2eecfd5410\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:18:03Z is after 2025-08-24T17:21:41Z" Jan 26 11:18:03 crc kubenswrapper[4867]: I0126 11:18:03.702134 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:18:03 crc kubenswrapper[4867]: I0126 11:18:03.702216 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:18:03 crc kubenswrapper[4867]: I0126 11:18:03.702286 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:18:03 crc kubenswrapper[4867]: I0126 11:18:03.702315 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:18:03 crc kubenswrapper[4867]: I0126 11:18:03.702336 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:18:03Z","lastTransitionTime":"2026-01-26T11:18:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:18:03 crc kubenswrapper[4867]: E0126 11:18:03.718745 4867 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404548Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865348Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T11:18:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T11:18:03Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T11:18:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T11:18:03Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T11:18:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T11:18:03Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T11:18:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T11:18:03Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b5a0e2ac-5e06-462a-99e0-d57b8e5cb754\\\",\\\"systemUUID\\\":\\\"db81a289-a49c-46ba-99b1-fd2eecfd5410\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:18:03Z is after 2025-08-24T17:21:41Z" Jan 26 11:18:03 crc kubenswrapper[4867]: I0126 11:18:03.725053 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:18:03 crc kubenswrapper[4867]: I0126 11:18:03.725115 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:18:03 crc kubenswrapper[4867]: I0126 11:18:03.725143 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:18:03 crc kubenswrapper[4867]: I0126 11:18:03.725176 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:18:03 crc kubenswrapper[4867]: I0126 11:18:03.725201 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:18:03Z","lastTransitionTime":"2026-01-26T11:18:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:18:03 crc kubenswrapper[4867]: E0126 11:18:03.741851 4867 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404548Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865348Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T11:18:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T11:18:03Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T11:18:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T11:18:03Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T11:18:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T11:18:03Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T11:18:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T11:18:03Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b5a0e2ac-5e06-462a-99e0-d57b8e5cb754\\\",\\\"systemUUID\\\":\\\"db81a289-a49c-46ba-99b1-fd2eecfd5410\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:18:03Z is after 2025-08-24T17:21:41Z" Jan 26 11:18:03 crc kubenswrapper[4867]: E0126 11:18:03.742017 4867 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 26 11:18:03 crc kubenswrapper[4867]: I0126 11:18:03.744561 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:18:03 crc kubenswrapper[4867]: I0126 11:18:03.744610 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:18:03 crc kubenswrapper[4867]: I0126 11:18:03.744632 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:18:03 crc kubenswrapper[4867]: I0126 11:18:03.744660 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:18:03 crc kubenswrapper[4867]: I0126 11:18:03.744680 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:18:03Z","lastTransitionTime":"2026-01-26T11:18:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:18:03 crc kubenswrapper[4867]: I0126 11:18:03.847188 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:18:03 crc kubenswrapper[4867]: I0126 11:18:03.847244 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:18:03 crc kubenswrapper[4867]: I0126 11:18:03.847257 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:18:03 crc kubenswrapper[4867]: I0126 11:18:03.847274 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:18:03 crc kubenswrapper[4867]: I0126 11:18:03.847286 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:18:03Z","lastTransitionTime":"2026-01-26T11:18:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:18:03 crc kubenswrapper[4867]: I0126 11:18:03.951847 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:18:03 crc kubenswrapper[4867]: I0126 11:18:03.951922 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:18:03 crc kubenswrapper[4867]: I0126 11:18:03.951969 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:18:03 crc kubenswrapper[4867]: I0126 11:18:03.952004 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:18:03 crc kubenswrapper[4867]: I0126 11:18:03.952026 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:18:03Z","lastTransitionTime":"2026-01-26T11:18:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:18:04 crc kubenswrapper[4867]: I0126 11:18:04.055871 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:18:04 crc kubenswrapper[4867]: I0126 11:18:04.055934 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:18:04 crc kubenswrapper[4867]: I0126 11:18:04.055943 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:18:04 crc kubenswrapper[4867]: I0126 11:18:04.055963 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:18:04 crc kubenswrapper[4867]: I0126 11:18:04.055974 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:18:04Z","lastTransitionTime":"2026-01-26T11:18:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:18:04 crc kubenswrapper[4867]: I0126 11:18:04.159019 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:18:04 crc kubenswrapper[4867]: I0126 11:18:04.159074 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:18:04 crc kubenswrapper[4867]: I0126 11:18:04.159084 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:18:04 crc kubenswrapper[4867]: I0126 11:18:04.159109 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:18:04 crc kubenswrapper[4867]: I0126 11:18:04.159120 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:18:04Z","lastTransitionTime":"2026-01-26T11:18:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:18:04 crc kubenswrapper[4867]: I0126 11:18:04.262593 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:18:04 crc kubenswrapper[4867]: I0126 11:18:04.262652 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:18:04 crc kubenswrapper[4867]: I0126 11:18:04.262669 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:18:04 crc kubenswrapper[4867]: I0126 11:18:04.262697 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:18:04 crc kubenswrapper[4867]: I0126 11:18:04.262716 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:18:04Z","lastTransitionTime":"2026-01-26T11:18:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:18:04 crc kubenswrapper[4867]: I0126 11:18:04.365557 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:18:04 crc kubenswrapper[4867]: I0126 11:18:04.365617 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:18:04 crc kubenswrapper[4867]: I0126 11:18:04.365636 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:18:04 crc kubenswrapper[4867]: I0126 11:18:04.365663 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:18:04 crc kubenswrapper[4867]: I0126 11:18:04.365681 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:18:04Z","lastTransitionTime":"2026-01-26T11:18:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:18:04 crc kubenswrapper[4867]: I0126 11:18:04.469853 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:18:04 crc kubenswrapper[4867]: I0126 11:18:04.469927 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:18:04 crc kubenswrapper[4867]: I0126 11:18:04.469961 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:18:04 crc kubenswrapper[4867]: I0126 11:18:04.469989 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:18:04 crc kubenswrapper[4867]: I0126 11:18:04.470007 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:18:04Z","lastTransitionTime":"2026-01-26T11:18:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 11:18:04 crc kubenswrapper[4867]: I0126 11:18:04.526840 4867 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-09 13:42:51.291685529 +0000 UTC Jan 26 11:18:04 crc kubenswrapper[4867]: I0126 11:18:04.563696 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 11:18:04 crc kubenswrapper[4867]: I0126 11:18:04.563941 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 11:18:04 crc kubenswrapper[4867]: E0126 11:18:04.564079 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 11:18:04 crc kubenswrapper[4867]: I0126 11:18:04.564309 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 11:18:04 crc kubenswrapper[4867]: E0126 11:18:04.564373 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 11:18:04 crc kubenswrapper[4867]: E0126 11:18:04.564399 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 11:18:04 crc kubenswrapper[4867]: I0126 11:18:04.573087 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:18:04 crc kubenswrapper[4867]: I0126 11:18:04.573135 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:18:04 crc kubenswrapper[4867]: I0126 11:18:04.573150 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:18:04 crc kubenswrapper[4867]: I0126 11:18:04.573169 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:18:04 crc kubenswrapper[4867]: I0126 11:18:04.573207 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:18:04Z","lastTransitionTime":"2026-01-26T11:18:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:18:04 crc kubenswrapper[4867]: I0126 11:18:04.677568 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:18:04 crc kubenswrapper[4867]: I0126 11:18:04.677663 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:18:04 crc kubenswrapper[4867]: I0126 11:18:04.677689 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:18:04 crc kubenswrapper[4867]: I0126 11:18:04.677730 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:18:04 crc kubenswrapper[4867]: I0126 11:18:04.677757 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:18:04Z","lastTransitionTime":"2026-01-26T11:18:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:18:04 crc kubenswrapper[4867]: I0126 11:18:04.781118 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:18:04 crc kubenswrapper[4867]: I0126 11:18:04.781186 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:18:04 crc kubenswrapper[4867]: I0126 11:18:04.781208 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:18:04 crc kubenswrapper[4867]: I0126 11:18:04.781263 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:18:04 crc kubenswrapper[4867]: I0126 11:18:04.781286 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:18:04Z","lastTransitionTime":"2026-01-26T11:18:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:18:04 crc kubenswrapper[4867]: I0126 11:18:04.884708 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:18:04 crc kubenswrapper[4867]: I0126 11:18:04.885294 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:18:04 crc kubenswrapper[4867]: I0126 11:18:04.885523 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:18:04 crc kubenswrapper[4867]: I0126 11:18:04.885728 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:18:04 crc kubenswrapper[4867]: I0126 11:18:04.885920 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:18:04Z","lastTransitionTime":"2026-01-26T11:18:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:18:04 crc kubenswrapper[4867]: I0126 11:18:04.989260 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:18:04 crc kubenswrapper[4867]: I0126 11:18:04.989322 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:18:04 crc kubenswrapper[4867]: I0126 11:18:04.989338 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:18:04 crc kubenswrapper[4867]: I0126 11:18:04.989360 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:18:04 crc kubenswrapper[4867]: I0126 11:18:04.989376 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:18:04Z","lastTransitionTime":"2026-01-26T11:18:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:18:05 crc kubenswrapper[4867]: I0126 11:18:05.093141 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:18:05 crc kubenswrapper[4867]: I0126 11:18:05.093190 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:18:05 crc kubenswrapper[4867]: I0126 11:18:05.093207 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:18:05 crc kubenswrapper[4867]: I0126 11:18:05.093267 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:18:05 crc kubenswrapper[4867]: I0126 11:18:05.093287 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:18:05Z","lastTransitionTime":"2026-01-26T11:18:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:18:05 crc kubenswrapper[4867]: I0126 11:18:05.196400 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:18:05 crc kubenswrapper[4867]: I0126 11:18:05.196461 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:18:05 crc kubenswrapper[4867]: I0126 11:18:05.196473 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:18:05 crc kubenswrapper[4867]: I0126 11:18:05.196495 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:18:05 crc kubenswrapper[4867]: I0126 11:18:05.196509 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:18:05Z","lastTransitionTime":"2026-01-26T11:18:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 11:18:05 crc kubenswrapper[4867]: I0126 11:18:05.284285 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-nbvlt"] Jan 26 11:18:05 crc kubenswrapper[4867]: I0126 11:18:05.285246 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-nbvlt" Jan 26 11:18:05 crc kubenswrapper[4867]: I0126 11:18:05.287692 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Jan 26 11:18:05 crc kubenswrapper[4867]: I0126 11:18:05.288252 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Jan 26 11:18:05 crc kubenswrapper[4867]: I0126 11:18:05.300199 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:18:05 crc kubenswrapper[4867]: I0126 11:18:05.300711 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:18:05 crc kubenswrapper[4867]: I0126 11:18:05.300883 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:18:05 crc kubenswrapper[4867]: I0126 11:18:05.301039 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:18:05 crc kubenswrapper[4867]: I0126 11:18:05.301216 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:18:05Z","lastTransitionTime":"2026-01-26T11:18:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:18:05 crc kubenswrapper[4867]: I0126 11:18:05.301527 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-nxkwj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"53ae46dc-30ef-4dfd-b80e-bacd7542634f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a6b647bab472836bbf6aebd01d20d186c5a3fb95f20cc9f44ec837d93c7df617\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes
.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b5h2x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T11:17:54Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-nxkwj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:18:05Z is after 2025-08-24T17:21:41Z" Jan 26 11:18:05 crc kubenswrapper[4867]: I0126 11:18:05.314939 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:48Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:18:05Z is after 2025-08-24T17:21:41Z" Jan 26 11:18:05 crc kubenswrapper[4867]: I0126 11:18:05.350196 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:54Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:54Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8b0139861b48347f9916ca780499486eda856ef7c08cfe6c57201daf085bf61\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-26T11:18:05Z is after 2025-08-24T17:21:41Z" Jan 26 11:18:05 crc kubenswrapper[4867]: I0126 11:18:05.367039 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-9fjlf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d0cb57c7-fd32-41c2-b873-a3f017b9f1b1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:52Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:52Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:52Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhrdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://49541a51fced16e579b4cd10875fe7e660d7674508aba3ea44011642241e6556\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://49541a51fced16e579b4cd10875fe7e660d7674508aba3ea44011642241e6556\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T11:17:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T11:17:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhrdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://de0898b43ff7857dd40569f0552c8802c9bb5ecdf3cbd23f552dfb402a02044c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://de0898b43ff7857dd40569f0552c8802c9bb5ecdf3cbd23f552dfb402a02044c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T11:17:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T11:17:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhrdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f299de50c4b1a26c4ed82f6f7b17c063e00b8a9de478fa0adafb4abb50195c5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f299de50c4b1a26c4ed82f6f7b17c063e00b8a9de478fa0adafb4abb50195c5f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T11:17:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T11:17:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhrdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5bf54cdee66861e0419af5e4cabd6e8410b771df717f2e958f5eda8d14c47862\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5bf54cdee66861e0419af5e4cabd6e8410b771df717f2e958f5eda8d14c47862\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T11:17:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T11:17:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhrdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d802b7e0dce47eace09ec80bf6299fbe57b588891ebab7f34de835f6a7314ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3d802b7e0dce47eace09ec80bf6299fbe57b588891ebab7f34de835f6a7314ac\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T11:17:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T11:17:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhrdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"c
nibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhrdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T11:17:52Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-9fjlf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:18:05Z is after 2025-08-24T17:21:41Z" Jan 26 11:18:05 crc kubenswrapper[4867]: I0126 11:18:05.404698 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:18:05 crc kubenswrapper[4867]: I0126 11:18:05.404757 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:18:05 crc kubenswrapper[4867]: I0126 11:18:05.404775 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:18:05 crc kubenswrapper[4867]: I0126 11:18:05.404802 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:18:05 crc kubenswrapper[4867]: I0126 11:18:05.404820 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:18:05Z","lastTransitionTime":"2026-01-26T11:18:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:18:05 crc kubenswrapper[4867]: I0126 11:18:05.405878 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4fbb2033-c5c9-48d1-971f-252b8982e64c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b260f2517962ffaecebf86c2b72c91486b825ca898f907378e8f8cea16d1db92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\
\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6843bd212ce4f17d2ae4d53441d56f7be28f8f49c8994458d611159101e193a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2aba77c7c6a2f5450fa60356c3eccf816726f0074d65a1d5805c4278d31157c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e941db67a8021f5eb4eb6e395c2369d052a4cde25d67eb1885768a064185d8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-
v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc9e90f1b0e6c09e4595a7e450325550012dc52c6a6fc4a95d14b37c0f939ce7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e22739a3c06bddfda493f25aca23cf47d71c8bde27a6b4bb505592b42350752\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":
true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0e22739a3c06bddfda493f25aca23cf47d71c8bde27a6b4bb505592b42350752\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T11:17:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T11:17:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://80b44caec3547caebca126742ca337ad1851edc3e9474127528a9e5184b5ec23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://80b44caec3547caebca126742ca337ad1851edc3e9474127528a9e5184b5ec23\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T11:17:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T11:17:32Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://83f7cf77acd26e7af095c4a5432efde85eef6dd5e434141cb5d6abd12770968e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://83f7cf77acd26e7af095c4a5432efde85eef6dd5e434141cb5d6abd12770968e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T11:17:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2
026-01-26T11:17:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T11:17:30Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:18:05Z is after 2025-08-24T17:21:41Z" Jan 26 11:18:05 crc kubenswrapper[4867]: I0126 11:18:05.422322 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-g6cth" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"115cad9f-057f-4e63-b408-8fa7a358a191\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ff5997c5ecb7b6a4257e45de25c7b8f3cbe39cc5efe49cf2e2baff5447a947b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4wqzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a97a794991491a7b7ac8c0d20d0fa2f4f36c8ba8
82962b5c203933431d324105\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4wqzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T11:17:52Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-g6cth\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:18:05Z is after 2025-08-24T17:21:41Z" Jan 26 11:18:05 crc kubenswrapper[4867]: I0126 11:18:05.445325 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/cf285485-1027-4bdc-bdfa-934ef32e7f5e-env-overrides\") pod \"ovnkube-control-plane-749d76644c-nbvlt\" (UID: \"cf285485-1027-4bdc-bdfa-934ef32e7f5e\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-nbvlt" Jan 26 11:18:05 crc kubenswrapper[4867]: I0126 11:18:05.445376 4867 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kzhnr\" (UniqueName: \"kubernetes.io/projected/cf285485-1027-4bdc-bdfa-934ef32e7f5e-kube-api-access-kzhnr\") pod \"ovnkube-control-plane-749d76644c-nbvlt\" (UID: \"cf285485-1027-4bdc-bdfa-934ef32e7f5e\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-nbvlt" Jan 26 11:18:05 crc kubenswrapper[4867]: I0126 11:18:05.445404 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/cf285485-1027-4bdc-bdfa-934ef32e7f5e-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-nbvlt\" (UID: \"cf285485-1027-4bdc-bdfa-934ef32e7f5e\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-nbvlt" Jan 26 11:18:05 crc kubenswrapper[4867]: I0126 11:18:05.445446 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/cf285485-1027-4bdc-bdfa-934ef32e7f5e-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-nbvlt\" (UID: \"cf285485-1027-4bdc-bdfa-934ef32e7f5e\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-nbvlt" Jan 26 11:18:05 crc kubenswrapper[4867]: I0126 11:18:05.456475 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-p8ngn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4a3be637-cf04-4c55-bf72-67fdad83cc44\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:52Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:52Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cn6f7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cn6f7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cn6f7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cn6f7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cn6f7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cn6f7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cn6f7\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cn6f7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://388ea9d9185ed0fed9cbb0da08ab9aa6fcf1d81192fe2646cd51b54245c4e34b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://388ea9d9185ed0fed9cbb0da08ab9aa6fcf1d81192fe2646cd51b54245c4e34b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T11:17:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T11:17:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cn6f7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T11:17:52Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-p8ngn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:18:05Z is after 2025-08-24T17:21:41Z" Jan 26 11:18:05 crc kubenswrapper[4867]: I0126 11:18:05.469431 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:48Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:18:05Z is after 2025-08-24T17:21:41Z" Jan 26 11:18:05 crc kubenswrapper[4867]: I0126 11:18:05.482206 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:48Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:18:05Z is after 2025-08-24T17:21:41Z" Jan 26 11:18:05 crc kubenswrapper[4867]: I0126 11:18:05.492954 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-wmdmh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6ad862fa-4af9-49f7-a629-ebf54a83ca45\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://726513fd6aefd76b58a4c8cf08fa4588aadcdb02e8dbe3bb4f23004f11eb29dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6hvh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T11:17:51Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-wmdmh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:18:05Z is after 2025-08-24T17:21:41Z" Jan 26 11:18:05 crc kubenswrapper[4867]: I0126 11:18:05.507886 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:18:05 crc kubenswrapper[4867]: I0126 11:18:05.507938 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:18:05 crc kubenswrapper[4867]: I0126 11:18:05.507952 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:18:05 crc kubenswrapper[4867]: I0126 11:18:05.507974 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:18:05 crc kubenswrapper[4867]: I0126 11:18:05.507990 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:18:05Z","lastTransitionTime":"2026-01-26T11:18:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:18:05 crc kubenswrapper[4867]: I0126 11:18:05.509928 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0d7b1047-65d1-4695-b5a5-95ea6a8c3b54\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://237a29b411629c3650250f62fdec7d2c5412763d6cabb2ee0fe8e5e19e320e15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ef0db1149c
70e41b7f44d00d43f883d14fcf635d6f34462dd37c8a550a15a4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://45d3dc673a8d20531ae5077a1196a368cac64f6afd15c4eb8f3910ee9bf51317\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc2ddda5781094e59bb54ddf724fa4f948a047b17b01f0185ef651d90a66f36f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:850
6ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T11:17:30Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:18:05Z is after 2025-08-24T17:21:41Z" Jan 26 11:18:05 crc kubenswrapper[4867]: I0126 11:18:05.527882 4867 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-13 00:16:21.523661265 +0000 UTC Jan 26 11:18:05 crc kubenswrapper[4867]: I0126 11:18:05.530553 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c572224f36b5a0fc8a86b3e61aa97b96f5bd189d245b31804a11aaa75d04774\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-26T11:18:05Z is after 2025-08-24T17:21:41Z" Jan 26 11:18:05 crc kubenswrapper[4867]: I0126 11:18:05.547196 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/cf285485-1027-4bdc-bdfa-934ef32e7f5e-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-nbvlt\" (UID: \"cf285485-1027-4bdc-bdfa-934ef32e7f5e\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-nbvlt" Jan 26 11:18:05 crc kubenswrapper[4867]: I0126 11:18:05.547361 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/cf285485-1027-4bdc-bdfa-934ef32e7f5e-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-nbvlt\" (UID: \"cf285485-1027-4bdc-bdfa-934ef32e7f5e\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-nbvlt" Jan 26 11:18:05 crc kubenswrapper[4867]: I0126 11:18:05.547533 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/cf285485-1027-4bdc-bdfa-934ef32e7f5e-env-overrides\") pod \"ovnkube-control-plane-749d76644c-nbvlt\" (UID: \"cf285485-1027-4bdc-bdfa-934ef32e7f5e\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-nbvlt" Jan 26 11:18:05 crc kubenswrapper[4867]: I0126 11:18:05.547599 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kzhnr\" (UniqueName: \"kubernetes.io/projected/cf285485-1027-4bdc-bdfa-934ef32e7f5e-kube-api-access-kzhnr\") pod \"ovnkube-control-plane-749d76644c-nbvlt\" (UID: \"cf285485-1027-4bdc-bdfa-934ef32e7f5e\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-nbvlt" Jan 26 11:18:05 crc kubenswrapper[4867]: I0126 11:18:05.548979 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"env-overrides\" (UniqueName: \"kubernetes.io/configmap/cf285485-1027-4bdc-bdfa-934ef32e7f5e-env-overrides\") pod \"ovnkube-control-plane-749d76644c-nbvlt\" (UID: \"cf285485-1027-4bdc-bdfa-934ef32e7f5e\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-nbvlt" Jan 26 11:18:05 crc kubenswrapper[4867]: I0126 11:18:05.549059 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/cf285485-1027-4bdc-bdfa-934ef32e7f5e-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-nbvlt\" (UID: \"cf285485-1027-4bdc-bdfa-934ef32e7f5e\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-nbvlt" Jan 26 11:18:05 crc kubenswrapper[4867]: I0126 11:18:05.551755 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://613d61cfb62d18a99623d3e4f89763c04f15484b764f8bb0c7a5e4457a3505d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{
},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa8d05ed772e4273e8962df2230bc8d45db5cfed967165716438d58c54b9e24b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:18:05Z is after 2025-08-24T17:21:41Z" Jan 26 
11:18:05 crc kubenswrapper[4867]: I0126 11:18:05.555051 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/cf285485-1027-4bdc-bdfa-934ef32e7f5e-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-nbvlt\" (UID: \"cf285485-1027-4bdc-bdfa-934ef32e7f5e\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-nbvlt" Jan 26 11:18:05 crc kubenswrapper[4867]: I0126 11:18:05.566927 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-hn8xr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dc37e5d1-ba44-4a54-ac36-ab7cdef17212\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://519d5416896aca5923540078a8bd13f39a190ad4acda3d0bfb3375a5dbfe6b80\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\
\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xrjnj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.
168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T11:17:52Z\\\"}}\" for pod \"openshift-multus\"/\"multus-hn8xr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:18:05Z is after 2025-08-24T17:21:41Z" Jan 26 11:18:05 crc kubenswrapper[4867]: I0126 11:18:05.575085 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kzhnr\" (UniqueName: \"kubernetes.io/projected/cf285485-1027-4bdc-bdfa-934ef32e7f5e-kube-api-access-kzhnr\") pod \"ovnkube-control-plane-749d76644c-nbvlt\" (UID: \"cf285485-1027-4bdc-bdfa-934ef32e7f5e\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-nbvlt" Jan 26 11:18:05 crc kubenswrapper[4867]: I0126 11:18:05.581678 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-nbvlt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf285485-1027-4bdc-bdfa-934ef32e7f5e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:18:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:18:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:18:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:18:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kzhnr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kzhnr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168
.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T11:18:05Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-nbvlt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:18:05Z is after 2025-08-24T17:21:41Z" Jan 26 11:18:05 crc kubenswrapper[4867]: I0126 11:18:05.601992 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e36e94ce-bdbb-4b65-b38a-d591d99ec132\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://60d611d38d0a8a4fafcb8c6a4598878531015a3c9a31bee179693c997be0f5ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e7960050fb97fd034b58d207bdcc7aa794be098102ebc9b50f3b011c2079a292\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://f8e9df6d5bbbaa36f6ffe9e36239da17b2a7d5d85edebc9f108ede9bf7b7c0a2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://08f8529b91df2d7dcbf1fac6018657124b3922dfd4ad68bd7cf315594d6e7f81\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4d1617eeb2f19df288b984f761116ae902263d7ccf8406998d0e323fb7d52390\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-26T11:17:50Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0126 11:17:44.107498 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0126 11:17:44.109064 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1825304579/tls.crt::/tmp/serving-cert-1825304579/tls.key\\\\\\\"\\\\nI0126 11:17:50.303460 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0126 11:17:50.308764 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0126 11:17:50.308805 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0126 11:17:50.308906 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0126 11:17:50.308924 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0126 11:17:50.322907 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0126 11:17:50.322946 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 11:17:50.322953 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 11:17:50.322957 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0126 11:17:50.322960 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0126 11:17:50.322963 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0126 11:17:50.322966 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0126 11:17:50.323261 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0126 11:17:50.330430 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T11:17:33Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d41fab03911c1d3b84f4c34da0d1c8e82c8f2691450b5f9663e31ce329bf04b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:33Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b69d7e45a6059b1e01f80cc4efb536069c91cae05c8912d18fd48f25c6ebf51e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b69d7e45a6059b1e01f80cc4efb536069c91cae05c8912d18fd48f25c6ebf51e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T11:17:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-26T11:17:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T11:17:30Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:18:05Z is after 2025-08-24T17:21:41Z" Jan 26 11:18:05 crc kubenswrapper[4867]: I0126 11:18:05.611490 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:18:05 crc kubenswrapper[4867]: I0126 11:18:05.611529 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:18:05 crc kubenswrapper[4867]: I0126 11:18:05.611540 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:18:05 crc kubenswrapper[4867]: I0126 11:18:05.611555 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:18:05 crc kubenswrapper[4867]: I0126 11:18:05.611566 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:18:05Z","lastTransitionTime":"2026-01-26T11:18:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 11:18:05 crc kubenswrapper[4867]: I0126 11:18:05.614042 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-nbvlt" Jan 26 11:18:05 crc kubenswrapper[4867]: W0126 11:18:05.631578 4867 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcf285485_1027_4bdc_bdfa_934ef32e7f5e.slice/crio-2d3d453620ecc4f875816e3f85568bcd109d703d08b479d0574e4c9131422630 WatchSource:0}: Error finding container 2d3d453620ecc4f875816e3f85568bcd109d703d08b479d0574e4c9131422630: Status 404 returned error can't find the container with id 2d3d453620ecc4f875816e3f85568bcd109d703d08b479d0574e4c9131422630 Jan 26 11:18:05 crc kubenswrapper[4867]: I0126 11:18:05.713847 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:18:05 crc kubenswrapper[4867]: I0126 11:18:05.713898 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:18:05 crc kubenswrapper[4867]: I0126 11:18:05.713915 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:18:05 crc kubenswrapper[4867]: I0126 11:18:05.713938 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:18:05 crc kubenswrapper[4867]: I0126 11:18:05.713955 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:18:05Z","lastTransitionTime":"2026-01-26T11:18:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:18:05 crc kubenswrapper[4867]: I0126 11:18:05.816749 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:18:05 crc kubenswrapper[4867]: I0126 11:18:05.816783 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:18:05 crc kubenswrapper[4867]: I0126 11:18:05.816795 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:18:05 crc kubenswrapper[4867]: I0126 11:18:05.816814 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:18:05 crc kubenswrapper[4867]: I0126 11:18:05.816829 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:18:05Z","lastTransitionTime":"2026-01-26T11:18:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:18:05 crc kubenswrapper[4867]: I0126 11:18:05.829763 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-nbvlt" event={"ID":"cf285485-1027-4bdc-bdfa-934ef32e7f5e","Type":"ContainerStarted","Data":"2d3d453620ecc4f875816e3f85568bcd109d703d08b479d0574e4c9131422630"} Jan 26 11:18:05 crc kubenswrapper[4867]: I0126 11:18:05.845792 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:48Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:18:05Z is after 2025-08-24T17:21:41Z" Jan 26 11:18:05 crc kubenswrapper[4867]: I0126 11:18:05.856890 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-wmdmh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6ad862fa-4af9-49f7-a629-ebf54a83ca45\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://726513fd6aefd76b58a4c8cf08fa4588aadcdb02e8dbe3bb4f23004f11eb29dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6hvh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T11:17:51Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-wmdmh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:18:05Z is after 2025-08-24T17:21:41Z" Jan 26 11:18:05 crc kubenswrapper[4867]: I0126 11:18:05.873312 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-p8ngn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4a3be637-cf04-4c55-bf72-67fdad83cc44\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:52Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:52Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cn6f7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cn6f7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cn6f7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cn6f7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cn6f7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cn6f7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cn6f7\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cn6f7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://388ea9d9185ed0fed9cbb0da08ab9aa6fcf1d81192fe2646cd51b54245c4e34b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://388ea9d9185ed0fed9cbb0da08ab9aa6fcf1d81192fe2646cd51b54245c4e34b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T11:17:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T11:17:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cn6f7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T11:17:52Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-p8ngn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:18:05Z is after 2025-08-24T17:21:41Z" Jan 26 11:18:05 crc kubenswrapper[4867]: I0126 11:18:05.887185 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:48Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:18:05Z is after 2025-08-24T17:21:41Z" Jan 26 11:18:05 crc kubenswrapper[4867]: I0126 11:18:05.898680 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e36e94ce-bdbb-4b65-b38a-d591d99ec132\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://60d611d38d0a8a4fafcb8c6a4598878531015a3c9a31bee179693c997be0f5ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e7960050fb97fd034b58d207bdcc7aa794be098102ebc9b50f3b011c2079a292\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8e9df6d5bbbaa36f6ffe9e36239da17b2a7d5d85edebc9f108ede9bf7b7c0a2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://08f8529b91df2d7dcbf1fac6018657124b3922dfd4ad68bd7cf315594d6e7f81\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4d1617eeb2f19df288b984f761116ae902263d7ccf8406998d0e323fb7d52390\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-26T11:17:50Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0126 11:17:44.107498 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0126 11:17:44.109064 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1825304579/tls.crt::/tmp/serving-cert-1825304579/tls.key\\\\\\\"\\\\nI0126 11:17:50.303460 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0126 11:17:50.308764 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0126 11:17:50.308805 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0126 11:17:50.308906 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0126 11:17:50.308924 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0126 11:17:50.322907 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0126 11:17:50.322946 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 11:17:50.322953 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 11:17:50.322957 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0126 11:17:50.322960 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0126 11:17:50.322963 1 
secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0126 11:17:50.322966 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0126 11:17:50.323261 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0126 11:17:50.330430 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T11:17:33Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d41fab03911c1d3b84f4c34da0d1c8e82c8f2691450b5f9663e31ce329bf04b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:33Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b69d7e45a6059b1e01f80cc4efb536069c91cae05c8912d18fd48f25c6ebf51e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sh
a256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b69d7e45a6059b1e01f80cc4efb536069c91cae05c8912d18fd48f25c6ebf51e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T11:17:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T11:17:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T11:17:30Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:18:05Z is after 2025-08-24T17:21:41Z" Jan 26 11:18:05 crc kubenswrapper[4867]: I0126 11:18:05.909441 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0d7b1047-65d1-4695-b5a5-95ea6a8c3b54\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://237a29b411629c3650250f62fdec7d2c5412763d6cabb2ee0fe8e5e19e320e15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ef0db1149c70e41b7f44d00d43f883d14fcf635d6f34462dd37c8a550a15a4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://45d3dc673a8d20531ae5077a1196a368cac64f6afd15c4eb8f3910ee9bf51317\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc2ddda5781094e59bb54ddf724fa4f948a047b17b01f0185ef651d90a66f36f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-26T11:17:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T11:17:30Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:18:05Z is after 2025-08-24T17:21:41Z" Jan 26 11:18:05 crc kubenswrapper[4867]: I0126 11:18:05.919811 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:18:05 crc kubenswrapper[4867]: I0126 11:18:05.919843 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:18:05 crc kubenswrapper[4867]: I0126 11:18:05.919852 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:18:05 crc kubenswrapper[4867]: I0126 11:18:05.919868 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:18:05 crc kubenswrapper[4867]: I0126 11:18:05.919880 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:18:05Z","lastTransitionTime":"2026-01-26T11:18:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns 
error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 11:18:05 crc kubenswrapper[4867]: I0126 11:18:05.921836 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c572224f36b5a0fc8a86b3e61aa97b96f5bd189d245b31804a11aaa75d04774\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursive
ReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:18:05Z is after 2025-08-24T17:21:41Z" Jan 26 11:18:05 crc kubenswrapper[4867]: I0126 11:18:05.933682 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://613d61cfb62d18a99623d3e4f89763c04f15484b764f8bb0c7a5e4457a3505d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\
\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa8d05ed772e4273e8962df2230bc8d45db5cfed967165716438d58c54b9e24b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:18:05Z is after 2025-08-24T17:21:41Z" Jan 26 11:18:05 crc kubenswrapper[4867]: I0126 11:18:05.952330 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-hn8xr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dc37e5d1-ba44-4a54-ac36-ab7cdef17212\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://519d5416896aca5923540078a8bd13f39a190ad4acda3d0bfb3375a5dbfe6b80\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xrjnj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T11:17:52Z\\\"}}\" for pod \"openshift-multus\"/\"multus-hn8xr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:18:05Z is after 2025-08-24T17:21:41Z" Jan 26 11:18:05 crc kubenswrapper[4867]: I0126 11:18:05.968076 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-nbvlt" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf285485-1027-4bdc-bdfa-934ef32e7f5e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:18:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:18:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:18:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:18:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kzhnr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kzhnr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T11:18:05Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-nbvlt\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:18:05Z is after 2025-08-24T17:21:41Z" Jan 26 11:18:05 crc kubenswrapper[4867]: I0126 11:18:05.984924 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:54Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:54Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8b0139861b48347f9916ca780499486eda856ef7c08cfe6c57201daf085bf61\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},
{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:18:05Z is after 2025-08-24T17:21:41Z" Jan 26 11:18:06 crc kubenswrapper[4867]: I0126 11:18:06.001116 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-9fjlf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d0cb57c7-fd32-41c2-b873-a3f017b9f1b1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:52Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:52Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:52Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhrdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://49541a51fced16e579b4cd10875fe7e660d7674508aba3ea44011642241e6556\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://49541a51fced16e579b4cd10875fe7e660d7674508aba3ea44011642241e6556\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T11:17:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T11:17:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhrdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://de0898b43ff7857dd40569f0552c8802c9bb5ecdf3cbd23f552dfb402a02044c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://de0898b43ff7857dd40569f0552c8802c9bb5ecdf3cbd23f552dfb402a02044c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T11:17:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T11:17:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhrdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f299de50c4b1a26c4ed82f6f7b17c063e00b8a9de478fa0adafb4abb50195c5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f299de50c4b1a26c4ed82f6f7b17c063e00b8a9de478fa0adafb4abb50195c5f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T11:17:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T11:17:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhrdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5bf54cdee66861e0419af5e4cabd6e8410b771df717f2e958f5eda8d14c47862\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5bf54cdee66861e0419af5e4cabd6e8410b771df717f2e958f5eda8d14c47862\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T11:17:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T11:17:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhrdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d802b7e0dce47eace09ec80bf6299fbe57b588891ebab7f34de835f6a7314ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3d802b7e0dce47eace09ec80bf6299fbe57b588891ebab7f34de835f6a7314ac\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T11:17:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T11:17:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhrdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e77fa1081a2d8d26fe07f92196b85dd5b8eb28583ce5489e2f5cbcfc6fff8780\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"
ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:18:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhrdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T11:17:52Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-9fjlf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:18:05Z is after 2025-08-24T17:21:41Z" Jan 26 11:18:06 crc kubenswrapper[4867]: I0126 11:18:06.011947 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-nxkwj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"53ae46dc-30ef-4dfd-b80e-bacd7542634f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a6b647bab472836bbf6aebd01d20d186c5a3fb95f20cc9f44ec837d93c7df617\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b5h2x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T11:17:54Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-nxkwj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:18:06Z is after 2025-08-24T17:21:41Z" Jan 26 11:18:06 crc kubenswrapper[4867]: I0126 11:18:06.022272 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:18:06 crc kubenswrapper[4867]: I0126 11:18:06.022332 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:18:06 crc kubenswrapper[4867]: I0126 11:18:06.022346 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:18:06 crc kubenswrapper[4867]: I0126 11:18:06.022367 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:18:06 crc kubenswrapper[4867]: I0126 11:18:06.022381 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:18:06Z","lastTransitionTime":"2026-01-26T11:18:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:18:06 crc kubenswrapper[4867]: I0126 11:18:06.029085 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:48Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:18:06Z is after 2025-08-24T17:21:41Z" Jan 26 11:18:06 crc kubenswrapper[4867]: I0126 11:18:06.041662 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-g6cth" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"115cad9f-057f-4e63-b408-8fa7a358a191\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ff5997c5ecb7b6a4257e45de25c7b8f3cbe39cc5efe49cf2e2baff5447a947b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4wqzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a97a794991491a7b7ac8c0d20d0fa2f4f36c8ba8
82962b5c203933431d324105\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4wqzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T11:17:52Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-g6cth\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:18:06Z is after 2025-08-24T17:21:41Z" Jan 26 11:18:06 crc kubenswrapper[4867]: E0126 11:18:06.053769 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 11:18:22.053730237 +0000 UTC m=+51.752305187 (durationBeforeRetry 16s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 11:18:06 crc kubenswrapper[4867]: I0126 11:18:06.053879 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 11:18:06 crc kubenswrapper[4867]: I0126 11:18:06.054788 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 11:18:06 crc kubenswrapper[4867]: I0126 11:18:06.055129 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 11:18:06 crc kubenswrapper[4867]: E0126 11:18:06.055422 4867 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 26 11:18:06 crc 
kubenswrapper[4867]: E0126 11:18:06.055581 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-26 11:18:22.055548487 +0000 UTC m=+51.754123407 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 26 11:18:06 crc kubenswrapper[4867]: E0126 11:18:06.055673 4867 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 26 11:18:06 crc kubenswrapper[4867]: E0126 11:18:06.055868 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-26 11:18:22.055841934 +0000 UTC m=+51.754417084 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 26 11:18:06 crc kubenswrapper[4867]: I0126 11:18:06.078460 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4fbb2033-c5c9-48d1-971f-252b8982e64c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b260f2517962ffaecebf86c2b72c91486b825ca898f907378e8f8cea16d1db92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPat
h\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6843bd212ce4f17d2ae4d53441d56f7be28f8f49c8994458d611159101e193a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2aba77c7c6a2f5450fa60356c3eccf816726f0074d65a1d5805c4278d31157c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerI
D\\\":\\\"cri-o://8e941db67a8021f5eb4eb6e395c2369d052a4cde25d67eb1885768a064185d8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc9e90f1b0e6c09e4595a7e450325550012dc52c6a6fc4a95d14b37c0f939ce7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e22739a3c06bddfda493f25aca23cf47d71c8bde27a6b4bb505592b42350752\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev
@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0e22739a3c06bddfda493f25aca23cf47d71c8bde27a6b4bb505592b42350752\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T11:17:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T11:17:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://80b44caec3547caebca126742ca337ad1851edc3e9474127528a9e5184b5ec23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://80b44caec3547caebca126742ca337ad1851edc3e9474127528a9e5184b5ec23\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T11:17:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T11:17:32Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://83f7cf77acd26e7af095c4a5432efde85eef6dd5e434141cb5d6abd12770968e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0
,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://83f7cf77acd26e7af095c4a5432efde85eef6dd5e434141cb5d6abd12770968e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T11:17:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T11:17:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T11:17:30Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:18:06Z is after 2025-08-24T17:21:41Z" Jan 26 11:18:06 crc kubenswrapper[4867]: I0126 11:18:06.125031 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:18:06 crc kubenswrapper[4867]: I0126 11:18:06.125084 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:18:06 crc kubenswrapper[4867]: I0126 11:18:06.125099 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:18:06 crc kubenswrapper[4867]: I0126 11:18:06.125119 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:18:06 crc kubenswrapper[4867]: I0126 11:18:06.125132 4867 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:18:06Z","lastTransitionTime":"2026-01-26T11:18:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 11:18:06 crc kubenswrapper[4867]: I0126 11:18:06.156281 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 11:18:06 crc kubenswrapper[4867]: E0126 11:18:06.156574 4867 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 26 11:18:06 crc kubenswrapper[4867]: E0126 11:18:06.156624 4867 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 26 11:18:06 crc kubenswrapper[4867]: E0126 11:18:06.156648 4867 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 11:18:06 crc kubenswrapper[4867]: E0126 11:18:06.156748 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. 
No retries permitted until 2026-01-26 11:18:22.156721572 +0000 UTC m=+51.855296522 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 11:18:06 crc kubenswrapper[4867]: I0126 11:18:06.229342 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:18:06 crc kubenswrapper[4867]: I0126 11:18:06.229411 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:18:06 crc kubenswrapper[4867]: I0126 11:18:06.229430 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:18:06 crc kubenswrapper[4867]: I0126 11:18:06.229459 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:18:06 crc kubenswrapper[4867]: I0126 11:18:06.229477 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:18:06Z","lastTransitionTime":"2026-01-26T11:18:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:18:06 crc kubenswrapper[4867]: I0126 11:18:06.257290 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 11:18:06 crc kubenswrapper[4867]: E0126 11:18:06.257652 4867 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 26 11:18:06 crc kubenswrapper[4867]: E0126 11:18:06.257727 4867 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 26 11:18:06 crc kubenswrapper[4867]: E0126 11:18:06.257758 4867 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 11:18:06 crc kubenswrapper[4867]: E0126 11:18:06.257894 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-26 11:18:22.257852517 +0000 UTC m=+51.956427587 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 11:18:06 crc kubenswrapper[4867]: I0126 11:18:06.333524 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:18:06 crc kubenswrapper[4867]: I0126 11:18:06.333590 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:18:06 crc kubenswrapper[4867]: I0126 11:18:06.333606 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:18:06 crc kubenswrapper[4867]: I0126 11:18:06.333630 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:18:06 crc kubenswrapper[4867]: I0126 11:18:06.333648 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:18:06Z","lastTransitionTime":"2026-01-26T11:18:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:18:06 crc kubenswrapper[4867]: I0126 11:18:06.437334 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:18:06 crc kubenswrapper[4867]: I0126 11:18:06.437409 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:18:06 crc kubenswrapper[4867]: I0126 11:18:06.437439 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:18:06 crc kubenswrapper[4867]: I0126 11:18:06.437489 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:18:06 crc kubenswrapper[4867]: I0126 11:18:06.437517 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:18:06Z","lastTransitionTime":"2026-01-26T11:18:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:18:06 crc kubenswrapper[4867]: I0126 11:18:06.528535 4867 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-20 12:59:47.73915965 +0000 UTC Jan 26 11:18:06 crc kubenswrapper[4867]: I0126 11:18:06.541277 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:18:06 crc kubenswrapper[4867]: I0126 11:18:06.541347 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:18:06 crc kubenswrapper[4867]: I0126 11:18:06.541370 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:18:06 crc kubenswrapper[4867]: I0126 11:18:06.541399 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:18:06 crc kubenswrapper[4867]: I0126 11:18:06.541421 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:18:06Z","lastTransitionTime":"2026-01-26T11:18:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 11:18:06 crc kubenswrapper[4867]: I0126 11:18:06.564086 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 11:18:06 crc kubenswrapper[4867]: I0126 11:18:06.564286 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 11:18:06 crc kubenswrapper[4867]: E0126 11:18:06.564365 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 11:18:06 crc kubenswrapper[4867]: I0126 11:18:06.564422 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 11:18:06 crc kubenswrapper[4867]: E0126 11:18:06.564603 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 11:18:06 crc kubenswrapper[4867]: E0126 11:18:06.564806 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 11:18:06 crc kubenswrapper[4867]: I0126 11:18:06.645053 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:18:06 crc kubenswrapper[4867]: I0126 11:18:06.645126 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:18:06 crc kubenswrapper[4867]: I0126 11:18:06.645164 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:18:06 crc kubenswrapper[4867]: I0126 11:18:06.645207 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:18:06 crc kubenswrapper[4867]: I0126 11:18:06.645270 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:18:06Z","lastTransitionTime":"2026-01-26T11:18:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:18:06 crc kubenswrapper[4867]: I0126 11:18:06.748075 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:18:06 crc kubenswrapper[4867]: I0126 11:18:06.748119 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:18:06 crc kubenswrapper[4867]: I0126 11:18:06.748130 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:18:06 crc kubenswrapper[4867]: I0126 11:18:06.748147 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:18:06 crc kubenswrapper[4867]: I0126 11:18:06.748163 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:18:06Z","lastTransitionTime":"2026-01-26T11:18:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 11:18:06 crc kubenswrapper[4867]: I0126 11:18:06.794022 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/network-metrics-daemon-nmdmx"] Jan 26 11:18:06 crc kubenswrapper[4867]: I0126 11:18:06.794748 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-nmdmx" Jan 26 11:18:06 crc kubenswrapper[4867]: E0126 11:18:06.794842 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-nmdmx" podUID="ed024510-edc6-4306-b54b-63facba64419" Jan 26 11:18:06 crc kubenswrapper[4867]: I0126 11:18:06.811096 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e36e94ce-bdbb-4b65-b38a-d591d99ec132\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://60d611d38d0a8a4fafcb8c6a4598878531015a3c9a31bee179693c997be0f5ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e7960050fb97fd034b58d207bdcc7aa794be098102ebc9b50f3b011c2079a292\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://f8e9df6d5bbbaa36f6ffe9e36239da17b2a7d5d85edebc9f108ede9bf7b7c0a2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://08f8529b91df2d7dcbf1fac6018657124b3922dfd4ad68bd7cf315594d6e7f81\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4d1617eeb2f19df288b984f761116ae902263d7ccf8406998d0e323fb7d52390\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-26T11:17:50Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0126 11:17:44.107498 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0126 11:17:44.109064 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1825304579/tls.crt::/tmp/serving-cert-1825304579/tls.key\\\\\\\"\\\\nI0126 11:17:50.303460 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0126 11:17:50.308764 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0126 11:17:50.308805 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0126 11:17:50.308906 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0126 11:17:50.308924 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0126 11:17:50.322907 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0126 11:17:50.322946 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 11:17:50.322953 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 11:17:50.322957 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0126 11:17:50.322960 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0126 11:17:50.322963 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0126 11:17:50.322966 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0126 11:17:50.323261 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0126 11:17:50.330430 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T11:17:33Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d41fab03911c1d3b84f4c34da0d1c8e82c8f2691450b5f9663e31ce329bf04b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:33Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b69d7e45a6059b1e01f80cc4efb536069c91cae05c8912d18fd48f25c6ebf51e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b69d7e45a6059b1e01f80cc4efb536069c91cae05c8912d18fd48f25c6ebf51e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T11:17:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-26T11:17:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T11:17:30Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:18:06Z is after 2025-08-24T17:21:41Z" Jan 26 11:18:06 crc kubenswrapper[4867]: I0126 11:18:06.835941 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0d7b1047-65d1-4695-b5a5-95ea6a8c3b54\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://237a29b411629c3650250f62fdec7d2c5412763d6cabb2ee0fe8e5e19e320e15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9a
d6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ef0db1149c70e41b7f44d00d43f883d14fcf635d6f34462dd37c8a550a15a4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://45d3dc673a8d20531ae5077a1196a368cac64f6afd15c4eb8f3910ee9bf51317\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true
,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc2ddda5781094e59bb54ddf724fa4f948a047b17b01f0185ef651d90a66f36f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T11:17:30Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:18:06Z is after 2025-08-24T17:21:41Z" Jan 26 11:18:06 crc kubenswrapper[4867]: I0126 11:18:06.839548 4867 generic.go:334] "Generic (PLEG): container finished" podID="d0cb57c7-fd32-41c2-b873-a3f017b9f1b1" 
containerID="e77fa1081a2d8d26fe07f92196b85dd5b8eb28583ce5489e2f5cbcfc6fff8780" exitCode=0 Jan 26 11:18:06 crc kubenswrapper[4867]: I0126 11:18:06.839639 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-9fjlf" event={"ID":"d0cb57c7-fd32-41c2-b873-a3f017b9f1b1","Type":"ContainerDied","Data":"e77fa1081a2d8d26fe07f92196b85dd5b8eb28583ce5489e2f5cbcfc6fff8780"} Jan 26 11:18:06 crc kubenswrapper[4867]: I0126 11:18:06.845917 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-p8ngn" event={"ID":"4a3be637-cf04-4c55-bf72-67fdad83cc44","Type":"ContainerStarted","Data":"9ccaf33118999fa5bccb1930803a8cea6557462756c37135056ea8a5ab813003"} Jan 26 11:18:06 crc kubenswrapper[4867]: I0126 11:18:06.850547 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:18:06 crc kubenswrapper[4867]: I0126 11:18:06.850574 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:18:06 crc kubenswrapper[4867]: I0126 11:18:06.850583 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:18:06 crc kubenswrapper[4867]: I0126 11:18:06.850598 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:18:06 crc kubenswrapper[4867]: I0126 11:18:06.850610 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:18:06Z","lastTransitionTime":"2026-01-26T11:18:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:18:06 crc kubenswrapper[4867]: I0126 11:18:06.863249 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lvcgj\" (UniqueName: \"kubernetes.io/projected/ed024510-edc6-4306-b54b-63facba64419-kube-api-access-lvcgj\") pod \"network-metrics-daemon-nmdmx\" (UID: \"ed024510-edc6-4306-b54b-63facba64419\") " pod="openshift-multus/network-metrics-daemon-nmdmx" Jan 26 11:18:06 crc kubenswrapper[4867]: I0126 11:18:06.863311 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ed024510-edc6-4306-b54b-63facba64419-metrics-certs\") pod \"network-metrics-daemon-nmdmx\" (UID: \"ed024510-edc6-4306-b54b-63facba64419\") " pod="openshift-multus/network-metrics-daemon-nmdmx" Jan 26 11:18:06 crc kubenswrapper[4867]: I0126 11:18:06.874196 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c572224f36b5a0fc8a86b3e61aa97b96f5bd189d245b31804a11aaa75d04774\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-26T11:18:06Z is after 2025-08-24T17:21:41Z" Jan 26 11:18:06 crc kubenswrapper[4867]: I0126 11:18:06.905441 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://613d61cfb62d18a99623d3e4f89763c04f15484b764f8bb0c7a5e4457a3505d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://fa8d05ed772e4273e8962df2230bc8d45db5cfed967165716438d58c54b9e24b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:18:06Z is after 2025-08-24T17:21:41Z" Jan 26 11:18:06 crc kubenswrapper[4867]: I0126 11:18:06.919852 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-hn8xr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dc37e5d1-ba44-4a54-ac36-ab7cdef17212\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://519d5416896aca5923540078a8bd13f39a190ad4acda3d0bfb3375a5dbfe6b80\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xrjnj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T11:17:52Z\\\"}}\" for pod \"openshift-multus\"/\"multus-hn8xr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:18:06Z is after 2025-08-24T17:21:41Z" Jan 26 11:18:06 crc kubenswrapper[4867]: I0126 11:18:06.933160 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-nbvlt" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf285485-1027-4bdc-bdfa-934ef32e7f5e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:18:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:18:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:18:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:18:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kzhnr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kzhnr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T11:18:05Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-nbvlt\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:18:06Z is after 2025-08-24T17:21:41Z" Jan 26 11:18:06 crc kubenswrapper[4867]: I0126 11:18:06.952717 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-9fjlf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d0cb57c7-fd32-41c2-b873-a3f017b9f1b1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:52Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:52Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:52Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhrdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://49541a51fced16e579b4cd10875fe7e660d7674508aba3ea44011642241e6556\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://49541a51fced16e579b4cd10875fe7e660d7674508aba3ea44011642241e6556\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T11:17:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T11:17:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhrdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://de0898b43ff7857dd40569f0552c8802c9bb5ecdf3cbd23f552dfb402a02044c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://de0898b43ff7857dd40569f0552c8802c9bb5ecdf3cbd23f552dfb402a02044c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T11:17:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T11:17:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhrdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f299de50c4b1a26c4ed82f6f7b17c063e00b8a9de478fa0adafb4abb50195c5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f299de50c4b1a26c4ed82f6f7b17c063e00b8a9de478fa0adafb4abb50195c5f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T11:17:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T11:17:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhrdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5bf54cdee66861e0419af5e4cabd6e8410b771df717f2e958f5eda8d14c47862\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5bf54cdee66861e0419af5e4cabd6e8410b771df717f2e958f5eda8d14c47862\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T11:17:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T11:17:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhrdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d802b7e0dce47eace09ec80bf6299fbe57b588891ebab7f34de835f6a7314ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3d802b7e0dce47eace09ec80bf6299fbe57b588891ebab7f34de835f6a7314ac\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T11:17:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T11:17:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhrdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e77fa1081a2d8d26fe07f92196b85dd5b8eb28583ce5489e2f5cbcfc6fff8780\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"
ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:18:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhrdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T11:17:52Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-9fjlf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:18:06Z is after 2025-08-24T17:21:41Z" Jan 26 11:18:06 crc kubenswrapper[4867]: I0126 11:18:06.954789 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:18:06 crc kubenswrapper[4867]: I0126 11:18:06.954839 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:18:06 crc kubenswrapper[4867]: I0126 11:18:06.954850 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:18:06 crc kubenswrapper[4867]: I0126 11:18:06.954871 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:18:06 crc kubenswrapper[4867]: I0126 11:18:06.954886 4867 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:18:06Z","lastTransitionTime":"2026-01-26T11:18:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 11:18:06 crc kubenswrapper[4867]: I0126 11:18:06.964746 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ed024510-edc6-4306-b54b-63facba64419-metrics-certs\") pod \"network-metrics-daemon-nmdmx\" (UID: \"ed024510-edc6-4306-b54b-63facba64419\") " pod="openshift-multus/network-metrics-daemon-nmdmx" Jan 26 11:18:06 crc kubenswrapper[4867]: I0126 11:18:06.964899 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lvcgj\" (UniqueName: \"kubernetes.io/projected/ed024510-edc6-4306-b54b-63facba64419-kube-api-access-lvcgj\") pod \"network-metrics-daemon-nmdmx\" (UID: \"ed024510-edc6-4306-b54b-63facba64419\") " pod="openshift-multus/network-metrics-daemon-nmdmx" Jan 26 11:18:06 crc kubenswrapper[4867]: E0126 11:18:06.964983 4867 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 26 11:18:06 crc kubenswrapper[4867]: E0126 11:18:06.965088 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ed024510-edc6-4306-b54b-63facba64419-metrics-certs podName:ed024510-edc6-4306-b54b-63facba64419 nodeName:}" failed. No retries permitted until 2026-01-26 11:18:07.465058891 +0000 UTC m=+37.163633791 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/ed024510-edc6-4306-b54b-63facba64419-metrics-certs") pod "network-metrics-daemon-nmdmx" (UID: "ed024510-edc6-4306-b54b-63facba64419") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 26 11:18:06 crc kubenswrapper[4867]: I0126 11:18:06.976903 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-nxkwj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"53ae46dc-30ef-4dfd-b80e-bacd7542634f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a6b647bab472836bbf6aebd01d20d186c5a3fb95f20cc9f44ec837d93c7df617\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-26T11:17:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b5h2x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T11:17:54Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-nxkwj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:18:06Z is after 2025-08-24T17:21:41Z" Jan 26 11:18:06 crc kubenswrapper[4867]: I0126 11:18:06.991334 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lvcgj\" (UniqueName: \"kubernetes.io/projected/ed024510-edc6-4306-b54b-63facba64419-kube-api-access-lvcgj\") pod \"network-metrics-daemon-nmdmx\" (UID: \"ed024510-edc6-4306-b54b-63facba64419\") " pod="openshift-multus/network-metrics-daemon-nmdmx" Jan 26 11:18:06 crc kubenswrapper[4867]: I0126 11:18:06.992353 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-nmdmx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed024510-edc6-4306-b54b-63facba64419\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:18:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:18:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:18:06Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:18:06Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lvcgj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lvcgj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T11:18:06Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-nmdmx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:18:06Z is after 2025-08-24T17:21:41Z" Jan 26 11:18:07 crc 
kubenswrapper[4867]: I0126 11:18:07.006731 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:48Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:18:07Z is after 2025-08-24T17:21:41Z" Jan 26 11:18:07 crc kubenswrapper[4867]: I0126 11:18:07.057823 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:18:07 crc kubenswrapper[4867]: I0126 11:18:07.057885 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:18:07 crc kubenswrapper[4867]: I0126 11:18:07.057898 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:18:07 crc kubenswrapper[4867]: I0126 11:18:07.057919 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:18:07 crc kubenswrapper[4867]: I0126 11:18:07.057937 4867 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:18:07Z","lastTransitionTime":"2026-01-26T11:18:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 11:18:07 crc kubenswrapper[4867]: I0126 11:18:07.078520 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:54Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:54Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8b0139861b48347f9916ca780499486eda856ef7c08cfe6c57201daf085bf61\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\
"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:18:07Z is after 2025-08-24T17:21:41Z" Jan 26 11:18:07 crc kubenswrapper[4867]: I0126 11:18:07.103129 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4fbb2033-c5c9-48d1-971f-252b8982e64c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b260f2517962ffaecebf86c2b72c91486b825ca898f907378e8f8cea16d1db92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"ima
geID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6843bd212ce4f17d2ae4d53441d56f7be28f8f49c8994458d611159101e193a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2aba77c7c6a2f5450fa60356c3eccf816726f0074d65a1d5805c4278d31157c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"
etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e941db67a8021f5eb4eb6e395c2369d052a4cde25d67eb1885768a064185d8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc9e90f1b0e6c09e4595a7e450325550012dc52c6a6fc4a95d14b37c0f939ce7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\
\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e22739a3c06bddfda493f25aca23cf47d71c8bde27a6b4bb505592b42350752\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0e22739a3c06bddfda493f25aca23cf47d71c8bde27a6b4bb505592b42350752\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T11:17:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T11:17:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://80b44caec3547caebca126742ca337ad1851edc3e9474127528a9e5184b5ec23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://80b44caec3547caebca126742ca337ad1851edc3e9474127528a9e5184b5ec23\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T11:17:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T11:17:32Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://83f7cf77acd26e7af095c4a5432efde85eef6dd5e434141cb5d6abd12770968e\\\",\\\"image\\\":\\\"quay.io
/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://83f7cf77acd26e7af095c4a5432efde85eef6dd5e434141cb5d6abd12770968e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T11:17:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T11:17:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T11:17:30Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:18:07Z is after 2025-08-24T17:21:41Z" Jan 26 11:18:07 crc kubenswrapper[4867]: I0126 11:18:07.115831 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-g6cth" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"115cad9f-057f-4e63-b408-8fa7a358a191\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ff5997c5ecb7b6a4257e45de25c7b8f3cbe39cc5efe49cf2e2baff5447a947b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4wqzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a97a794991491a7b7ac8c0d20d0fa2f4f36c8ba8
82962b5c203933431d324105\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4wqzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T11:17:52Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-g6cth\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:18:07Z is after 2025-08-24T17:21:41Z" Jan 26 11:18:07 crc kubenswrapper[4867]: I0126 11:18:07.128173 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-wmdmh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6ad862fa-4af9-49f7-a629-ebf54a83ca45\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://726513fd6aefd76b58a4c8cf08fa4588aadcdb02e8dbe3bb4f23004f11eb29dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6hvh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T11:17:51Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-wmdmh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:18:07Z is after 2025-08-24T17:21:41Z" Jan 26 11:18:07 crc kubenswrapper[4867]: I0126 11:18:07.150295 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-p8ngn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4a3be637-cf04-4c55-bf72-67fdad83cc44\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:52Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:52Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cn6f7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cn6f7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cn6f7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cn6f7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cn6f7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cn6f7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cn6f7\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cn6f7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://388ea9d9185ed0fed9cbb0da08ab9aa6fcf1d81192fe2646cd51b54245c4e34b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://388ea9d9185ed0fed9cbb0da08ab9aa6fcf1d81192fe2646cd51b54245c4e34b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T11:17:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T11:17:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cn6f7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T11:17:52Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-p8ngn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:18:07Z is after 2025-08-24T17:21:41Z" Jan 26 11:18:07 crc kubenswrapper[4867]: I0126 11:18:07.161260 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:18:07 crc kubenswrapper[4867]: I0126 11:18:07.161334 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:18:07 crc kubenswrapper[4867]: I0126 11:18:07.161355 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:18:07 crc kubenswrapper[4867]: I0126 11:18:07.161387 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:18:07 crc kubenswrapper[4867]: I0126 11:18:07.161409 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:18:07Z","lastTransitionTime":"2026-01-26T11:18:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:18:07 crc kubenswrapper[4867]: I0126 11:18:07.163924 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:48Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:18:07Z is after 2025-08-24T17:21:41Z" Jan 26 11:18:07 crc kubenswrapper[4867]: I0126 11:18:07.177167 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:48Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:18:07Z is after 2025-08-24T17:21:41Z" Jan 26 11:18:07 crc kubenswrapper[4867]: I0126 11:18:07.204533 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4fbb2033-c5c9-48d1-971f-252b8982e64c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b260f2517962ffaecebf86c2b72c91486b825ca898f907378e8f8cea16d1db92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6843bd212ce4f17d2ae4d53441d56f7be28f8f49c8994458d611159101e193a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2aba77c7c6a2f5450fa60356c3eccf816726f0074d65a1d5805c4278d31157c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e941db67a8021f5eb4eb6e395c2369d052a4cde25d67eb1885768a064185d8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc9e90f1b0e6c09e4595a7e450325550012dc52c6a6fc4a95d14b37c0f939ce7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e22739a3c06bddfda493f25aca23cf47d71c8bde27a6b4bb505592b42350752\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0e22739a3c06bddfda493f25aca23cf47d71c8bde27a6b4bb505592b42350752\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-26T11:17:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T11:17:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://80b44caec3547caebca126742ca337ad1851edc3e9474127528a9e5184b5ec23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://80b44caec3547caebca126742ca337ad1851edc3e9474127528a9e5184b5ec23\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T11:17:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T11:17:32Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://83f7cf77acd26e7af095c4a5432efde85eef6dd5e434141cb5d6abd12770968e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://83f7cf77acd26e7af095c4a5432efde85eef6dd5e434141cb5d6abd12770968e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T11:17:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T11:17:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T11:17:30Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:18:07Z is after 2025-08-24T17:21:41Z" Jan 26 11:18:07 crc kubenswrapper[4867]: I0126 11:18:07.217265 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-g6cth" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"115cad9f-057f-4e63-b408-8fa7a358a191\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ff5997c5ecb7b6a4257e45de25c7b8f3cbe39cc5efe49cf2e2baff5447a947b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c4
2745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4wqzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a97a794991491a7b7ac8c0d20d0fa2f4f36c8ba882962b5c203933431d324105\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4wqzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T11:17:52Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-g6cth\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:18:07Z is after 2025-08-24T17:21:41Z" Jan 26 11:18:07 crc kubenswrapper[4867]: I0126 11:18:07.230511 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:48Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:18:07Z is after 2025-08-24T17:21:41Z" Jan 26 11:18:07 crc kubenswrapper[4867]: I0126 11:18:07.246195 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:48Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:18:07Z is after 2025-08-24T17:21:41Z" Jan 26 11:18:07 crc kubenswrapper[4867]: I0126 11:18:07.260566 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-wmdmh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6ad862fa-4af9-49f7-a629-ebf54a83ca45\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://726513fd6aefd76b58a4c8cf08fa4588aadcdb02e8dbe3bb4f23004f11eb29dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6hvh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T11:17:51Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-wmdmh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:18:07Z is after 2025-08-24T17:21:41Z" Jan 26 11:18:07 crc kubenswrapper[4867]: I0126 11:18:07.263289 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:18:07 crc kubenswrapper[4867]: I0126 11:18:07.263331 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:18:07 crc kubenswrapper[4867]: I0126 11:18:07.263344 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:18:07 crc kubenswrapper[4867]: I0126 11:18:07.263361 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:18:07 crc kubenswrapper[4867]: I0126 11:18:07.263372 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:18:07Z","lastTransitionTime":"2026-01-26T11:18:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:18:07 crc kubenswrapper[4867]: I0126 11:18:07.284437 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-p8ngn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4a3be637-cf04-4c55-bf72-67fdad83cc44\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:52Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:52Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cn6f7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cn6f7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cn6f7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cn6f7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cn6f7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cn6f7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cn6f7\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cn6f7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://388ea9d9185ed0fed9cbb0da08ab9aa6fcf1d81192fe2646cd51b54245c4e34b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://388ea9d9185ed0fed9cbb0da08ab9aa6fcf1d81192fe2646cd51b54245c4e34b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T11:17:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T11:17:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cn6f7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T11:17:52Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-p8ngn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:18:07Z is after 2025-08-24T17:21:41Z" Jan 26 11:18:07 crc kubenswrapper[4867]: I0126 11:18:07.301071 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://613d61cfb62d18a99623d3e4f89763c04f15484b764f8bb0c7a5e4457a3505d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":
{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa8d05ed772e4273e8962df2230bc8d45db5cfed967165716438d58c54b9e24b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:18:07Z is after 2025-08-24T17:21:41Z" Jan 26 
11:18:07 crc kubenswrapper[4867]: I0126 11:18:07.319142 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-hn8xr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dc37e5d1-ba44-4a54-ac36-ab7cdef17212\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://519d5416896aca5923540078a8bd13f39a190ad4acda3d0bfb3375a5dbfe6b80\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"
mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xrjnj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T11:17:52Z\\\"}}\" for pod \"openshift-multus\"/\"multus-hn8xr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:18:07Z is after 2025-08-24T17:21:41Z" Jan 26 
11:18:07 crc kubenswrapper[4867]: I0126 11:18:07.337634 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-nbvlt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf285485-1027-4bdc-bdfa-934ef32e7f5e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:18:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:18:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:18:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:18:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kzhnr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kzhnr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T11:18:05Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-nbvlt\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:18:07Z is after 2025-08-24T17:21:41Z" Jan 26 11:18:07 crc kubenswrapper[4867]: I0126 11:18:07.353358 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e36e94ce-bdbb-4b65-b38a-d591d99ec132\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://60d611d38d0a8a4fafcb8c6a4598878531015a3c9a31bee179693c997be0f5ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e7960050fb97fd034b58d207bdcc7aa794be098102ebc9b50f3b011c2079a292\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://f8e9df6d5bbbaa36f6ffe9e36239da17b2a7d5d85edebc9f108ede9bf7b7c0a2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://08f8529b91df2d7dcbf1fac6018657124b3922dfd4ad68bd7cf315594d6e7f81\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4d1617eeb2f19df288b984f761116ae902263d7ccf8406998d0e323fb7d52390\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-26T11:17:50Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0126 11:17:44.107498 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0126 11:17:44.109064 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1825304579/tls.crt::/tmp/serving-cert-1825304579/tls.key\\\\\\\"\\\\nI0126 11:17:50.303460 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0126 11:17:50.308764 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0126 11:17:50.308805 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0126 11:17:50.308906 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0126 11:17:50.308924 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0126 11:17:50.322907 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0126 11:17:50.322946 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 11:17:50.322953 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 11:17:50.322957 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0126 11:17:50.322960 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0126 11:17:50.322963 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0126 11:17:50.322966 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0126 11:17:50.323261 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0126 11:17:50.330430 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T11:17:33Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d41fab03911c1d3b84f4c34da0d1c8e82c8f2691450b5f9663e31ce329bf04b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:33Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b69d7e45a6059b1e01f80cc4efb536069c91cae05c8912d18fd48f25c6ebf51e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b69d7e45a6059b1e01f80cc4efb536069c91cae05c8912d18fd48f25c6ebf51e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T11:17:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-26T11:17:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T11:17:30Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:18:07Z is after 2025-08-24T17:21:41Z" Jan 26 11:18:07 crc kubenswrapper[4867]: I0126 11:18:07.365793 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:18:07 crc kubenswrapper[4867]: I0126 11:18:07.365851 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:18:07 crc kubenswrapper[4867]: I0126 11:18:07.365866 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:18:07 crc kubenswrapper[4867]: I0126 11:18:07.365897 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:18:07 crc kubenswrapper[4867]: I0126 11:18:07.365947 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:18:07Z","lastTransitionTime":"2026-01-26T11:18:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:18:07 crc kubenswrapper[4867]: I0126 11:18:07.368695 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0d7b1047-65d1-4695-b5a5-95ea6a8c3b54\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://237a29b411629c3650250f62fdec7d2c5412763d6cabb2ee0fe8e5e19e320e15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ef0db1149c
70e41b7f44d00d43f883d14fcf635d6f34462dd37c8a550a15a4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://45d3dc673a8d20531ae5077a1196a368cac64f6afd15c4eb8f3910ee9bf51317\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc2ddda5781094e59bb54ddf724fa4f948a047b17b01f0185ef651d90a66f36f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:850
6ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T11:17:30Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:18:07Z is after 2025-08-24T17:21:41Z" Jan 26 11:18:07 crc kubenswrapper[4867]: I0126 11:18:07.382448 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c572224f36b5a0fc8a86b3e61aa97b96f5bd189d245b31804a11aaa75d04774\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-26T11:18:07Z is after 2025-08-24T17:21:41Z" Jan 26 11:18:07 crc kubenswrapper[4867]: I0126 11:18:07.392505 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:48Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:18:07Z is after 2025-08-24T17:21:41Z" Jan 26 11:18:07 crc kubenswrapper[4867]: I0126 11:18:07.403981 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:54Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:54Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8b0139861b48347f9916ca780499486eda856ef7c08cfe6c57201daf085bf61\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-26T11:18:07Z is after 2025-08-24T17:21:41Z" Jan 26 11:18:07 crc kubenswrapper[4867]: I0126 11:18:07.419777 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-9fjlf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d0cb57c7-fd32-41c2-b873-a3f017b9f1b1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:18:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:52Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:52Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhrdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://49541a51fced16e579b4cd10875fe7e660d7674508aba3ea44011642241e6556\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://49541a51fced16e579b4cd10875fe7e660d7674508aba3ea44011642241e6556\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T11:17:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T11:17:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhrdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://de0898b43ff7857dd40569f0552c8802c9bb5ecdf3cbd23f552dfb402a02044c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://de0898b43ff7857dd40569f0552c8802c9bb5ecdf3cbd23f552dfb402a02044c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T11:17:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T11:17:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhrdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f299de50c4b1a26c4ed82f6f7b17c063e00b8a9de478fa0adafb4abb50195c5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f299de50c4b1a26c4ed82f6f7b17c063e00b8a9de478fa0adafb4abb50195c5f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T11:17:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T11:17:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhrdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5bf54cdee66861e0419af5e4cabd6e8410b771df717f2e958f5eda8d14c47862\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5bf54cdee66861e0419af5e4cabd6e8410b771df717f2e958f5eda8d14c47862\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T11:17:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T11:17:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhrdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d802b7e0dce47eace09ec80bf6299fbe57b588891ebab7f34de835f6a7314ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3d802b7e0dce47eace09ec80bf6299fbe57b588891ebab7f34de835f6a7314ac\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T11:17:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T11:17:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhrdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e77fa1081a2d8d26fe07f92196b85dd5b8eb28583ce5489e2f5cbcfc6fff8780\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"
ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e77fa1081a2d8d26fe07f92196b85dd5b8eb28583ce5489e2f5cbcfc6fff8780\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T11:18:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T11:18:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhrdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T11:17:52Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-9fjlf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:18:07Z is after 2025-08-24T17:21:41Z" Jan 26 11:18:07 crc kubenswrapper[4867]: I0126 11:18:07.432910 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-nxkwj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"53ae46dc-30ef-4dfd-b80e-bacd7542634f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a6b647bab472836bbf6aebd01d20d186c5a3fb95f20cc9f44ec837d93c7df617\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b5h2x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T11:17:54Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-nxkwj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:18:07Z is after 2025-08-24T17:21:41Z" Jan 26 11:18:07 crc kubenswrapper[4867]: I0126 11:18:07.446321 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-nmdmx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed024510-edc6-4306-b54b-63facba64419\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:18:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:18:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:18:06Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:18:06Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lvcgj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lvcgj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T11:18:06Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-nmdmx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:18:07Z is after 2025-08-24T17:21:41Z" Jan 26 11:18:07 crc 
kubenswrapper[4867]: I0126 11:18:07.468872 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:18:07 crc kubenswrapper[4867]: I0126 11:18:07.468967 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:18:07 crc kubenswrapper[4867]: I0126 11:18:07.468986 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:18:07 crc kubenswrapper[4867]: I0126 11:18:07.469195 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:18:07 crc kubenswrapper[4867]: I0126 11:18:07.469207 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:18:07Z","lastTransitionTime":"2026-01-26T11:18:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:18:07 crc kubenswrapper[4867]: I0126 11:18:07.469805 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ed024510-edc6-4306-b54b-63facba64419-metrics-certs\") pod \"network-metrics-daemon-nmdmx\" (UID: \"ed024510-edc6-4306-b54b-63facba64419\") " pod="openshift-multus/network-metrics-daemon-nmdmx" Jan 26 11:18:07 crc kubenswrapper[4867]: E0126 11:18:07.469954 4867 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 26 11:18:07 crc kubenswrapper[4867]: E0126 11:18:07.470019 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ed024510-edc6-4306-b54b-63facba64419-metrics-certs podName:ed024510-edc6-4306-b54b-63facba64419 nodeName:}" failed. No retries permitted until 2026-01-26 11:18:08.470001407 +0000 UTC m=+38.168576317 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/ed024510-edc6-4306-b54b-63facba64419-metrics-certs") pod "network-metrics-daemon-nmdmx" (UID: "ed024510-edc6-4306-b54b-63facba64419") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 26 11:18:07 crc kubenswrapper[4867]: I0126 11:18:07.529533 4867 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-09 17:00:03.86936306 +0000 UTC Jan 26 11:18:07 crc kubenswrapper[4867]: I0126 11:18:07.571124 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:18:07 crc kubenswrapper[4867]: I0126 11:18:07.571180 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:18:07 crc kubenswrapper[4867]: I0126 11:18:07.571192 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:18:07 crc kubenswrapper[4867]: I0126 11:18:07.571240 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:18:07 crc kubenswrapper[4867]: I0126 11:18:07.571256 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:18:07Z","lastTransitionTime":"2026-01-26T11:18:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:18:07 crc kubenswrapper[4867]: I0126 11:18:07.674029 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:18:07 crc kubenswrapper[4867]: I0126 11:18:07.674096 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:18:07 crc kubenswrapper[4867]: I0126 11:18:07.674115 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:18:07 crc kubenswrapper[4867]: I0126 11:18:07.674141 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:18:07 crc kubenswrapper[4867]: I0126 11:18:07.674161 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:18:07Z","lastTransitionTime":"2026-01-26T11:18:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:18:07 crc kubenswrapper[4867]: I0126 11:18:07.777692 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:18:07 crc kubenswrapper[4867]: I0126 11:18:07.777764 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:18:07 crc kubenswrapper[4867]: I0126 11:18:07.777784 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:18:07 crc kubenswrapper[4867]: I0126 11:18:07.777812 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:18:07 crc kubenswrapper[4867]: I0126 11:18:07.777831 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:18:07Z","lastTransitionTime":"2026-01-26T11:18:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:18:07 crc kubenswrapper[4867]: I0126 11:18:07.854505 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-nbvlt" event={"ID":"cf285485-1027-4bdc-bdfa-934ef32e7f5e","Type":"ContainerStarted","Data":"764c348147bb67a611bc5252c49dfe8f586e6a1a6d6a9e9c6674aabcc3028804"} Jan 26 11:18:07 crc kubenswrapper[4867]: I0126 11:18:07.854878 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-p8ngn" Jan 26 11:18:07 crc kubenswrapper[4867]: I0126 11:18:07.854926 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-p8ngn" Jan 26 11:18:07 crc kubenswrapper[4867]: I0126 11:18:07.871028 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e36e94ce-bdbb-4b65-b38a-d591d99ec132\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://60d611d38d0a8a4fafcb8c6a4598878531015a3c9a31bee179693c997be0f5ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e7960050fb97fd034b58d207bdcc7aa794be098102ebc9b50f3b011c2079a292\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://f8e9df6d5bbbaa36f6ffe9e36239da17b2a7d5d85edebc9f108ede9bf7b7c0a2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://08f8529b91df2d7dcbf1fac6018657124b3922dfd4ad68bd7cf315594d6e7f81\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4d1617eeb2f19df288b984f761116ae902263d7ccf8406998d0e323fb7d52390\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-26T11:17:50Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0126 11:17:44.107498 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0126 11:17:44.109064 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1825304579/tls.crt::/tmp/serving-cert-1825304579/tls.key\\\\\\\"\\\\nI0126 11:17:50.303460 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0126 11:17:50.308764 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0126 11:17:50.308805 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0126 11:17:50.308906 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0126 11:17:50.308924 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0126 11:17:50.322907 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0126 11:17:50.322946 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 11:17:50.322953 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 11:17:50.322957 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0126 11:17:50.322960 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0126 11:17:50.322963 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0126 11:17:50.322966 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0126 11:17:50.323261 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0126 11:17:50.330430 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T11:17:33Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d41fab03911c1d3b84f4c34da0d1c8e82c8f2691450b5f9663e31ce329bf04b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:33Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b69d7e45a6059b1e01f80cc4efb536069c91cae05c8912d18fd48f25c6ebf51e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b69d7e45a6059b1e01f80cc4efb536069c91cae05c8912d18fd48f25c6ebf51e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T11:17:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-26T11:17:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T11:17:30Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:18:07Z is after 2025-08-24T17:21:41Z" Jan 26 11:18:07 crc kubenswrapper[4867]: I0126 11:18:07.883536 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:18:07 crc kubenswrapper[4867]: I0126 11:18:07.883625 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:18:07 crc kubenswrapper[4867]: I0126 11:18:07.883678 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:18:07 crc kubenswrapper[4867]: I0126 11:18:07.883768 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:18:07 crc kubenswrapper[4867]: I0126 11:18:07.883806 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:18:07Z","lastTransitionTime":"2026-01-26T11:18:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:18:07 crc kubenswrapper[4867]: I0126 11:18:07.892731 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0d7b1047-65d1-4695-b5a5-95ea6a8c3b54\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://237a29b411629c3650250f62fdec7d2c5412763d6cabb2ee0fe8e5e19e320e15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ef0db1149c
70e41b7f44d00d43f883d14fcf635d6f34462dd37c8a550a15a4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://45d3dc673a8d20531ae5077a1196a368cac64f6afd15c4eb8f3910ee9bf51317\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc2ddda5781094e59bb54ddf724fa4f948a047b17b01f0185ef651d90a66f36f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:850
6ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T11:17:30Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:18:07Z is after 2025-08-24T17:21:41Z" Jan 26 11:18:07 crc kubenswrapper[4867]: I0126 11:18:07.908921 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c572224f36b5a0fc8a86b3e61aa97b96f5bd189d245b31804a11aaa75d04774\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-26T11:18:07Z is after 2025-08-24T17:21:41Z" Jan 26 11:18:07 crc kubenswrapper[4867]: I0126 11:18:07.912309 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-p8ngn" Jan 26 11:18:07 crc kubenswrapper[4867]: I0126 11:18:07.923828 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://613d61cfb62d18a99623d3e4f89763c04f15484b764f8bb0c7a5e4457a3505d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/
run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa8d05ed772e4273e8962df2230bc8d45db5cfed967165716438d58c54b9e24b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:18:07Z is after 2025-08-24T17:21:41Z" Jan 26 11:18:07 crc kubenswrapper[4867]: I0126 11:18:07.940595 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-hn8xr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dc37e5d1-ba44-4a54-ac36-ab7cdef17212\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://519d5416896aca5923540078a8bd13f39a190ad4acda3d0bfb3375a5dbfe6b80\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xrjnj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T11:17:52Z\\\"}}\" for pod \"openshift-multus\"/\"multus-hn8xr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:18:07Z is after 2025-08-24T17:21:41Z" Jan 26 11:18:07 crc kubenswrapper[4867]: I0126 11:18:07.956954 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-nbvlt" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf285485-1027-4bdc-bdfa-934ef32e7f5e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:18:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:18:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:18:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:18:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kzhnr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kzhnr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T11:18:05Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-nbvlt\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:18:07Z is after 2025-08-24T17:21:41Z" Jan 26 11:18:07 crc kubenswrapper[4867]: I0126 11:18:07.971456 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:48Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:18:07Z is after 2025-08-24T17:21:41Z" Jan 26 11:18:07 crc kubenswrapper[4867]: I0126 11:18:07.987467 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:18:07 crc kubenswrapper[4867]: I0126 11:18:07.987516 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:18:07 crc kubenswrapper[4867]: I0126 11:18:07.987528 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:18:07 crc kubenswrapper[4867]: I0126 11:18:07.987550 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:18:07 crc kubenswrapper[4867]: I0126 11:18:07.987564 4867 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:18:07Z","lastTransitionTime":"2026-01-26T11:18:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 11:18:07 crc kubenswrapper[4867]: I0126 11:18:07.993063 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:54Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:54Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8b0139861b48347f9916ca780499486eda856ef7c08cfe6c57201daf085bf61\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\
"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:18:07Z is after 2025-08-24T17:21:41Z" Jan 26 11:18:08 crc kubenswrapper[4867]: I0126 11:18:08.012156 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-9fjlf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d0cb57c7-fd32-41c2-b873-a3f017b9f1b1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:18:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:52Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:52Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhrdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://49541a51fced16e579b4cd10875fe7e660d7674508aba3ea44011642241e6556\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://49541a51fced16e579b4cd10875fe7e660d7674508aba3ea44011642241e6556\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T11:17:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T11:17:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhrdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://de0898b43ff7857dd40569f0552c8802c9bb5ecdf3cbd23f552dfb402a02044c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://de0898b43ff7857dd40569f0552c8802c9bb5ecdf3cbd23f552dfb402a02044c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T11:17:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T11:17:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhrdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f299de50c4b1a26c4ed82f6f7b17c063e00b8a9de478fa0adafb4abb50195c5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f299de50c4b1a26c4ed82f6f7b17c063e00b8a9de478fa0adafb4abb50195c5f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T11:17:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T11:17:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhrdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5bf54cdee66861e0419af5e4cabd6e8410b771df717f2e958f5eda8d14c47862\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5bf54cdee66861e0419af5e4cabd6e8410b771df717f2e958f5eda8d14c47862\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T11:17:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T11:17:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhrdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d802b7e0dce47eace09ec80bf6299fbe57b588891ebab7f34de835f6a7314ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3d802b7e0dce47eace09ec80bf6299fbe57b588891ebab7f34de835f6a7314ac\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T11:17:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T11:17:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhrdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e77fa1081a2d8d26fe07f92196b85dd5b8eb28583ce5489e2f5cbcfc6fff8780\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"
ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e77fa1081a2d8d26fe07f92196b85dd5b8eb28583ce5489e2f5cbcfc6fff8780\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T11:18:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T11:18:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhrdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T11:17:52Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-9fjlf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:18:08Z is after 2025-08-24T17:21:41Z" Jan 26 11:18:08 crc kubenswrapper[4867]: I0126 11:18:08.028164 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-nxkwj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"53ae46dc-30ef-4dfd-b80e-bacd7542634f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a6b647bab472836bbf6aebd01d20d186c5a3fb95f20cc9f44ec837d93c7df617\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b5h2x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T11:17:54Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-nxkwj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:18:08Z is after 2025-08-24T17:21:41Z" Jan 26 11:18:08 crc kubenswrapper[4867]: I0126 11:18:08.044844 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-nmdmx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed024510-edc6-4306-b54b-63facba64419\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:18:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:18:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:18:06Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:18:06Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lvcgj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lvcgj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T11:18:06Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-nmdmx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:18:08Z is after 2025-08-24T17:21:41Z" Jan 26 11:18:08 crc 
kubenswrapper[4867]: I0126 11:18:08.099684 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:18:08 crc kubenswrapper[4867]: I0126 11:18:08.099773 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:18:08 crc kubenswrapper[4867]: I0126 11:18:08.099787 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:18:08 crc kubenswrapper[4867]: I0126 11:18:08.099810 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:18:08 crc kubenswrapper[4867]: I0126 11:18:08.099825 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:18:08Z","lastTransitionTime":"2026-01-26T11:18:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:18:08 crc kubenswrapper[4867]: I0126 11:18:08.102313 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4fbb2033-c5c9-48d1-971f-252b8982e64c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b260f2517962ffaecebf86c2b72c91486b825ca898f907378e8f8cea16d1db92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\
\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6843bd212ce4f17d2ae4d53441d56f7be28f8f49c8994458d611159101e193a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2aba77c7c6a2f5450fa60356c3eccf816726f0074d65a1d5805c4278d31157c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e941db67a8021f5eb4eb6e395c2369d052a4cde25d67eb1885768a064185d8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-
v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc9e90f1b0e6c09e4595a7e450325550012dc52c6a6fc4a95d14b37c0f939ce7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e22739a3c06bddfda493f25aca23cf47d71c8bde27a6b4bb505592b42350752\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":
true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0e22739a3c06bddfda493f25aca23cf47d71c8bde27a6b4bb505592b42350752\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T11:17:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T11:17:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://80b44caec3547caebca126742ca337ad1851edc3e9474127528a9e5184b5ec23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://80b44caec3547caebca126742ca337ad1851edc3e9474127528a9e5184b5ec23\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T11:17:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T11:17:32Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://83f7cf77acd26e7af095c4a5432efde85eef6dd5e434141cb5d6abd12770968e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://83f7cf77acd26e7af095c4a5432efde85eef6dd5e434141cb5d6abd12770968e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T11:17:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2
026-01-26T11:17:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T11:17:30Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:18:08Z is after 2025-08-24T17:21:41Z" Jan 26 11:18:08 crc kubenswrapper[4867]: I0126 11:18:08.118882 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-g6cth" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"115cad9f-057f-4e63-b408-8fa7a358a191\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ff5997c5ecb7b6a4257e45de25c7b8f3cbe39cc5efe49cf2e2baff5447a947b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4wqzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a97a794991491a7b7ac8c0d20d0fa2f4f36c8ba8
82962b5c203933431d324105\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4wqzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T11:17:52Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-g6cth\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:18:08Z is after 2025-08-24T17:21:41Z" Jan 26 11:18:08 crc kubenswrapper[4867]: I0126 11:18:08.137568 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:48Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:18:08Z is after 2025-08-24T17:21:41Z" Jan 26 11:18:08 crc kubenswrapper[4867]: I0126 11:18:08.139836 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-p8ngn" Jan 26 11:18:08 crc kubenswrapper[4867]: I0126 11:18:08.144406 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-p8ngn" Jan 26 11:18:08 crc kubenswrapper[4867]: I0126 11:18:08.154879 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:48Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:18:08Z is after 2025-08-24T17:21:41Z" Jan 26 11:18:08 crc kubenswrapper[4867]: I0126 11:18:08.168732 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-wmdmh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6ad862fa-4af9-49f7-a629-ebf54a83ca45\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://726513fd6aefd76b58a4c8cf08fa4588aadcdb02e8dbe3bb4f23004f11eb29dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6hvh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T11:17:51Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-wmdmh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:18:08Z is after 2025-08-24T17:21:41Z" Jan 26 11:18:08 crc kubenswrapper[4867]: I0126 11:18:08.199335 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-p8ngn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4a3be637-cf04-4c55-bf72-67fdad83cc44\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:52Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:52Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c0a7bea27cb1d32235548b7f7e457dbecaf0ac88bdd9866302c758680990d4af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cn6f7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9025121ad416b7b6ff2f2eb7ecb3961eccbbae7b5ce4b394161ad99c5901199\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\
",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cn6f7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://adb68b135bd42f78556598af17aa21dc7daff1246e821ca51cb683e46974a85a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cn6f7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://30fe31f1d314dace6e6ee0bc31b2127ce6e6db198975c78505b0f6f07aa6f30c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:55Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cn6f7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://99df8b0b5fa7178489716f40f806caff969c5045ab573ae343b222ed80bf0799\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cn6f7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://192f11212d63cdabfda1649589946ca65e5c4a38805c85b94e4277f2e8c7bf57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cn6f7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ccaf33118999fa5bccb1930803a8cea6557462756c37135056ea8a5ab813003\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:18:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cn6f7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b58f62893fb0fc0bd355ef42fb5f15830912cfd3a03b5266cb3c3171dbb8bcb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cn6f7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://388ea9d9185ed0fed9cbb0da08ab9aa6fcf1d81192fe2646cd51b54245c4e34b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://388ea9d9185ed0fed9cbb0da08ab9aa6fcf1d81192fe2646cd51b54245c4e34b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T11:17:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T11:17:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cn6f7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Ru
nning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T11:17:52Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-p8ngn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:18:08Z is after 2025-08-24T17:21:41Z" Jan 26 11:18:08 crc kubenswrapper[4867]: I0126 11:18:08.202720 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:18:08 crc kubenswrapper[4867]: I0126 11:18:08.202758 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:18:08 crc kubenswrapper[4867]: I0126 11:18:08.202770 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:18:08 crc kubenswrapper[4867]: I0126 11:18:08.202794 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:18:08 crc kubenswrapper[4867]: I0126 11:18:08.202809 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:18:08Z","lastTransitionTime":"2026-01-26T11:18:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:18:08 crc kubenswrapper[4867]: I0126 11:18:08.216328 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:48Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:18:08Z is after 2025-08-24T17:21:41Z" Jan 26 11:18:08 crc kubenswrapper[4867]: I0126 11:18:08.231712 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:54Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:54Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8b0139861b48347f9916ca780499486eda856ef7c08cfe6c57201daf085bf61\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-26T11:18:08Z is after 2025-08-24T17:21:41Z" Jan 26 11:18:08 crc kubenswrapper[4867]: I0126 11:18:08.249865 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-9fjlf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d0cb57c7-fd32-41c2-b873-a3f017b9f1b1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:18:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:52Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:52Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhrdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://49541a51fced16e579b4cd10875fe7e660d7674508aba3ea44011642241e6556\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://49541a51fced16e579b4cd10875fe7e660d7674508aba3ea44011642241e6556\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T11:17:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T11:17:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhrdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://de0898b43ff7857dd40569f0552c8802c9bb5ecdf3cbd23f552dfb402a02044c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://de0898b43ff7857dd40569f0552c8802c9bb5ecdf3cbd23f552dfb402a02044c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T11:17:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T11:17:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhrdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f299de50c4b1a26c4ed82f6f7b17c063e00b8a9de478fa0adafb4abb50195c5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f299de50c4b1a26c4ed82f6f7b17c063e00b8a9de478fa0adafb4abb50195c5f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T11:17:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T11:17:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhrdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5bf54cdee66861e0419af5e4cabd6e8410b771df717f2e958f5eda8d14c47862\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5bf54cdee66861e0419af5e4cabd6e8410b771df717f2e958f5eda8d14c47862\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T11:17:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T11:17:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhrdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d802b7e0dce47eace09ec80bf6299fbe57b588891ebab7f34de835f6a7314ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3d802b7e0dce47eace09ec80bf6299fbe57b588891ebab7f34de835f6a7314ac\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T11:17:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T11:17:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhrdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e77fa1081a2d8d26fe07f92196b85dd5b8eb28583ce5489e2f5cbcfc6fff8780\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"
ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e77fa1081a2d8d26fe07f92196b85dd5b8eb28583ce5489e2f5cbcfc6fff8780\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T11:18:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T11:18:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhrdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T11:17:52Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-9fjlf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:18:08Z is after 2025-08-24T17:21:41Z" Jan 26 11:18:08 crc kubenswrapper[4867]: I0126 11:18:08.264012 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-nxkwj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"53ae46dc-30ef-4dfd-b80e-bacd7542634f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a6b647bab472836bbf6aebd01d20d186c5a3fb95f20cc9f44ec837d93c7df617\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b5h2x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T11:17:54Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-nxkwj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:18:08Z is after 2025-08-24T17:21:41Z" Jan 26 11:18:08 crc kubenswrapper[4867]: I0126 11:18:08.275380 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-nmdmx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed024510-edc6-4306-b54b-63facba64419\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:18:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:18:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:18:06Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:18:06Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lvcgj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lvcgj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T11:18:06Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-nmdmx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:18:08Z is after 2025-08-24T17:21:41Z" Jan 26 11:18:08 crc 
kubenswrapper[4867]: I0126 11:18:08.306300 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:18:08 crc kubenswrapper[4867]: I0126 11:18:08.306361 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:18:08 crc kubenswrapper[4867]: I0126 11:18:08.306373 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:18:08 crc kubenswrapper[4867]: I0126 11:18:08.306396 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:18:08 crc kubenswrapper[4867]: I0126 11:18:08.306409 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:18:08Z","lastTransitionTime":"2026-01-26T11:18:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:18:08 crc kubenswrapper[4867]: I0126 11:18:08.310194 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4fbb2033-c5c9-48d1-971f-252b8982e64c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b260f2517962ffaecebf86c2b72c91486b825ca898f907378e8f8cea16d1db92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\
\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6843bd212ce4f17d2ae4d53441d56f7be28f8f49c8994458d611159101e193a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2aba77c7c6a2f5450fa60356c3eccf816726f0074d65a1d5805c4278d31157c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e941db67a8021f5eb4eb6e395c2369d052a4cde25d67eb1885768a064185d8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-
v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc9e90f1b0e6c09e4595a7e450325550012dc52c6a6fc4a95d14b37c0f939ce7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e22739a3c06bddfda493f25aca23cf47d71c8bde27a6b4bb505592b42350752\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":
true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0e22739a3c06bddfda493f25aca23cf47d71c8bde27a6b4bb505592b42350752\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T11:17:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T11:17:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://80b44caec3547caebca126742ca337ad1851edc3e9474127528a9e5184b5ec23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://80b44caec3547caebca126742ca337ad1851edc3e9474127528a9e5184b5ec23\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T11:17:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T11:17:32Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://83f7cf77acd26e7af095c4a5432efde85eef6dd5e434141cb5d6abd12770968e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://83f7cf77acd26e7af095c4a5432efde85eef6dd5e434141cb5d6abd12770968e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T11:17:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2
026-01-26T11:17:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T11:17:30Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:18:08Z is after 2025-08-24T17:21:41Z" Jan 26 11:18:08 crc kubenswrapper[4867]: I0126 11:18:08.328709 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-g6cth" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"115cad9f-057f-4e63-b408-8fa7a358a191\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ff5997c5ecb7b6a4257e45de25c7b8f3cbe39cc5efe49cf2e2baff5447a947b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4wqzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a97a794991491a7b7ac8c0d20d0fa2f4f36c8ba8
82962b5c203933431d324105\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4wqzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T11:17:52Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-g6cth\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:18:08Z is after 2025-08-24T17:21:41Z" Jan 26 11:18:08 crc kubenswrapper[4867]: I0126 11:18:08.347941 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:48Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:18:08Z is after 2025-08-24T17:21:41Z" Jan 26 11:18:08 crc kubenswrapper[4867]: I0126 11:18:08.362720 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:48Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:18:08Z is after 2025-08-24T17:21:41Z" Jan 26 11:18:08 crc kubenswrapper[4867]: I0126 11:18:08.376636 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-wmdmh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6ad862fa-4af9-49f7-a629-ebf54a83ca45\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://726513fd6aefd76b58a4c8cf08fa4588aadcdb02e8dbe3bb4f23004f11eb29dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6hvh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T11:17:51Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-wmdmh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:18:08Z is after 2025-08-24T17:21:41Z" Jan 26 11:18:08 crc kubenswrapper[4867]: I0126 11:18:08.409502 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:18:08 crc kubenswrapper[4867]: I0126 11:18:08.409609 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:18:08 crc kubenswrapper[4867]: I0126 11:18:08.409621 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:18:08 crc kubenswrapper[4867]: I0126 11:18:08.409637 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:18:08 crc kubenswrapper[4867]: I0126 11:18:08.409649 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:18:08Z","lastTransitionTime":"2026-01-26T11:18:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:18:08 crc kubenswrapper[4867]: I0126 11:18:08.409765 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-p8ngn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4a3be637-cf04-4c55-bf72-67fdad83cc44\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:52Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:52Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c0a7bea27cb1d32235548b7f7e457dbecaf0ac88bdd9866302c758680990d4af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cn6f7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9025121ad416b7b6ff2f2eb7ecb3961eccbbae7b5ce4b394161ad99c5901199\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cn6f7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://adb68b135bd42f78556598af17aa21dc7daff1246e821ca51cb683e46974a85a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cn6f7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://30fe31f1d314dace6e6ee0bc31b2127ce6e6db198975c78505b0f6f07aa6f30c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:55Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cn6f7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://99df8b0b5fa7178489716f40f806caff969c5045ab573ae343b222ed80bf0799\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cn6f7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://192f11212d63cdabfda1649589946ca65e5c4a38805c85b94e4277f2e8c7bf57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cn6f7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ccaf33118999fa5bccb1930803a8cea6557462756c37135056ea8a5ab813003\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:18:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cn6f7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b58f62893fb0fc0bd355ef42fb5f15830912cfd3a03b5266cb3c3171dbb8bcb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cn6f7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://388ea9d9185ed0fed9cbb0da08ab9aa6fcf1d81192fe2646cd51b54245c4e34b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://388ea9d9185ed0fed9cbb0da08ab9aa6fcf1d81192fe2646cd51b54245c4e34b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T11:17:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T11:17:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cn6f7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Run
ning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T11:17:52Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-p8ngn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:18:08Z is after 2025-08-24T17:21:41Z" Jan 26 11:18:08 crc kubenswrapper[4867]: I0126 11:18:08.426166 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-hn8xr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dc37e5d1-ba44-4a54-ac36-ab7cdef17212\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://519d5416896aca5923540078a8bd13f39a190ad4acda3d0bfb3375a5dbfe6b80\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e
eaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xrjnj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":
\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T11:17:52Z\\\"}}\" for pod \"openshift-multus\"/\"multus-hn8xr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:18:08Z is after 2025-08-24T17:21:41Z" Jan 26 11:18:08 crc kubenswrapper[4867]: I0126 11:18:08.441384 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-nbvlt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf285485-1027-4bdc-bdfa-934ef32e7f5e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:18:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:18:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:18:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:18:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kzhnr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kzhnr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T11:18:05Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-nbvlt\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:18:08Z is after 2025-08-24T17:21:41Z" Jan 26 11:18:08 crc kubenswrapper[4867]: I0126 11:18:08.454811 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e36e94ce-bdbb-4b65-b38a-d591d99ec132\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://60d611d38d0a8a4fafcb8c6a4598878531015a3c9a31bee179693c997be0f5ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e7960050fb97fd034b58d207bdcc7aa794be098102ebc9b50f3b011c2079a292\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://f8e9df6d5bbbaa36f6ffe9e36239da17b2a7d5d85edebc9f108ede9bf7b7c0a2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://08f8529b91df2d7dcbf1fac6018657124b3922dfd4ad68bd7cf315594d6e7f81\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4d1617eeb2f19df288b984f761116ae902263d7ccf8406998d0e323fb7d52390\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-26T11:17:50Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0126 11:17:44.107498 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0126 11:17:44.109064 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1825304579/tls.crt::/tmp/serving-cert-1825304579/tls.key\\\\\\\"\\\\nI0126 11:17:50.303460 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0126 11:17:50.308764 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0126 11:17:50.308805 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0126 11:17:50.308906 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0126 11:17:50.308924 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0126 11:17:50.322907 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0126 11:17:50.322946 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 11:17:50.322953 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 11:17:50.322957 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0126 11:17:50.322960 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0126 11:17:50.322963 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0126 11:17:50.322966 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0126 11:17:50.323261 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0126 11:17:50.330430 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T11:17:33Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d41fab03911c1d3b84f4c34da0d1c8e82c8f2691450b5f9663e31ce329bf04b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:33Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b69d7e45a6059b1e01f80cc4efb536069c91cae05c8912d18fd48f25c6ebf51e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b69d7e45a6059b1e01f80cc4efb536069c91cae05c8912d18fd48f25c6ebf51e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T11:17:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-26T11:17:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T11:17:30Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:18:08Z is after 2025-08-24T17:21:41Z" Jan 26 11:18:08 crc kubenswrapper[4867]: I0126 11:18:08.466543 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0d7b1047-65d1-4695-b5a5-95ea6a8c3b54\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://237a29b411629c3650250f62fdec7d2c5412763d6cabb2ee0fe8e5e19e320e15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9a
d6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ef0db1149c70e41b7f44d00d43f883d14fcf635d6f34462dd37c8a550a15a4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://45d3dc673a8d20531ae5077a1196a368cac64f6afd15c4eb8f3910ee9bf51317\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true
,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc2ddda5781094e59bb54ddf724fa4f948a047b17b01f0185ef651d90a66f36f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T11:17:30Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:18:08Z is after 2025-08-24T17:21:41Z" Jan 26 11:18:08 crc kubenswrapper[4867]: I0126 11:18:08.479437 4867 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c572224f36b5a0fc8a86b3e61aa97b96f5bd189d245b31804a11aaa75d04774\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call 
webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:18:08Z is after 2025-08-24T17:21:41Z" Jan 26 11:18:08 crc kubenswrapper[4867]: I0126 11:18:08.491120 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://613d61cfb62d18a99623d3e4f89763c04f15484b764f8bb0c7a5e4457a3505d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\
\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa8d05ed772e4273e8962df2230bc8d45db5cfed967165716438d58c54b9e24b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:18:08Z is after 2025-08-24T17:21:41Z" Jan 26 11:18:08 crc kubenswrapper[4867]: I0126 11:18:08.499882 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ed024510-edc6-4306-b54b-63facba64419-metrics-certs\") pod \"network-metrics-daemon-nmdmx\" (UID: \"ed024510-edc6-4306-b54b-63facba64419\") " pod="openshift-multus/network-metrics-daemon-nmdmx" Jan 26 11:18:08 crc kubenswrapper[4867]: E0126 11:18:08.500026 4867 secret.go:188] Couldn't get secret 
openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 26 11:18:08 crc kubenswrapper[4867]: E0126 11:18:08.500093 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ed024510-edc6-4306-b54b-63facba64419-metrics-certs podName:ed024510-edc6-4306-b54b-63facba64419 nodeName:}" failed. No retries permitted until 2026-01-26 11:18:10.50007343 +0000 UTC m=+40.198648340 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/ed024510-edc6-4306-b54b-63facba64419-metrics-certs") pod "network-metrics-daemon-nmdmx" (UID: "ed024510-edc6-4306-b54b-63facba64419") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 26 11:18:08 crc kubenswrapper[4867]: I0126 11:18:08.513022 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:18:08 crc kubenswrapper[4867]: I0126 11:18:08.513076 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:18:08 crc kubenswrapper[4867]: I0126 11:18:08.513088 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:18:08 crc kubenswrapper[4867]: I0126 11:18:08.513108 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:18:08 crc kubenswrapper[4867]: I0126 11:18:08.513123 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:18:08Z","lastTransitionTime":"2026-01-26T11:18:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:18:08 crc kubenswrapper[4867]: I0126 11:18:08.530440 4867 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-16 14:03:53.968816487 +0000 UTC Jan 26 11:18:08 crc kubenswrapper[4867]: I0126 11:18:08.563274 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 11:18:08 crc kubenswrapper[4867]: E0126 11:18:08.563468 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 11:18:08 crc kubenswrapper[4867]: I0126 11:18:08.563501 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-nmdmx" Jan 26 11:18:08 crc kubenswrapper[4867]: E0126 11:18:08.563660 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-nmdmx" podUID="ed024510-edc6-4306-b54b-63facba64419" Jan 26 11:18:08 crc kubenswrapper[4867]: I0126 11:18:08.563756 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 11:18:08 crc kubenswrapper[4867]: I0126 11:18:08.563842 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 11:18:08 crc kubenswrapper[4867]: E0126 11:18:08.563868 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 11:18:08 crc kubenswrapper[4867]: E0126 11:18:08.564164 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 11:18:08 crc kubenswrapper[4867]: I0126 11:18:08.616244 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:18:08 crc kubenswrapper[4867]: I0126 11:18:08.616295 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:18:08 crc kubenswrapper[4867]: I0126 11:18:08.616305 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:18:08 crc kubenswrapper[4867]: I0126 11:18:08.616322 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:18:08 crc kubenswrapper[4867]: I0126 11:18:08.616332 4867 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:18:08Z","lastTransitionTime":"2026-01-26T11:18:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 11:18:08 crc kubenswrapper[4867]: I0126 11:18:08.719303 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:18:08 crc kubenswrapper[4867]: I0126 11:18:08.719366 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:18:08 crc kubenswrapper[4867]: I0126 11:18:08.719390 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:18:08 crc kubenswrapper[4867]: I0126 11:18:08.719412 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:18:08 crc kubenswrapper[4867]: I0126 11:18:08.719425 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:18:08Z","lastTransitionTime":"2026-01-26T11:18:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:18:08 crc kubenswrapper[4867]: I0126 11:18:08.822966 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:18:08 crc kubenswrapper[4867]: I0126 11:18:08.823007 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:18:08 crc kubenswrapper[4867]: I0126 11:18:08.823016 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:18:08 crc kubenswrapper[4867]: I0126 11:18:08.823244 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:18:08 crc kubenswrapper[4867]: I0126 11:18:08.823260 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:18:08Z","lastTransitionTime":"2026-01-26T11:18:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:18:08 crc kubenswrapper[4867]: I0126 11:18:08.863657 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-9fjlf" event={"ID":"d0cb57c7-fd32-41c2-b873-a3f017b9f1b1","Type":"ContainerStarted","Data":"7c7012fb0651d46334c26887a02a5c44a8fc67c2ad3539e5321e16b57071b9a6"} Jan 26 11:18:08 crc kubenswrapper[4867]: I0126 11:18:08.866982 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-nbvlt" event={"ID":"cf285485-1027-4bdc-bdfa-934ef32e7f5e","Type":"ContainerStarted","Data":"9e1bb2e7344b3822e63a68d366f4821de6e131a4cca163ad67cf44e2b83f9ccd"} Jan 26 11:18:08 crc kubenswrapper[4867]: I0126 11:18:08.882051 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:48Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:18:08Z is after 2025-08-24T17:21:41Z" Jan 26 11:18:08 crc kubenswrapper[4867]: I0126 11:18:08.897113 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:54Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:54Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8b0139861b48347f9916ca780499486eda856ef7c08cfe6c57201daf085bf61\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-26T11:18:08Z is after 2025-08-24T17:21:41Z" Jan 26 11:18:08 crc kubenswrapper[4867]: I0126 11:18:08.916104 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-9fjlf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d0cb57c7-fd32-41c2-b873-a3f017b9f1b1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:18:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:18:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:18:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c7012fb0651d46334c26887a02a5c44a8fc67c2ad3539e5321e16b57071b9a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:18:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-a
ccess-rhrdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://49541a51fced16e579b4cd10875fe7e660d7674508aba3ea44011642241e6556\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://49541a51fced16e579b4cd10875fe7e660d7674508aba3ea44011642241e6556\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T11:17:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T11:17:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhrdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://de0898b43ff7857dd40569f0552c8802c9bb5ecdf3cbd23f552dfb402a02044c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"
started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://de0898b43ff7857dd40569f0552c8802c9bb5ecdf3cbd23f552dfb402a02044c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T11:17:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T11:17:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhrdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f299de50c4b1a26c4ed82f6f7b17c063e00b8a9de478fa0adafb4abb50195c5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f299de50c4b1a26c4ed82f6f7b17c063e00b8a9de478fa0adafb4abb50195c5f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T11:17:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T11:17:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\
\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhrdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5bf54cdee66861e0419af5e4cabd6e8410b771df717f2e958f5eda8d14c47862\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5bf54cdee66861e0419af5e4cabd6e8410b771df717f2e958f5eda8d14c47862\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T11:17:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T11:17:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhrdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d802b7e0dce47eace09ec80bf6299fbe57b588891ebab7f34de835f6a7314ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabout
s-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3d802b7e0dce47eace09ec80bf6299fbe57b588891ebab7f34de835f6a7314ac\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T11:17:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T11:17:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhrdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e77fa1081a2d8d26fe07f92196b85dd5b8eb28583ce5489e2f5cbcfc6fff8780\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e77fa1081a2d8d26fe07f92196b85dd5b8eb28583ce5489e2f5cbcfc6fff8780\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T11:18:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T11:18:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhrdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T11:17:52Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-9fjlf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:18:08Z is after 2025-08-24T17:21:41Z" Jan 26 11:18:08 crc kubenswrapper[4867]: I0126 11:18:08.927438 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:18:08 crc kubenswrapper[4867]: I0126 11:18:08.927742 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:18:08 crc kubenswrapper[4867]: I0126 11:18:08.927846 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:18:08 crc kubenswrapper[4867]: I0126 11:18:08.927918 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:18:08 crc kubenswrapper[4867]: I0126 11:18:08.927984 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:18:08Z","lastTransitionTime":"2026-01-26T11:18:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:18:08 crc kubenswrapper[4867]: I0126 11:18:08.932925 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-nxkwj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"53ae46dc-30ef-4dfd-b80e-bacd7542634f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a6b647bab472836bbf6aebd01d20d186c5a3fb95f20cc9f44ec837d93c7df617\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes
.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b5h2x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T11:17:54Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-nxkwj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:18:08Z is after 2025-08-24T17:21:41Z" Jan 26 11:18:08 crc kubenswrapper[4867]: I0126 11:18:08.948929 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-nmdmx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed024510-edc6-4306-b54b-63facba64419\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:18:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:18:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:18:06Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:18:06Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lvcgj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lvcgj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T11:18:06Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-nmdmx\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:18:08Z is after 2025-08-24T17:21:41Z" Jan 26 11:18:08 crc kubenswrapper[4867]: I0126 11:18:08.973667 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4fbb2033-c5c9-48d1-971f-252b8982e64c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b260f2517962ffaecebf86c2b72c91486b825ca898f907378e8f8cea16d1db92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\
":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6843bd212ce4f17d2ae4d53441d56f7be28f8f49c8994458d611159101e193a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2aba77c7c6a2f5450fa60356c3eccf816726f0074d65a1d5805c4278d31157c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e941db67a8021f5eb4eb6e395c2369d0
52a4cde25d67eb1885768a064185d8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc9e90f1b0e6c09e4595a7e450325550012dc52c6a6fc4a95d14b37c0f939ce7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e22739a3c06bddfda493f25aca23cf47d71c8bde27a6b4bb505592b42350752\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e
49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0e22739a3c06bddfda493f25aca23cf47d71c8bde27a6b4bb505592b42350752\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T11:17:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T11:17:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://80b44caec3547caebca126742ca337ad1851edc3e9474127528a9e5184b5ec23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://80b44caec3547caebca126742ca337ad1851edc3e9474127528a9e5184b5ec23\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T11:17:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T11:17:32Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://83f7cf77acd26e7af095c4a5432efde85eef6dd5e434141cb5d6abd12770968e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminate
d\\\":{\\\"containerID\\\":\\\"cri-o://83f7cf77acd26e7af095c4a5432efde85eef6dd5e434141cb5d6abd12770968e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T11:17:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T11:17:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T11:17:30Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:18:08Z is after 2025-08-24T17:21:41Z" Jan 26 11:18:08 crc kubenswrapper[4867]: I0126 11:18:08.989477 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-g6cth" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"115cad9f-057f-4e63-b408-8fa7a358a191\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ff5997c5ecb7b6a4257e45de25c7b8f3cbe39cc5efe49cf2e2baff5447a947b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4wqzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a97a794991491a7b7ac8c0d20d0fa2f4f36c8ba8
82962b5c203933431d324105\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4wqzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T11:17:52Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-g6cth\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:18:08Z is after 2025-08-24T17:21:41Z" Jan 26 11:18:09 crc kubenswrapper[4867]: I0126 11:18:09.004369 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:48Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:18:09Z is after 2025-08-24T17:21:41Z" Jan 26 11:18:09 crc kubenswrapper[4867]: I0126 11:18:09.020134 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:48Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:18:09Z is after 2025-08-24T17:21:41Z" Jan 26 11:18:09 crc kubenswrapper[4867]: I0126 11:18:09.030662 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:18:09 crc kubenswrapper[4867]: I0126 11:18:09.030713 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:18:09 crc kubenswrapper[4867]: I0126 11:18:09.030723 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:18:09 crc kubenswrapper[4867]: I0126 
11:18:09.030744 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:18:09 crc kubenswrapper[4867]: I0126 11:18:09.030757 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:18:09Z","lastTransitionTime":"2026-01-26T11:18:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 11:18:09 crc kubenswrapper[4867]: I0126 11:18:09.034621 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-wmdmh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6ad862fa-4af9-49f7-a629-ebf54a83ca45\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://726513fd6aefd76b58a4c8cf08fa4588aadcdb02e8dbe3bb4f23004f11eb29dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888c
f2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6hvh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T11:17:51Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-wmdmh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:18:09Z is after 2025-08-24T17:21:41Z" Jan 26 11:18:09 crc kubenswrapper[4867]: I0126 11:18:09.035166 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 11:18:09 crc kubenswrapper[4867]: I0126 11:18:09.057479 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-p8ngn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4a3be637-cf04-4c55-bf72-67fdad83cc44\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:52Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:52Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c0a7bea27cb1d32235548b7f7e457dbecaf0ac88bdd9866302c758680990d4af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cn6f7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9025121ad416b7b6ff2f2eb7ecb3961eccbbae7b5ce4b394161ad99c5901199\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cn6f7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://adb68b135bd42f78556598af17aa21dc7daff1246e821ca51cb683e46974a85a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cn6f7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://30fe31f1d314dace6e6ee0bc31b2127ce6e6db198975c78505b0f6f07aa6f30c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:55Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cn6f7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://99df8b0b5fa7178489716f40f806caff969c5045ab573ae343b222ed80bf0799\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cn6f7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://192f11212d63cdabfda1649589946ca65e5c4a38805c85b94e4277f2e8c7bf57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cn6f7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ccaf33118999fa5bccb1930803a8cea6557462756c37135056ea8a5ab813003\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:18:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cn6f7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b58f62893fb0fc0bd355ef42fb5f15830912cfd3a03b5266cb3c3171dbb8bcb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cn6f7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://388ea9d9185ed0fed9cbb0da08ab9aa6fcf1d81192fe2646cd51b54245c4e34b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://388ea9d9185ed0fed9cbb0da08ab9aa6fcf1d81192fe2646cd51b54245c4e34b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T11:17:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T11:17:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cn6f7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Run
ning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T11:17:52Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-p8ngn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:18:09Z is after 2025-08-24T17:21:41Z" Jan 26 11:18:09 crc kubenswrapper[4867]: I0126 11:18:09.072916 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://613d61cfb62d18a99623d3e4f89763c04f15484b764f8bb0c7a5e4457a3505d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:5
0Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa8d05ed772e4273e8962df2230bc8d45db5cfed967165716438d58c54b9e24b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:18:09Z is after 2025-08-24T17:21:41Z" Jan 26 11:18:09 crc kubenswrapper[4867]: I0126 11:18:09.086556 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-hn8xr" err="failed to 
patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dc37e5d1-ba44-4a54-ac36-ab7cdef17212\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://519d5416896aca5923540078a8bd13f39a190ad4acda3d0bfb3375a5dbfe6b80\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"moun
tPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xrjnj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T11:17:52Z\\\"}}\" for pod \"openshift-multus\"/\"multus-hn8xr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:18:09Z is after 2025-08-24T17:21:41Z" Jan 26 11:18:09 crc kubenswrapper[4867]: I0126 11:18:09.099848 4867 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-nbvlt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf285485-1027-4bdc-bdfa-934ef32e7f5e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:18:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:18:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:18:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:18:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kzhnr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kzhnr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T11:18:05Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-nbvlt\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:18:09Z is after 2025-08-24T17:21:41Z" Jan 26 11:18:09 crc kubenswrapper[4867]: I0126 11:18:09.114174 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e36e94ce-bdbb-4b65-b38a-d591d99ec132\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://60d611d38d0a8a4fafcb8c6a4598878531015a3c9a31bee179693c997be0f5ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e7960050fb97fd034b58d207bdcc7aa794be098102ebc9b50f3b011c2079a292\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://f8e9df6d5bbbaa36f6ffe9e36239da17b2a7d5d85edebc9f108ede9bf7b7c0a2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://08f8529b91df2d7dcbf1fac6018657124b3922dfd4ad68bd7cf315594d6e7f81\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4d1617eeb2f19df288b984f761116ae902263d7ccf8406998d0e323fb7d52390\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-26T11:17:50Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0126 11:17:44.107498 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0126 11:17:44.109064 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1825304579/tls.crt::/tmp/serving-cert-1825304579/tls.key\\\\\\\"\\\\nI0126 11:17:50.303460 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0126 11:17:50.308764 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0126 11:17:50.308805 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0126 11:17:50.308906 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0126 11:17:50.308924 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0126 11:17:50.322907 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0126 11:17:50.322946 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 11:17:50.322953 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 11:17:50.322957 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0126 11:17:50.322960 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0126 11:17:50.322963 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0126 11:17:50.322966 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0126 11:17:50.323261 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0126 11:17:50.330430 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T11:17:33Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d41fab03911c1d3b84f4c34da0d1c8e82c8f2691450b5f9663e31ce329bf04b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:33Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b69d7e45a6059b1e01f80cc4efb536069c91cae05c8912d18fd48f25c6ebf51e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b69d7e45a6059b1e01f80cc4efb536069c91cae05c8912d18fd48f25c6ebf51e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T11:17:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-26T11:17:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T11:17:30Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:18:09Z is after 2025-08-24T17:21:41Z" Jan 26 11:18:09 crc kubenswrapper[4867]: I0126 11:18:09.126568 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0d7b1047-65d1-4695-b5a5-95ea6a8c3b54\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://237a29b411629c3650250f62fdec7d2c5412763d6cabb2ee0fe8e5e19e320e15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9a
d6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ef0db1149c70e41b7f44d00d43f883d14fcf635d6f34462dd37c8a550a15a4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://45d3dc673a8d20531ae5077a1196a368cac64f6afd15c4eb8f3910ee9bf51317\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true
,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc2ddda5781094e59bb54ddf724fa4f948a047b17b01f0185ef651d90a66f36f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T11:17:30Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:18:09Z is after 2025-08-24T17:21:41Z" Jan 26 11:18:09 crc kubenswrapper[4867]: I0126 11:18:09.133033 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 26 11:18:09 crc kubenswrapper[4867]: I0126 11:18:09.133115 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:18:09 crc kubenswrapper[4867]: I0126 11:18:09.133129 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:18:09 crc kubenswrapper[4867]: I0126 11:18:09.133154 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:18:09 crc kubenswrapper[4867]: I0126 11:18:09.133168 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:18:09Z","lastTransitionTime":"2026-01-26T11:18:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:18:09 crc kubenswrapper[4867]: I0126 11:18:09.140481 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c572224f36b5a0fc8a86b3e61aa97b96f5bd189d245b31804a11aaa75d04774\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:18:09Z is after 2025-08-24T17:21:41Z" Jan 26 11:18:09 crc kubenswrapper[4867]: I0126 11:18:09.197428 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4fbb2033-c5c9-48d1-971f-252b8982e64c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b260f2517962ffaecebf86c2b72c91486b825ca898f907378e8f8cea16d1db92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\"
:\\\"2026-01-26T11:17:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6843bd212ce4f17d2ae4d53441d56f7be28f8f49c8994458d611159101e193a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2aba77c7c6a2f5450fa60356c3eccf816726f0074d65a1d5805c4278d31157c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/st
atic-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e941db67a8021f5eb4eb6e395c2369d052a4cde25d67eb1885768a064185d8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc9e90f1b0e6c09e4595a7e450325550012dc52c6a6fc4a95d14b37c0f939ce7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e22739a3c06bddfda493f25aca23cf47d71c8bde27a6b4bb505592b42350752\\
\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0e22739a3c06bddfda493f25aca23cf47d71c8bde27a6b4bb505592b42350752\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T11:17:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T11:17:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://80b44caec3547caebca126742ca337ad1851edc3e9474127528a9e5184b5ec23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://80b44caec3547caebca126742ca337ad1851edc3e9474127528a9e5184b5ec23\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T11:17:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T11:17:32Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://83f7cf77acd26e7af095c4a5432efde85eef6dd5e434141cb5d6abd12770968e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\
\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://83f7cf77acd26e7af095c4a5432efde85eef6dd5e434141cb5d6abd12770968e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T11:17:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T11:17:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T11:17:30Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:18:09Z is after 2025-08-24T17:21:41Z" Jan 26 11:18:09 crc kubenswrapper[4867]: I0126 11:18:09.210605 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-g6cth" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"115cad9f-057f-4e63-b408-8fa7a358a191\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ff5997c5ecb7b6a4257e45de25c7b8f3cbe39cc5efe49cf2e2baff5447a947b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4wqzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a97a794991491a7b7ac8c0d20d0fa2f4f36c8ba8
82962b5c203933431d324105\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4wqzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T11:17:52Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-g6cth\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:18:09Z is after 2025-08-24T17:21:41Z" Jan 26 11:18:09 crc kubenswrapper[4867]: I0126 11:18:09.225926 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:48Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:18:09Z is after 2025-08-24T17:21:41Z" Jan 26 11:18:09 crc kubenswrapper[4867]: I0126 11:18:09.235975 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:18:09 crc kubenswrapper[4867]: I0126 11:18:09.236018 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:18:09 crc kubenswrapper[4867]: I0126 11:18:09.236033 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:18:09 crc kubenswrapper[4867]: I0126 11:18:09.236053 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:18:09 crc kubenswrapper[4867]: I0126 11:18:09.236068 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:18:09Z","lastTransitionTime":"2026-01-26T11:18:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 11:18:09 crc kubenswrapper[4867]: I0126 11:18:09.240927 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:48Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:18:09Z is after 2025-08-24T17:21:41Z" Jan 26 11:18:09 crc kubenswrapper[4867]: I0126 11:18:09.253069 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-wmdmh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6ad862fa-4af9-49f7-a629-ebf54a83ca45\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://726513fd6aefd76b58a4c8cf08fa4588aadcdb02e8dbe3bb4f23004f11eb29dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6hvh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T11:17:51Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-wmdmh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:18:09Z is after 2025-08-24T17:21:41Z" Jan 26 11:18:09 crc kubenswrapper[4867]: I0126 11:18:09.271971 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-p8ngn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4a3be637-cf04-4c55-bf72-67fdad83cc44\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:52Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:52Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c0a7bea27cb1d32235548b7f7e457dbecaf0ac88bdd9866302c758680990d4af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cn6f7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9025121ad416b7b6ff2f2eb7ecb3961eccbbae7b5ce4b394161ad99c5901199\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cn6f7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://adb68b135bd42f78556598af17aa21dc7daff1246e821ca51cb683e46974a85a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cn6f7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://30fe31f1d314dace6e6ee0bc31b2127ce6e6db198975c78505b0f6f07aa6f30c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:55Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cn6f7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://99df8b0b5fa7178489716f40f806caff969c5045ab573ae343b222ed80bf0799\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cn6f7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://192f11212d63cdabfda1649589946ca65e5c4a38805c85b94e4277f2e8c7bf57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cn6f7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ccaf33118999fa5bccb1930803a8cea6557462756c37135056ea8a5ab813003\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:18:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cn6f7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b58f62893fb0fc0bd355ef42fb5f15830912cfd3a03b5266cb3c3171dbb8bcb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cn6f7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://388ea9d9185ed0fed9cbb0da08ab9aa6fcf1d81192fe2646cd51b54245c4e34b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://388ea9d9185ed0fed9cbb0da08ab9aa6fcf1d81192fe2646cd51b54245c4e34b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T11:17:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T11:17:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cn6f7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Run
ning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T11:17:52Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-p8ngn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:18:09Z is after 2025-08-24T17:21:41Z" Jan 26 11:18:09 crc kubenswrapper[4867]: I0126 11:18:09.287310 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c572224f36b5a0fc8a86b3e61aa97b96f5bd189d245b31804a11aaa75d04774\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-2
6T11:17:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:18:09Z is after 2025-08-24T17:21:41Z" Jan 26 11:18:09 crc kubenswrapper[4867]: I0126 11:18:09.304803 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://613d61cfb62d18a99623d3e4f89763c04f15484b764f8bb0c7a5e4457a3505d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa8d05ed772e4273e8962df2230bc8d45db5cfed967165716438d58c54b9e24b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:18:09Z is after 2025-08-24T17:21:41Z" Jan 26 11:18:09 crc kubenswrapper[4867]: I0126 11:18:09.323606 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-hn8xr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dc37e5d1-ba44-4a54-ac36-ab7cdef17212\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://519d5416896aca5923540078a8bd13f39a190ad4acda3d0bfb3375a5dbfe6b80\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xrjnj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T11:17:52Z\\\"}}\" for pod \"openshift-multus\"/\"multus-hn8xr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:18:09Z is after 2025-08-24T17:21:41Z" Jan 26 11:18:09 crc kubenswrapper[4867]: I0126 11:18:09.339989 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:18:09 crc 
kubenswrapper[4867]: I0126 11:18:09.340050 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:18:09 crc kubenswrapper[4867]: I0126 11:18:09.340060 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:18:09 crc kubenswrapper[4867]: I0126 11:18:09.340082 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:18:09 crc kubenswrapper[4867]: I0126 11:18:09.340099 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:18:09Z","lastTransitionTime":"2026-01-26T11:18:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 11:18:09 crc kubenswrapper[4867]: I0126 11:18:09.347272 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-nbvlt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf285485-1027-4bdc-bdfa-934ef32e7f5e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:18:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:18:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:18:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:18:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kzhnr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kzhnr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"i
p\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T11:18:05Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-nbvlt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:18:09Z is after 2025-08-24T17:21:41Z" Jan 26 11:18:09 crc kubenswrapper[4867]: I0126 11:18:09.374609 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e36e94ce-bdbb-4b65-b38a-d591d99ec132\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:18:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:18:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://60d611d38d0a8a4fafcb8c6a4598878531015a3c9a31bee179693c997be0f5ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720
243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e7960050fb97fd034b58d207bdcc7aa794be098102ebc9b50f3b011c2079a292\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8e9df6d5bbbaa36f6ffe9e36239da17b2a7d5d85edebc9f108ede9bf7b7c0a2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/sta
tic-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://08f8529b91df2d7dcbf1fac6018657124b3922dfd4ad68bd7cf315594d6e7f81\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4d1617eeb2f19df288b984f761116ae902263d7ccf8406998d0e323fb7d52390\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-26T11:17:50Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0126 11:17:44.107498 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0126 11:17:44.109064 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1825304579/tls.crt::/tmp/serving-cert-1825304579/tls.key\\\\\\\"\\\\nI0126 11:17:50.303460 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0126 11:17:50.308764 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0126 11:17:50.308805 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0126 11:17:50.308906 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0126 11:17:50.308924 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0126 11:17:50.322907 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0126 
11:17:50.322946 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 11:17:50.322953 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 11:17:50.322957 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0126 11:17:50.322960 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0126 11:17:50.322963 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0126 11:17:50.322966 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0126 11:17:50.323261 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0126 11:17:50.330430 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T11:17:33Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d41fab03911c1d3b84f4c34da0d1c8e82c8f2691450b5f9663e31ce329bf04b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"
running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:33Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b69d7e45a6059b1e01f80cc4efb536069c91cae05c8912d18fd48f25c6ebf51e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b69d7e45a6059b1e01f80cc4efb536069c91cae05c8912d18fd48f25c6ebf51e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T11:17:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T11:17:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T11:17:30Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:18:09Z is after 2025-08-24T17:21:41Z" Jan 26 11:18:09 crc kubenswrapper[4867]: I0126 11:18:09.392124 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0d7b1047-65d1-4695-b5a5-95ea6a8c3b54\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://237a29b411629c3650250f62fdec7d2c5412763d6cabb2ee0fe8e5e19e320e15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ef0db1149c70e41b7f44d00d43f883d14fcf635d6f34462dd37c8a550a15a4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://45d3dc673a8d20531ae5077a1196a368cac64f6afd15c4eb8f3910ee9bf51317\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc2ddda5781094e59bb54ddf724fa4f948a047b17b01f0185ef651d90a66f36f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-26T11:17:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T11:17:30Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:18:09Z is after 2025-08-24T17:21:41Z" Jan 26 11:18:09 crc kubenswrapper[4867]: I0126 11:18:09.412981 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-nmdmx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed024510-edc6-4306-b54b-63facba64419\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:18:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:18:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:18:06Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:18:06Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lvcgj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lvcgj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T11:18:06Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-nmdmx\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:18:09Z is after 2025-08-24T17:21:41Z" Jan 26 11:18:09 crc kubenswrapper[4867]: I0126 11:18:09.441813 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:48Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:18:09Z is after 2025-08-24T17:21:41Z" Jan 26 11:18:09 crc kubenswrapper[4867]: I0126 11:18:09.443672 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:18:09 crc kubenswrapper[4867]: I0126 11:18:09.443735 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:18:09 crc kubenswrapper[4867]: I0126 11:18:09.443750 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:18:09 crc kubenswrapper[4867]: I0126 11:18:09.443774 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:18:09 crc kubenswrapper[4867]: I0126 11:18:09.443787 4867 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:18:09Z","lastTransitionTime":"2026-01-26T11:18:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 11:18:09 crc kubenswrapper[4867]: I0126 11:18:09.462170 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:54Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:54Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8b0139861b48347f9916ca780499486eda856ef7c08cfe6c57201daf085bf61\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\
"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:18:09Z is after 2025-08-24T17:21:41Z" Jan 26 11:18:09 crc kubenswrapper[4867]: I0126 11:18:09.477822 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-9fjlf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d0cb57c7-fd32-41c2-b873-a3f017b9f1b1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:18:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:18:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:18:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c7012fb0651d46334c26887a02a5c44a8fc67c2ad3539e5321e16b57071b9a6\\\",\\\"image\\\":\\\"quay.io/openshift-rele
ase-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:18:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhrdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://49541a51fced16e579b4cd10875fe7e660d7674508aba3ea44011642241e6556\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://49541a51fced16e579b4cd10875fe7e660d7674508aba3ea44011642241e6556\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T11:17:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T11:17:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\
\":\\\"kube-api-access-rhrdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://de0898b43ff7857dd40569f0552c8802c9bb5ecdf3cbd23f552dfb402a02044c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://de0898b43ff7857dd40569f0552c8802c9bb5ecdf3cbd23f552dfb402a02044c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T11:17:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T11:17:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhrdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f299de50c4b1a26c4ed82f6f7b17c063e00b8a9de478fa0adafb4abb50195c5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\
\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f299de50c4b1a26c4ed82f6f7b17c063e00b8a9de478fa0adafb4abb50195c5f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T11:17:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T11:17:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhrdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5bf54cdee66861e0419af5e4cabd6e8410b771df717f2e958f5eda8d14c47862\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5bf54cdee66861e0419af5e4cabd6e8410b771df717f2e958f5eda8d14c47862\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T11:17:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T11:17:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run
/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhrdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d802b7e0dce47eace09ec80bf6299fbe57b588891ebab7f34de835f6a7314ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3d802b7e0dce47eace09ec80bf6299fbe57b588891ebab7f34de835f6a7314ac\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T11:17:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T11:17:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhrdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e77fa1081a2d8d26fe07f92196b85dd5b8eb28583ce5489e2f5cbcfc6fff8780\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{
\\\"containerID\\\":\\\"cri-o://e77fa1081a2d8d26fe07f92196b85dd5b8eb28583ce5489e2f5cbcfc6fff8780\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T11:18:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T11:18:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhrdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T11:17:52Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-9fjlf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:18:09Z is after 2025-08-24T17:21:41Z" Jan 26 11:18:09 crc kubenswrapper[4867]: I0126 11:18:09.487486 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-nxkwj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"53ae46dc-30ef-4dfd-b80e-bacd7542634f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a6b647bab472836bbf6aebd01d20d186c5a3fb95f20cc9f44ec837d93c7df617\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b5h2x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T11:17:54Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-nxkwj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:18:09Z is after 2025-08-24T17:21:41Z" Jan 26 11:18:09 crc kubenswrapper[4867]: I0126 11:18:09.531252 4867 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-10 12:27:10.244783517 +0000 UTC Jan 26 11:18:09 crc kubenswrapper[4867]: I0126 11:18:09.547378 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:18:09 crc kubenswrapper[4867]: I0126 11:18:09.547426 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:18:09 crc kubenswrapper[4867]: I0126 11:18:09.547441 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:18:09 crc kubenswrapper[4867]: I0126 11:18:09.547463 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:18:09 crc kubenswrapper[4867]: I0126 11:18:09.547476 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:18:09Z","lastTransitionTime":"2026-01-26T11:18:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:18:09 crc kubenswrapper[4867]: I0126 11:18:09.650742 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:18:09 crc kubenswrapper[4867]: I0126 11:18:09.650806 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:18:09 crc kubenswrapper[4867]: I0126 11:18:09.650820 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:18:09 crc kubenswrapper[4867]: I0126 11:18:09.650843 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:18:09 crc kubenswrapper[4867]: I0126 11:18:09.650859 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:18:09Z","lastTransitionTime":"2026-01-26T11:18:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:18:09 crc kubenswrapper[4867]: I0126 11:18:09.754577 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:18:09 crc kubenswrapper[4867]: I0126 11:18:09.754631 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:18:09 crc kubenswrapper[4867]: I0126 11:18:09.754640 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:18:09 crc kubenswrapper[4867]: I0126 11:18:09.754661 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:18:09 crc kubenswrapper[4867]: I0126 11:18:09.754672 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:18:09Z","lastTransitionTime":"2026-01-26T11:18:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:18:09 crc kubenswrapper[4867]: I0126 11:18:09.858195 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:18:09 crc kubenswrapper[4867]: I0126 11:18:09.858258 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:18:09 crc kubenswrapper[4867]: I0126 11:18:09.858271 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:18:09 crc kubenswrapper[4867]: I0126 11:18:09.858292 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:18:09 crc kubenswrapper[4867]: I0126 11:18:09.858306 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:18:09Z","lastTransitionTime":"2026-01-26T11:18:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:18:09 crc kubenswrapper[4867]: I0126 11:18:09.894002 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e36e94ce-bdbb-4b65-b38a-d591d99ec132\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:18:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:18:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://60d611d38d0a8a4fafcb8c6a4598878531015a3c9a31bee179693c997be0f5ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-d
ir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e7960050fb97fd034b58d207bdcc7aa794be098102ebc9b50f3b011c2079a292\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8e9df6d5bbbaa36f6ffe9e36239da17b2a7d5d85edebc9f108ede9bf7b7c0a2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://08f8529b91df2d7dcbf1fac6018657124b3922dfd4ad68bd7cf315594d6e7f81\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945
c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4d1617eeb2f19df288b984f761116ae902263d7ccf8406998d0e323fb7d52390\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-26T11:17:50Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0126 11:17:44.107498 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0126 11:17:44.109064 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1825304579/tls.crt::/tmp/serving-cert-1825304579/tls.key\\\\\\\"\\\\nI0126 11:17:50.303460 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0126 11:17:50.308764 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0126 11:17:50.308805 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0126 11:17:50.308906 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0126 11:17:50.308924 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0126 11:17:50.322907 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0126 11:17:50.322946 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 11:17:50.322953 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 11:17:50.322957 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0126 11:17:50.322960 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0126 11:17:50.322963 1 secure_serving.go:69] Use of insecure cipher 
'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0126 11:17:50.322966 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0126 11:17:50.323261 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0126 11:17:50.330430 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T11:17:33Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d41fab03911c1d3b84f4c34da0d1c8e82c8f2691450b5f9663e31ce329bf04b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:33Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b69d7e45a6059b1e01f80cc4efb536069c91cae05c8912d18fd48f25c6ebf51e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b33
5e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b69d7e45a6059b1e01f80cc4efb536069c91cae05c8912d18fd48f25c6ebf51e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T11:17:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T11:17:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T11:17:30Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:18:09Z is after 2025-08-24T17:21:41Z" Jan 26 11:18:09 crc kubenswrapper[4867]: I0126 11:18:09.911305 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0d7b1047-65d1-4695-b5a5-95ea6a8c3b54\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://237a29b411629c3650250f62fdec7d2c5412763d6cabb2ee0fe8e5e19e320e15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ef0db1149c70e41b7f44d00d43f883d14fcf635d6f34462dd37c8a550a15a4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://45d3dc673a8d20531ae5077a1196a368cac64f6afd15c4eb8f3910ee9bf51317\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc2ddda5781094e59bb54ddf724fa4f948a047b17b01f0185ef651d90a66f36f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-26T11:17:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T11:17:30Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:18:09Z is after 2025-08-24T17:21:41Z" Jan 26 11:18:09 crc kubenswrapper[4867]: I0126 11:18:09.933862 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c572224f36b5a0fc8a86b3e61aa97b96f5bd189d245b31804a11aaa75d04774\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-26T11:18:09Z is after 2025-08-24T17:21:41Z" Jan 26 11:18:09 crc kubenswrapper[4867]: I0126 11:18:09.956716 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://613d61cfb62d18a99623d3e4f89763c04f15484b764f8bb0c7a5e4457a3505d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://fa8d05ed772e4273e8962df2230bc8d45db5cfed967165716438d58c54b9e24b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:18:09Z is after 2025-08-24T17:21:41Z" Jan 26 11:18:09 crc kubenswrapper[4867]: I0126 11:18:09.960737 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:18:09 crc kubenswrapper[4867]: I0126 11:18:09.960766 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:18:09 crc kubenswrapper[4867]: I0126 11:18:09.960792 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:18:09 crc kubenswrapper[4867]: I0126 11:18:09.960807 4867 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:18:09 crc kubenswrapper[4867]: I0126 11:18:09.960815 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:18:09Z","lastTransitionTime":"2026-01-26T11:18:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 11:18:09 crc kubenswrapper[4867]: I0126 11:18:09.978781 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-hn8xr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dc37e5d1-ba44-4a54-ac36-ab7cdef17212\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://519d5416896aca5923540078a8bd13f39a190ad4acda3d0bfb3375a5dbfe6b80\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"
imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xrjnj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\
\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T11:17:52Z\\\"}}\" for pod \"openshift-multus\"/\"multus-hn8xr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:18:09Z is after 2025-08-24T17:21:41Z" Jan 26 11:18:10 crc kubenswrapper[4867]: I0126 11:18:10.003605 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-nbvlt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf285485-1027-4bdc-bdfa-934ef32e7f5e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:18:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:18:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:18:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:18:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://764c348147bb67a611bc5252c49dfe8f586e6a1a6d6a9e9c6674aabcc3028804\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42
ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:18:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kzhnr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e1bb2e7344b3822e63a68d366f4821de6e131a4cca163ad67cf44e2b83f9ccd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:18:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kzhnr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T11:18:05Z\\\"}}\" for pod 
\"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-nbvlt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:18:09Z is after 2025-08-24T17:21:41Z" Jan 26 11:18:10 crc kubenswrapper[4867]: I0126 11:18:10.022350 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:54Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:54Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8b0139861b48347f9916ca780499486eda856ef7c08cfe6c57201daf085bf61\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\"
:\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:18:10Z is after 2025-08-24T17:21:41Z" Jan 26 11:18:10 crc kubenswrapper[4867]: I0126 11:18:10.045484 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-9fjlf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d0cb57c7-fd32-41c2-b873-a3f017b9f1b1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:18:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:18:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:18:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c7012fb0651d46334c26887a02a5c44a8fc67c2ad3539e5321e16b57071b9a6\\\",\\\"image\\\":\\\"quay.io/openshift-
release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:18:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhrdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://49541a51fced16e579b4cd10875fe7e660d7674508aba3ea44011642241e6556\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://49541a51fced16e579b4cd10875fe7e660d7674508aba3ea44011642241e6556\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T11:17:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T11:17:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"na
me\\\":\\\"kube-api-access-rhrdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://de0898b43ff7857dd40569f0552c8802c9bb5ecdf3cbd23f552dfb402a02044c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://de0898b43ff7857dd40569f0552c8802c9bb5ecdf3cbd23f552dfb402a02044c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T11:17:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T11:17:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhrdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f299de50c4b1a26c4ed82f6f7b17c063e00b8a9de478fa0adafb4abb50195c5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plug
in\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f299de50c4b1a26c4ed82f6f7b17c063e00b8a9de478fa0adafb4abb50195c5f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T11:17:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T11:17:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhrdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5bf54cdee66861e0419af5e4cabd6e8410b771df717f2e958f5eda8d14c47862\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5bf54cdee66861e0419af5e4cabd6e8410b771df717f2e958f5eda8d14c47862\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T11:17:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T11:17:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var
/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhrdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d802b7e0dce47eace09ec80bf6299fbe57b588891ebab7f34de835f6a7314ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3d802b7e0dce47eace09ec80bf6299fbe57b588891ebab7f34de835f6a7314ac\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T11:17:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T11:17:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhrdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e77fa1081a2d8d26fe07f92196b85dd5b8eb28583ce5489e2f5cbcfc6fff8780\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\
\":{\\\"containerID\\\":\\\"cri-o://e77fa1081a2d8d26fe07f92196b85dd5b8eb28583ce5489e2f5cbcfc6fff8780\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T11:18:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T11:18:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhrdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T11:17:52Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-9fjlf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:18:10Z is after 2025-08-24T17:21:41Z" Jan 26 11:18:10 crc kubenswrapper[4867]: I0126 11:18:10.063786 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:18:10 crc kubenswrapper[4867]: I0126 11:18:10.063850 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:18:10 crc kubenswrapper[4867]: I0126 11:18:10.063864 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:18:10 crc kubenswrapper[4867]: I0126 11:18:10.063889 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:18:10 crc kubenswrapper[4867]: I0126 11:18:10.063904 4867 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:18:10Z","lastTransitionTime":"2026-01-26T11:18:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 11:18:10 crc kubenswrapper[4867]: I0126 11:18:10.064801 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-nxkwj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"53ae46dc-30ef-4dfd-b80e-bacd7542634f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a6b647bab472836bbf6aebd01d20d186c5a3fb95f20cc9f44ec837d93c7df617\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":tr
ue,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b5h2x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T11:17:54Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-nxkwj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:18:10Z is after 2025-08-24T17:21:41Z" Jan 26 11:18:10 crc kubenswrapper[4867]: I0126 11:18:10.079746 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-nmdmx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed024510-edc6-4306-b54b-63facba64419\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:18:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:18:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:18:06Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:18:06Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lvcgj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lvcgj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T11:18:06Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-nmdmx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:18:10Z is after 2025-08-24T17:21:41Z" Jan 26 11:18:10 crc 
kubenswrapper[4867]: I0126 11:18:10.105379 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:48Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:18:10Z is after 2025-08-24T17:21:41Z" Jan 26 11:18:10 crc kubenswrapper[4867]: I0126 11:18:10.119366 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-g6cth" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"115cad9f-057f-4e63-b408-8fa7a358a191\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ff5997c5ecb7b6a4257e45de25c7b8f3cbe39cc5efe49cf2e2baff5447a947b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4wqzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a97a794991491a7b7ac8c0d20d0fa2f4f36c8ba8
82962b5c203933431d324105\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4wqzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T11:17:52Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-g6cth\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:18:10Z is after 2025-08-24T17:21:41Z" Jan 26 11:18:10 crc kubenswrapper[4867]: I0126 11:18:10.139805 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4fbb2033-c5c9-48d1-971f-252b8982e64c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b260f2517962ffaecebf86c2b72c91486b825ca898f907378e8f8cea16d1db92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6843bd212ce4f17d2ae4d53441d56f7be28f8f49c8994458d611159101e193a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2aba77c7c6a2f5450fa60356c3eccf816726f0074d65a1d5805c4278d31157c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e941db67a8021f5eb4eb6e395c2369d052a4cde25d67eb1885768a064185d8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc9e90f1b0e6c09e4595a7e450325550012dc52c6a6fc4a95d14b37c0f939ce7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e22739a3c06bddfda493f25aca23cf47d71c8bde27a6b4bb505592b42350752\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0e22739a3c06bddfda493f25aca23cf47d71c8bde27a6b4bb505592b42350752\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-26T11:17:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T11:17:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://80b44caec3547caebca126742ca337ad1851edc3e9474127528a9e5184b5ec23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://80b44caec3547caebca126742ca337ad1851edc3e9474127528a9e5184b5ec23\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T11:17:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T11:17:32Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://83f7cf77acd26e7af095c4a5432efde85eef6dd5e434141cb5d6abd12770968e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://83f7cf77acd26e7af095c4a5432efde85eef6dd5e434141cb5d6abd12770968e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T11:17:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T11:17:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T11:17:30Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:18:10Z is after 2025-08-24T17:21:41Z" Jan 26 11:18:10 crc kubenswrapper[4867]: I0126 11:18:10.154951 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:48Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:18:10Z is after 2025-08-24T17:21:41Z" Jan 26 11:18:10 crc kubenswrapper[4867]: I0126 11:18:10.166189 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:18:10 crc kubenswrapper[4867]: I0126 11:18:10.166516 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:18:10 crc kubenswrapper[4867]: I0126 11:18:10.166609 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:18:10 crc kubenswrapper[4867]: I0126 
11:18:10.166703 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:18:10 crc kubenswrapper[4867]: I0126 11:18:10.166767 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:18:10Z","lastTransitionTime":"2026-01-26T11:18:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 11:18:10 crc kubenswrapper[4867]: I0126 11:18:10.168639 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-wmdmh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6ad862fa-4af9-49f7-a629-ebf54a83ca45\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://726513fd6aefd76b58a4c8cf08fa4588aadcdb02e8dbe3bb4f23004f11eb29dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888c
f2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6hvh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T11:17:51Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-wmdmh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:18:10Z is after 2025-08-24T17:21:41Z" Jan 26 11:18:10 crc kubenswrapper[4867]: I0126 11:18:10.194099 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-p8ngn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4a3be637-cf04-4c55-bf72-67fdad83cc44\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:52Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:52Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c0a7bea27cb1d32235548b7f7e457dbecaf0ac88bdd9866302c758680990d4af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cn6f7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9025121ad416b7b6ff2f2eb7ecb3961eccbbae7b5ce4b394161ad99c5901199\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cn6f7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://adb68b135bd42f78556598af17aa21dc7daff1246e821ca51cb683e46974a85a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cn6f7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://30fe31f1d314dace6e6ee0bc31b2127ce6e6db198975c78505b0f6f07aa6f30c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:55Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cn6f7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://99df8b0b5fa7178489716f40f806caff969c5045ab573ae343b222ed80bf0799\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cn6f7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://192f11212d63cdabfda1649589946ca65e5c4a38805c85b94e4277f2e8c7bf57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cn6f7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ccaf33118999fa5bccb1930803a8cea6557462756c37135056ea8a5ab813003\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:18:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cn6f7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b58f62893fb0fc0bd355ef42fb5f15830912cfd3a03b5266cb3c3171dbb8bcb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cn6f7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://388ea9d9185ed0fed9cbb0da08ab9aa6fcf1d81192fe2646cd51b54245c4e34b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://388ea9d9185ed0fed9cbb0da08ab9aa6fcf1d81192fe2646cd51b54245c4e34b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T11:17:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T11:17:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cn6f7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Run
ning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T11:17:52Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-p8ngn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:18:10Z is after 2025-08-24T17:21:41Z" Jan 26 11:18:10 crc kubenswrapper[4867]: I0126 11:18:10.205933 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:48Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:18:10Z is after 2025-08-24T17:21:41Z" Jan 26 11:18:10 crc kubenswrapper[4867]: I0126 11:18:10.268941 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:18:10 crc kubenswrapper[4867]: I0126 11:18:10.268987 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:18:10 crc kubenswrapper[4867]: I0126 11:18:10.268996 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:18:10 crc kubenswrapper[4867]: I0126 11:18:10.269011 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:18:10 crc kubenswrapper[4867]: I0126 11:18:10.269023 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:18:10Z","lastTransitionTime":"2026-01-26T11:18:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 11:18:10 crc kubenswrapper[4867]: I0126 11:18:10.371355 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:18:10 crc kubenswrapper[4867]: I0126 11:18:10.371414 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:18:10 crc kubenswrapper[4867]: I0126 11:18:10.371432 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:18:10 crc kubenswrapper[4867]: I0126 11:18:10.371451 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:18:10 crc kubenswrapper[4867]: I0126 11:18:10.371465 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:18:10Z","lastTransitionTime":"2026-01-26T11:18:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:18:10 crc kubenswrapper[4867]: I0126 11:18:10.474877 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:18:10 crc kubenswrapper[4867]: I0126 11:18:10.474927 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:18:10 crc kubenswrapper[4867]: I0126 11:18:10.474937 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:18:10 crc kubenswrapper[4867]: I0126 11:18:10.474953 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:18:10 crc kubenswrapper[4867]: I0126 11:18:10.474965 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:18:10Z","lastTransitionTime":"2026-01-26T11:18:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:18:10 crc kubenswrapper[4867]: I0126 11:18:10.531745 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ed024510-edc6-4306-b54b-63facba64419-metrics-certs\") pod \"network-metrics-daemon-nmdmx\" (UID: \"ed024510-edc6-4306-b54b-63facba64419\") " pod="openshift-multus/network-metrics-daemon-nmdmx" Jan 26 11:18:10 crc kubenswrapper[4867]: I0126 11:18:10.531796 4867 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-28 01:40:00.105327391 +0000 UTC Jan 26 11:18:10 crc kubenswrapper[4867]: E0126 11:18:10.531930 4867 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 26 11:18:10 crc kubenswrapper[4867]: E0126 11:18:10.532146 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ed024510-edc6-4306-b54b-63facba64419-metrics-certs podName:ed024510-edc6-4306-b54b-63facba64419 nodeName:}" failed. No retries permitted until 2026-01-26 11:18:14.532121121 +0000 UTC m=+44.230696091 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/ed024510-edc6-4306-b54b-63facba64419-metrics-certs") pod "network-metrics-daemon-nmdmx" (UID: "ed024510-edc6-4306-b54b-63facba64419") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 26 11:18:10 crc kubenswrapper[4867]: I0126 11:18:10.563708 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 11:18:10 crc kubenswrapper[4867]: I0126 11:18:10.563736 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 11:18:10 crc kubenswrapper[4867]: I0126 11:18:10.563774 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 11:18:10 crc kubenswrapper[4867]: E0126 11:18:10.563941 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 11:18:10 crc kubenswrapper[4867]: E0126 11:18:10.564087 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 11:18:10 crc kubenswrapper[4867]: E0126 11:18:10.564192 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 11:18:10 crc kubenswrapper[4867]: I0126 11:18:10.564456 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-nmdmx" Jan 26 11:18:10 crc kubenswrapper[4867]: E0126 11:18:10.564697 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-nmdmx" podUID="ed024510-edc6-4306-b54b-63facba64419" Jan 26 11:18:10 crc kubenswrapper[4867]: I0126 11:18:10.577161 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:18:10 crc kubenswrapper[4867]: I0126 11:18:10.577193 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:18:10 crc kubenswrapper[4867]: I0126 11:18:10.577202 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:18:10 crc kubenswrapper[4867]: I0126 11:18:10.577245 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:18:10 crc kubenswrapper[4867]: I0126 11:18:10.577262 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:18:10Z","lastTransitionTime":"2026-01-26T11:18:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:18:10 crc kubenswrapper[4867]: I0126 11:18:10.580065 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:48Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:18:10Z is after 2025-08-24T17:21:41Z" Jan 26 11:18:10 crc kubenswrapper[4867]: I0126 11:18:10.593597 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:54Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:54Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8b0139861b48347f9916ca780499486eda856ef7c08cfe6c57201daf085bf61\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-26T11:18:10Z is after 2025-08-24T17:21:41Z" Jan 26 11:18:10 crc kubenswrapper[4867]: I0126 11:18:10.608966 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-9fjlf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d0cb57c7-fd32-41c2-b873-a3f017b9f1b1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:18:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:18:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:18:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c7012fb0651d46334c26887a02a5c44a8fc67c2ad3539e5321e16b57071b9a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:18:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-a
ccess-rhrdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://49541a51fced16e579b4cd10875fe7e660d7674508aba3ea44011642241e6556\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://49541a51fced16e579b4cd10875fe7e660d7674508aba3ea44011642241e6556\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T11:17:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T11:17:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhrdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://de0898b43ff7857dd40569f0552c8802c9bb5ecdf3cbd23f552dfb402a02044c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"
started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://de0898b43ff7857dd40569f0552c8802c9bb5ecdf3cbd23f552dfb402a02044c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T11:17:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T11:17:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhrdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f299de50c4b1a26c4ed82f6f7b17c063e00b8a9de478fa0adafb4abb50195c5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f299de50c4b1a26c4ed82f6f7b17c063e00b8a9de478fa0adafb4abb50195c5f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T11:17:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T11:17:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\
\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhrdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5bf54cdee66861e0419af5e4cabd6e8410b771df717f2e958f5eda8d14c47862\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5bf54cdee66861e0419af5e4cabd6e8410b771df717f2e958f5eda8d14c47862\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T11:17:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T11:17:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhrdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d802b7e0dce47eace09ec80bf6299fbe57b588891ebab7f34de835f6a7314ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabout
s-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3d802b7e0dce47eace09ec80bf6299fbe57b588891ebab7f34de835f6a7314ac\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T11:17:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T11:17:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhrdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e77fa1081a2d8d26fe07f92196b85dd5b8eb28583ce5489e2f5cbcfc6fff8780\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e77fa1081a2d8d26fe07f92196b85dd5b8eb28583ce5489e2f5cbcfc6fff8780\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T11:18:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T11:18:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhrdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T11:17:52Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-9fjlf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:18:10Z is after 2025-08-24T17:21:41Z" Jan 26 11:18:10 crc kubenswrapper[4867]: I0126 11:18:10.619754 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-nxkwj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"53ae46dc-30ef-4dfd-b80e-bacd7542634f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a6b647bab472836bbf6aebd01d20d186c5a3fb95f20cc9f44ec837d93c7df617\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"image
ID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b5h2x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T11:17:54Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-nxkwj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:18:10Z is after 2025-08-24T17:21:41Z" Jan 26 11:18:10 crc kubenswrapper[4867]: I0126 11:18:10.630795 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-nmdmx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed024510-edc6-4306-b54b-63facba64419\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:18:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:18:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:18:06Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:18:06Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lvcgj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lvcgj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T11:18:06Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-nmdmx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:18:10Z is after 2025-08-24T17:21:41Z" Jan 26 11:18:10 crc 
kubenswrapper[4867]: I0126 11:18:10.650130 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4fbb2033-c5c9-48d1-971f-252b8982e64c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b260f2517962ffaecebf86c2b72c91486b825ca898f907378e8f8cea16d1db92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}
]},{\\\"containerID\\\":\\\"cri-o://6843bd212ce4f17d2ae4d53441d56f7be28f8f49c8994458d611159101e193a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2aba77c7c6a2f5450fa60356c3eccf816726f0074d65a1d5805c4278d31157c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e941db67a8021f5eb4eb6e395c2369d052a4cde25d67eb1885768a064185d8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779
036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc9e90f1b0e6c09e4595a7e450325550012dc52c6a6fc4a95d14b37c0f939ce7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e22739a3c06bddfda493f25aca23cf47d71c8bde27a6b4bb505592b42350752\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"sta
te\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0e22739a3c06bddfda493f25aca23cf47d71c8bde27a6b4bb505592b42350752\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T11:17:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T11:17:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://80b44caec3547caebca126742ca337ad1851edc3e9474127528a9e5184b5ec23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://80b44caec3547caebca126742ca337ad1851edc3e9474127528a9e5184b5ec23\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T11:17:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T11:17:32Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://83f7cf77acd26e7af095c4a5432efde85eef6dd5e434141cb5d6abd12770968e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://83f7cf77acd26e7af095c4a5432efde85eef6dd5e434141cb5d6abd12770968e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T11:17:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T11:17:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T11:17:30Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:18:10Z is after 2025-08-24T17:21:41Z" Jan 26 11:18:10 crc kubenswrapper[4867]: I0126 11:18:10.662909 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-g6cth" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"115cad9f-057f-4e63-b408-8fa7a358a191\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ff5997c5ecb7b6a4257e45de25c7b8f3cbe39cc5efe49cf2e2baff5447a947b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4wqzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a97a794991491a7b7ac8c0d20d0fa2f4f36c8ba8
82962b5c203933431d324105\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4wqzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T11:17:52Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-g6cth\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:18:10Z is after 2025-08-24T17:21:41Z" Jan 26 11:18:10 crc kubenswrapper[4867]: I0126 11:18:10.675385 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:48Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:18:10Z is after 2025-08-24T17:21:41Z" Jan 26 11:18:10 crc kubenswrapper[4867]: I0126 11:18:10.679948 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:18:10 crc kubenswrapper[4867]: I0126 11:18:10.680029 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:18:10 crc kubenswrapper[4867]: I0126 11:18:10.680042 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:18:10 crc kubenswrapper[4867]: I0126 11:18:10.680063 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:18:10 crc kubenswrapper[4867]: I0126 11:18:10.680077 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:18:10Z","lastTransitionTime":"2026-01-26T11:18:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 11:18:10 crc kubenswrapper[4867]: I0126 11:18:10.691254 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:48Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:18:10Z is after 2025-08-24T17:21:41Z" Jan 26 11:18:10 crc kubenswrapper[4867]: I0126 11:18:10.702242 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-wmdmh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6ad862fa-4af9-49f7-a629-ebf54a83ca45\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://726513fd6aefd76b58a4c8cf08fa4588aadcdb02e8dbe3bb4f23004f11eb29dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6hvh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T11:17:51Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-wmdmh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:18:10Z is after 2025-08-24T17:21:41Z" Jan 26 11:18:10 crc kubenswrapper[4867]: I0126 11:18:10.735486 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-p8ngn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4a3be637-cf04-4c55-bf72-67fdad83cc44\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:52Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:52Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c0a7bea27cb1d32235548b7f7e457dbecaf0ac88bdd9866302c758680990d4af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cn6f7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9025121ad416b7b6ff2f2eb7ecb3961eccbbae7b5ce4b394161ad99c5901199\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cn6f7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://adb68b135bd42f78556598af17aa21dc7daff1246e821ca51cb683e46974a85a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cn6f7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://30fe31f1d314dace6e6ee0bc31b2127ce6e6db198975c78505b0f6f07aa6f30c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:55Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cn6f7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://99df8b0b5fa7178489716f40f806caff969c5045ab573ae343b222ed80bf0799\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cn6f7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://192f11212d63cdabfda1649589946ca65e5c4a38805c85b94e4277f2e8c7bf57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cn6f7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ccaf33118999fa5bccb1930803a8cea6557462756c37135056ea8a5ab813003\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:18:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cn6f7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b58f62893fb0fc0bd355ef42fb5f15830912cfd3a03b5266cb3c3171dbb8bcb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cn6f7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://388ea9d9185ed0fed9cbb0da08ab9aa6fcf1d81192fe2646cd51b54245c4e34b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://388ea9d9185ed0fed9cbb0da08ab9aa6fcf1d81192fe2646cd51b54245c4e34b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T11:17:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T11:17:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cn6f7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Run
ning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T11:17:52Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-p8ngn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:18:10Z is after 2025-08-24T17:21:41Z" Jan 26 11:18:10 crc kubenswrapper[4867]: I0126 11:18:10.763452 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-hn8xr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dc37e5d1-ba44-4a54-ac36-ab7cdef17212\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://519d5416896aca5923540078a8bd13f39a190ad4acda3d0bfb3375a5dbfe6b80\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e
eaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xrjnj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":
\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T11:17:52Z\\\"}}\" for pod \"openshift-multus\"/\"multus-hn8xr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:18:10Z is after 2025-08-24T17:21:41Z" Jan 26 11:18:10 crc kubenswrapper[4867]: I0126 11:18:10.781914 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:18:10 crc kubenswrapper[4867]: I0126 11:18:10.781975 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:18:10 crc kubenswrapper[4867]: I0126 11:18:10.781987 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:18:10 crc kubenswrapper[4867]: I0126 11:18:10.782008 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:18:10 crc kubenswrapper[4867]: I0126 11:18:10.782020 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:18:10Z","lastTransitionTime":"2026-01-26T11:18:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:18:10 crc kubenswrapper[4867]: I0126 11:18:10.784695 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-nbvlt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf285485-1027-4bdc-bdfa-934ef32e7f5e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:18:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:18:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:18:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:18:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://764c348147bb67a611bc5252c49dfe8f586e6a1a6d6a9e9c6674aabcc3028804\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:18:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled
\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kzhnr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e1bb2e7344b3822e63a68d366f4821de6e131a4cca163ad67cf44e2b83f9ccd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:18:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kzhnr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T11:18:05Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-nbvlt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:18:10Z is after 2025-08-24T17:21:41Z" Jan 26 11:18:10 crc kubenswrapper[4867]: I0126 11:18:10.811594 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to 
patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e36e94ce-bdbb-4b65-b38a-d591d99ec132\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:18:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:18:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://60d611d38d0a8a4fafcb8c6a4598878531015a3c9a31bee179693c997be0f5ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e7960050fb97fd034b58d207bdcc7aa794be098102ebc9b50f3b011c2079a292\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f781
4a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8e9df6d5bbbaa36f6ffe9e36239da17b2a7d5d85edebc9f108ede9bf7b7c0a2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://08f8529b91df2d7dcbf1fac6018657124b3922dfd4ad68bd7cf315594d6e7f81\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4d1617eeb2f19df288b984f761116ae902263d7ccf8406998d0e323fb7d52390\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-26T
11:17:50Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0126 11:17:44.107498 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0126 11:17:44.109064 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1825304579/tls.crt::/tmp/serving-cert-1825304579/tls.key\\\\\\\"\\\\nI0126 11:17:50.303460 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0126 11:17:50.308764 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0126 11:17:50.308805 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0126 11:17:50.308906 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0126 11:17:50.308924 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0126 11:17:50.322907 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0126 11:17:50.322946 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 11:17:50.322953 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 11:17:50.322957 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0126 11:17:50.322960 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0126 11:17:50.322963 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0126 11:17:50.322966 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0126 11:17:50.323261 1 genericapiserver.go:533] 
MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0126 11:17:50.330430 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T11:17:33Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d41fab03911c1d3b84f4c34da0d1c8e82c8f2691450b5f9663e31ce329bf04b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:33Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b69d7e45a6059b1e01f80cc4efb536069c91cae05c8912d18fd48f25c6ebf51e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b
69d7e45a6059b1e01f80cc4efb536069c91cae05c8912d18fd48f25c6ebf51e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T11:17:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T11:17:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T11:17:30Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:18:10Z is after 2025-08-24T17:21:41Z" Jan 26 11:18:10 crc kubenswrapper[4867]: I0126 11:18:10.825762 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0d7b1047-65d1-4695-b5a5-95ea6a8c3b54\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://237a29b411629c3650250f62fdec7d2c5412763d6cabb2ee0fe8e5e19e320e15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ef0db1149c70e41b7f44d00d43f883d14fcf635d6f34462dd37c8a550a15a4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://45d3dc673a8d20531ae5077a1196a368cac64f6afd15c4eb8f3910ee9bf51317\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc2ddda5781094e59bb54ddf724fa4f948a047b17b01f0185ef651d90a66f36f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-26T11:17:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T11:17:30Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:18:10Z is after 2025-08-24T17:21:41Z" Jan 26 11:18:10 crc kubenswrapper[4867]: I0126 11:18:10.838810 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c572224f36b5a0fc8a86b3e61aa97b96f5bd189d245b31804a11aaa75d04774\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-26T11:18:10Z is after 2025-08-24T17:21:41Z" Jan 26 11:18:10 crc kubenswrapper[4867]: I0126 11:18:10.852092 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://613d61cfb62d18a99623d3e4f89763c04f15484b764f8bb0c7a5e4457a3505d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://fa8d05ed772e4273e8962df2230bc8d45db5cfed967165716438d58c54b9e24b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:18:10Z is after 2025-08-24T17:21:41Z" Jan 26 11:18:10 crc kubenswrapper[4867]: I0126 11:18:10.885095 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:18:10 crc kubenswrapper[4867]: I0126 11:18:10.885151 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:18:10 crc kubenswrapper[4867]: I0126 11:18:10.885161 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:18:10 crc kubenswrapper[4867]: I0126 11:18:10.885180 4867 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:18:10 crc kubenswrapper[4867]: I0126 11:18:10.885192 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:18:10Z","lastTransitionTime":"2026-01-26T11:18:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 11:18:10 crc kubenswrapper[4867]: I0126 11:18:10.988798 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:18:10 crc kubenswrapper[4867]: I0126 11:18:10.988882 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:18:10 crc kubenswrapper[4867]: I0126 11:18:10.988906 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:18:10 crc kubenswrapper[4867]: I0126 11:18:10.988935 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:18:10 crc kubenswrapper[4867]: I0126 11:18:10.988955 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:18:10Z","lastTransitionTime":"2026-01-26T11:18:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:18:11 crc kubenswrapper[4867]: I0126 11:18:11.094801 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:18:11 crc kubenswrapper[4867]: I0126 11:18:11.094899 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:18:11 crc kubenswrapper[4867]: I0126 11:18:11.094924 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:18:11 crc kubenswrapper[4867]: I0126 11:18:11.094952 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:18:11 crc kubenswrapper[4867]: I0126 11:18:11.094971 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:18:11Z","lastTransitionTime":"2026-01-26T11:18:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:18:11 crc kubenswrapper[4867]: I0126 11:18:11.197885 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:18:11 crc kubenswrapper[4867]: I0126 11:18:11.197964 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:18:11 crc kubenswrapper[4867]: I0126 11:18:11.197983 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:18:11 crc kubenswrapper[4867]: I0126 11:18:11.198013 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:18:11 crc kubenswrapper[4867]: I0126 11:18:11.198035 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:18:11Z","lastTransitionTime":"2026-01-26T11:18:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:18:11 crc kubenswrapper[4867]: I0126 11:18:11.325478 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:18:11 crc kubenswrapper[4867]: I0126 11:18:11.325576 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:18:11 crc kubenswrapper[4867]: I0126 11:18:11.325597 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:18:11 crc kubenswrapper[4867]: I0126 11:18:11.325632 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:18:11 crc kubenswrapper[4867]: I0126 11:18:11.325653 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:18:11Z","lastTransitionTime":"2026-01-26T11:18:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:18:11 crc kubenswrapper[4867]: I0126 11:18:11.428546 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:18:11 crc kubenswrapper[4867]: I0126 11:18:11.428612 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:18:11 crc kubenswrapper[4867]: I0126 11:18:11.428626 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:18:11 crc kubenswrapper[4867]: I0126 11:18:11.428652 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:18:11 crc kubenswrapper[4867]: I0126 11:18:11.428668 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:18:11Z","lastTransitionTime":"2026-01-26T11:18:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:18:11 crc kubenswrapper[4867]: I0126 11:18:11.531420 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:18:11 crc kubenswrapper[4867]: I0126 11:18:11.531491 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:18:11 crc kubenswrapper[4867]: I0126 11:18:11.531507 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:18:11 crc kubenswrapper[4867]: I0126 11:18:11.531534 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:18:11 crc kubenswrapper[4867]: I0126 11:18:11.531552 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:18:11Z","lastTransitionTime":"2026-01-26T11:18:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:18:11 crc kubenswrapper[4867]: I0126 11:18:11.532281 4867 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-26 00:26:42.118143788 +0000 UTC Jan 26 11:18:11 crc kubenswrapper[4867]: I0126 11:18:11.635394 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:18:11 crc kubenswrapper[4867]: I0126 11:18:11.635446 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:18:11 crc kubenswrapper[4867]: I0126 11:18:11.635463 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:18:11 crc kubenswrapper[4867]: I0126 11:18:11.635485 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:18:11 crc kubenswrapper[4867]: I0126 11:18:11.635502 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:18:11Z","lastTransitionTime":"2026-01-26T11:18:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:18:11 crc kubenswrapper[4867]: I0126 11:18:11.738654 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:18:11 crc kubenswrapper[4867]: I0126 11:18:11.738695 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:18:11 crc kubenswrapper[4867]: I0126 11:18:11.738712 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:18:11 crc kubenswrapper[4867]: I0126 11:18:11.738733 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:18:11 crc kubenswrapper[4867]: I0126 11:18:11.738751 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:18:11Z","lastTransitionTime":"2026-01-26T11:18:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:18:11 crc kubenswrapper[4867]: I0126 11:18:11.841330 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:18:11 crc kubenswrapper[4867]: I0126 11:18:11.841356 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:18:11 crc kubenswrapper[4867]: I0126 11:18:11.841365 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:18:11 crc kubenswrapper[4867]: I0126 11:18:11.841381 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:18:11 crc kubenswrapper[4867]: I0126 11:18:11.841390 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:18:11Z","lastTransitionTime":"2026-01-26T11:18:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:18:11 crc kubenswrapper[4867]: I0126 11:18:11.944311 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:18:11 crc kubenswrapper[4867]: I0126 11:18:11.944389 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:18:11 crc kubenswrapper[4867]: I0126 11:18:11.944424 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:18:11 crc kubenswrapper[4867]: I0126 11:18:11.944459 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:18:11 crc kubenswrapper[4867]: I0126 11:18:11.944481 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:18:11Z","lastTransitionTime":"2026-01-26T11:18:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:18:12 crc kubenswrapper[4867]: I0126 11:18:12.047286 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:18:12 crc kubenswrapper[4867]: I0126 11:18:12.047372 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:18:12 crc kubenswrapper[4867]: I0126 11:18:12.047384 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:18:12 crc kubenswrapper[4867]: I0126 11:18:12.047401 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:18:12 crc kubenswrapper[4867]: I0126 11:18:12.047414 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:18:12Z","lastTransitionTime":"2026-01-26T11:18:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:18:12 crc kubenswrapper[4867]: I0126 11:18:12.151396 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:18:12 crc kubenswrapper[4867]: I0126 11:18:12.151443 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:18:12 crc kubenswrapper[4867]: I0126 11:18:12.151455 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:18:12 crc kubenswrapper[4867]: I0126 11:18:12.151474 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:18:12 crc kubenswrapper[4867]: I0126 11:18:12.151499 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:18:12Z","lastTransitionTime":"2026-01-26T11:18:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:18:12 crc kubenswrapper[4867]: I0126 11:18:12.254281 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:18:12 crc kubenswrapper[4867]: I0126 11:18:12.255054 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:18:12 crc kubenswrapper[4867]: I0126 11:18:12.255070 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:18:12 crc kubenswrapper[4867]: I0126 11:18:12.255085 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:18:12 crc kubenswrapper[4867]: I0126 11:18:12.255865 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:18:12Z","lastTransitionTime":"2026-01-26T11:18:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:18:12 crc kubenswrapper[4867]: I0126 11:18:12.360183 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:18:12 crc kubenswrapper[4867]: I0126 11:18:12.360268 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:18:12 crc kubenswrapper[4867]: I0126 11:18:12.360279 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:18:12 crc kubenswrapper[4867]: I0126 11:18:12.360298 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:18:12 crc kubenswrapper[4867]: I0126 11:18:12.360309 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:18:12Z","lastTransitionTime":"2026-01-26T11:18:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:18:12 crc kubenswrapper[4867]: I0126 11:18:12.463869 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:18:12 crc kubenswrapper[4867]: I0126 11:18:12.463970 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:18:12 crc kubenswrapper[4867]: I0126 11:18:12.464000 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:18:12 crc kubenswrapper[4867]: I0126 11:18:12.464033 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:18:12 crc kubenswrapper[4867]: I0126 11:18:12.464059 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:18:12Z","lastTransitionTime":"2026-01-26T11:18:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 11:18:12 crc kubenswrapper[4867]: I0126 11:18:12.533481 4867 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-19 20:22:16.873440498 +0000 UTC Jan 26 11:18:12 crc kubenswrapper[4867]: I0126 11:18:12.563921 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 11:18:12 crc kubenswrapper[4867]: I0126 11:18:12.563999 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-nmdmx" Jan 26 11:18:12 crc kubenswrapper[4867]: I0126 11:18:12.564001 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 11:18:12 crc kubenswrapper[4867]: E0126 11:18:12.564151 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 11:18:12 crc kubenswrapper[4867]: I0126 11:18:12.564356 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 11:18:12 crc kubenswrapper[4867]: E0126 11:18:12.564529 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 11:18:12 crc kubenswrapper[4867]: E0126 11:18:12.564661 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 11:18:12 crc kubenswrapper[4867]: E0126 11:18:12.564825 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-nmdmx" podUID="ed024510-edc6-4306-b54b-63facba64419" Jan 26 11:18:12 crc kubenswrapper[4867]: I0126 11:18:12.567162 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:18:12 crc kubenswrapper[4867]: I0126 11:18:12.567264 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:18:12 crc kubenswrapper[4867]: I0126 11:18:12.567286 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:18:12 crc kubenswrapper[4867]: I0126 11:18:12.567316 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:18:12 crc kubenswrapper[4867]: I0126 11:18:12.567335 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:18:12Z","lastTransitionTime":"2026-01-26T11:18:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:18:12 crc kubenswrapper[4867]: I0126 11:18:12.670720 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:18:12 crc kubenswrapper[4867]: I0126 11:18:12.670784 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:18:12 crc kubenswrapper[4867]: I0126 11:18:12.670797 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:18:12 crc kubenswrapper[4867]: I0126 11:18:12.670822 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:18:12 crc kubenswrapper[4867]: I0126 11:18:12.670838 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:18:12Z","lastTransitionTime":"2026-01-26T11:18:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:18:12 crc kubenswrapper[4867]: I0126 11:18:12.774317 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:18:12 crc kubenswrapper[4867]: I0126 11:18:12.774375 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:18:12 crc kubenswrapper[4867]: I0126 11:18:12.774385 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:18:12 crc kubenswrapper[4867]: I0126 11:18:12.774426 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:18:12 crc kubenswrapper[4867]: I0126 11:18:12.774438 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:18:12Z","lastTransitionTime":"2026-01-26T11:18:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:18:12 crc kubenswrapper[4867]: I0126 11:18:12.877214 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:18:12 crc kubenswrapper[4867]: I0126 11:18:12.877282 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:18:12 crc kubenswrapper[4867]: I0126 11:18:12.877292 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:18:12 crc kubenswrapper[4867]: I0126 11:18:12.877335 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:18:12 crc kubenswrapper[4867]: I0126 11:18:12.877348 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:18:12Z","lastTransitionTime":"2026-01-26T11:18:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:18:12 crc kubenswrapper[4867]: I0126 11:18:12.980497 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:18:12 crc kubenswrapper[4867]: I0126 11:18:12.980634 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:18:12 crc kubenswrapper[4867]: I0126 11:18:12.980664 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:18:12 crc kubenswrapper[4867]: I0126 11:18:12.980697 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:18:12 crc kubenswrapper[4867]: I0126 11:18:12.980716 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:18:12Z","lastTransitionTime":"2026-01-26T11:18:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:18:13 crc kubenswrapper[4867]: I0126 11:18:13.084993 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:18:13 crc kubenswrapper[4867]: I0126 11:18:13.085123 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:18:13 crc kubenswrapper[4867]: I0126 11:18:13.085153 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:18:13 crc kubenswrapper[4867]: I0126 11:18:13.085185 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:18:13 crc kubenswrapper[4867]: I0126 11:18:13.085206 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:18:13Z","lastTransitionTime":"2026-01-26T11:18:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:18:13 crc kubenswrapper[4867]: I0126 11:18:13.193648 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:18:13 crc kubenswrapper[4867]: I0126 11:18:13.193727 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:18:13 crc kubenswrapper[4867]: I0126 11:18:13.193748 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:18:13 crc kubenswrapper[4867]: I0126 11:18:13.193780 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:18:13 crc kubenswrapper[4867]: I0126 11:18:13.193803 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:18:13Z","lastTransitionTime":"2026-01-26T11:18:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:18:13 crc kubenswrapper[4867]: I0126 11:18:13.296959 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:18:13 crc kubenswrapper[4867]: I0126 11:18:13.297564 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:18:13 crc kubenswrapper[4867]: I0126 11:18:13.297678 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:18:13 crc kubenswrapper[4867]: I0126 11:18:13.297832 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:18:13 crc kubenswrapper[4867]: I0126 11:18:13.297928 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:18:13Z","lastTransitionTime":"2026-01-26T11:18:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:18:13 crc kubenswrapper[4867]: I0126 11:18:13.401833 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:18:13 crc kubenswrapper[4867]: I0126 11:18:13.401894 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:18:13 crc kubenswrapper[4867]: I0126 11:18:13.401910 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:18:13 crc kubenswrapper[4867]: I0126 11:18:13.401931 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:18:13 crc kubenswrapper[4867]: I0126 11:18:13.401949 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:18:13Z","lastTransitionTime":"2026-01-26T11:18:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:18:13 crc kubenswrapper[4867]: I0126 11:18:13.506707 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:18:13 crc kubenswrapper[4867]: I0126 11:18:13.506776 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:18:13 crc kubenswrapper[4867]: I0126 11:18:13.506787 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:18:13 crc kubenswrapper[4867]: I0126 11:18:13.506807 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:18:13 crc kubenswrapper[4867]: I0126 11:18:13.506821 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:18:13Z","lastTransitionTime":"2026-01-26T11:18:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:18:13 crc kubenswrapper[4867]: I0126 11:18:13.534236 4867 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-28 17:38:42.502449465 +0000 UTC Jan 26 11:18:13 crc kubenswrapper[4867]: I0126 11:18:13.610135 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:18:13 crc kubenswrapper[4867]: I0126 11:18:13.610189 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:18:13 crc kubenswrapper[4867]: I0126 11:18:13.610199 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:18:13 crc kubenswrapper[4867]: I0126 11:18:13.610237 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:18:13 crc kubenswrapper[4867]: I0126 11:18:13.610255 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:18:13Z","lastTransitionTime":"2026-01-26T11:18:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:18:13 crc kubenswrapper[4867]: I0126 11:18:13.714339 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:18:13 crc kubenswrapper[4867]: I0126 11:18:13.714784 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:18:13 crc kubenswrapper[4867]: I0126 11:18:13.714920 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:18:13 crc kubenswrapper[4867]: I0126 11:18:13.715059 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:18:13 crc kubenswrapper[4867]: I0126 11:18:13.715309 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:18:13Z","lastTransitionTime":"2026-01-26T11:18:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:18:13 crc kubenswrapper[4867]: I0126 11:18:13.819430 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:18:13 crc kubenswrapper[4867]: I0126 11:18:13.819514 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:18:13 crc kubenswrapper[4867]: I0126 11:18:13.819555 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:18:13 crc kubenswrapper[4867]: I0126 11:18:13.819600 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:18:13 crc kubenswrapper[4867]: I0126 11:18:13.819625 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:18:13Z","lastTransitionTime":"2026-01-26T11:18:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:18:13 crc kubenswrapper[4867]: I0126 11:18:13.862723 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:18:13 crc kubenswrapper[4867]: I0126 11:18:13.862775 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:18:13 crc kubenswrapper[4867]: I0126 11:18:13.862785 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:18:13 crc kubenswrapper[4867]: I0126 11:18:13.862805 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:18:13 crc kubenswrapper[4867]: I0126 11:18:13.862816 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:18:13Z","lastTransitionTime":"2026-01-26T11:18:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:18:13 crc kubenswrapper[4867]: E0126 11:18:13.884465 4867 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404548Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865348Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T11:18:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T11:18:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T11:18:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T11:18:13Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T11:18:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T11:18:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T11:18:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T11:18:13Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b5a0e2ac-5e06-462a-99e0-d57b8e5cb754\\\",\\\"systemUUID\\\":\\\"db81a289-a49c-46ba-99b1-fd2eecfd5410\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:18:13Z is after 2025-08-24T17:21:41Z" Jan 26 11:18:13 crc kubenswrapper[4867]: I0126 11:18:13.895004 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:18:13 crc kubenswrapper[4867]: I0126 11:18:13.895539 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:18:13 crc kubenswrapper[4867]: I0126 11:18:13.895694 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:18:13 crc kubenswrapper[4867]: I0126 11:18:13.895855 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:18:13 crc kubenswrapper[4867]: I0126 11:18:13.895995 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:18:13Z","lastTransitionTime":"2026-01-26T11:18:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:18:13 crc kubenswrapper[4867]: E0126 11:18:13.913259 4867 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404548Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865348Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T11:18:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T11:18:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T11:18:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T11:18:13Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T11:18:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T11:18:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T11:18:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T11:18:13Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b5a0e2ac-5e06-462a-99e0-d57b8e5cb754\\\",\\\"systemUUID\\\":\\\"db81a289-a49c-46ba-99b1-fd2eecfd5410\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:18:13Z is after 2025-08-24T17:21:41Z" Jan 26 11:18:13 crc kubenswrapper[4867]: I0126 11:18:13.920026 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:18:13 crc kubenswrapper[4867]: I0126 11:18:13.920084 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:18:13 crc kubenswrapper[4867]: I0126 11:18:13.920104 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:18:13 crc kubenswrapper[4867]: I0126 11:18:13.920133 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:18:13 crc kubenswrapper[4867]: I0126 11:18:13.920150 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:18:13Z","lastTransitionTime":"2026-01-26T11:18:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:18:13 crc kubenswrapper[4867]: E0126 11:18:13.944180 4867 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404548Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865348Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T11:18:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T11:18:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T11:18:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T11:18:13Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T11:18:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T11:18:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T11:18:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T11:18:13Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b5a0e2ac-5e06-462a-99e0-d57b8e5cb754\\\",\\\"systemUUID\\\":\\\"db81a289-a49c-46ba-99b1-fd2eecfd5410\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:18:13Z is after 2025-08-24T17:21:41Z" Jan 26 11:18:13 crc kubenswrapper[4867]: I0126 11:18:13.950346 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:18:13 crc kubenswrapper[4867]: I0126 11:18:13.950391 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:18:13 crc kubenswrapper[4867]: I0126 11:18:13.950403 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:18:13 crc kubenswrapper[4867]: I0126 11:18:13.950418 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:18:13 crc kubenswrapper[4867]: I0126 11:18:13.950427 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:18:13Z","lastTransitionTime":"2026-01-26T11:18:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:18:13 crc kubenswrapper[4867]: E0126 11:18:13.968854 4867 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404548Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865348Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T11:18:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T11:18:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T11:18:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T11:18:13Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T11:18:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T11:18:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T11:18:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T11:18:13Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b5a0e2ac-5e06-462a-99e0-d57b8e5cb754\\\",\\\"systemUUID\\\":\\\"db81a289-a49c-46ba-99b1-fd2eecfd5410\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:18:13Z is after 2025-08-24T17:21:41Z" Jan 26 11:18:13 crc kubenswrapper[4867]: I0126 11:18:13.978356 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:18:13 crc kubenswrapper[4867]: I0126 11:18:13.978434 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:18:13 crc kubenswrapper[4867]: I0126 11:18:13.978454 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:18:13 crc kubenswrapper[4867]: I0126 11:18:13.978488 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:18:13 crc kubenswrapper[4867]: I0126 11:18:13.978513 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:18:13Z","lastTransitionTime":"2026-01-26T11:18:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:18:14 crc kubenswrapper[4867]: E0126 11:18:14.001585 4867 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404548Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865348Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T11:18:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T11:18:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T11:18:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T11:18:13Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T11:18:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T11:18:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T11:18:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T11:18:13Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b5a0e2ac-5e06-462a-99e0-d57b8e5cb754\\\",\\\"systemUUID\\\":\\\"db81a289-a49c-46ba-99b1-fd2eecfd5410\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:18:13Z is after 2025-08-24T17:21:41Z" Jan 26 11:18:14 crc kubenswrapper[4867]: E0126 11:18:14.001756 4867 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 26 11:18:14 crc kubenswrapper[4867]: I0126 11:18:14.004019 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:18:14 crc kubenswrapper[4867]: I0126 11:18:14.004078 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:18:14 crc kubenswrapper[4867]: I0126 11:18:14.004095 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:18:14 crc kubenswrapper[4867]: I0126 11:18:14.004123 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:18:14 crc kubenswrapper[4867]: I0126 11:18:14.004141 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:18:14Z","lastTransitionTime":"2026-01-26T11:18:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:18:14 crc kubenswrapper[4867]: I0126 11:18:14.107647 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:18:14 crc kubenswrapper[4867]: I0126 11:18:14.107706 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:18:14 crc kubenswrapper[4867]: I0126 11:18:14.107745 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:18:14 crc kubenswrapper[4867]: I0126 11:18:14.107770 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:18:14 crc kubenswrapper[4867]: I0126 11:18:14.107785 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:18:14Z","lastTransitionTime":"2026-01-26T11:18:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:18:14 crc kubenswrapper[4867]: I0126 11:18:14.211125 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:18:14 crc kubenswrapper[4867]: I0126 11:18:14.211160 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:18:14 crc kubenswrapper[4867]: I0126 11:18:14.211170 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:18:14 crc kubenswrapper[4867]: I0126 11:18:14.211187 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:18:14 crc kubenswrapper[4867]: I0126 11:18:14.211197 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:18:14Z","lastTransitionTime":"2026-01-26T11:18:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:18:14 crc kubenswrapper[4867]: I0126 11:18:14.314121 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:18:14 crc kubenswrapper[4867]: I0126 11:18:14.314170 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:18:14 crc kubenswrapper[4867]: I0126 11:18:14.314181 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:18:14 crc kubenswrapper[4867]: I0126 11:18:14.314201 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:18:14 crc kubenswrapper[4867]: I0126 11:18:14.314213 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:18:14Z","lastTransitionTime":"2026-01-26T11:18:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:18:14 crc kubenswrapper[4867]: I0126 11:18:14.418497 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:18:14 crc kubenswrapper[4867]: I0126 11:18:14.418538 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:18:14 crc kubenswrapper[4867]: I0126 11:18:14.418548 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:18:14 crc kubenswrapper[4867]: I0126 11:18:14.418564 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:18:14 crc kubenswrapper[4867]: I0126 11:18:14.418574 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:18:14Z","lastTransitionTime":"2026-01-26T11:18:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:18:14 crc kubenswrapper[4867]: I0126 11:18:14.521741 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:18:14 crc kubenswrapper[4867]: I0126 11:18:14.521807 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:18:14 crc kubenswrapper[4867]: I0126 11:18:14.521822 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:18:14 crc kubenswrapper[4867]: I0126 11:18:14.521850 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:18:14 crc kubenswrapper[4867]: I0126 11:18:14.521864 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:18:14Z","lastTransitionTime":"2026-01-26T11:18:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 11:18:14 crc kubenswrapper[4867]: I0126 11:18:14.535492 4867 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-21 03:11:46.572926094 +0000 UTC Jan 26 11:18:14 crc kubenswrapper[4867]: I0126 11:18:14.565563 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 11:18:14 crc kubenswrapper[4867]: I0126 11:18:14.565623 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-nmdmx" Jan 26 11:18:14 crc kubenswrapper[4867]: I0126 11:18:14.565627 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 11:18:14 crc kubenswrapper[4867]: E0126 11:18:14.565747 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 11:18:14 crc kubenswrapper[4867]: I0126 11:18:14.565764 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 11:18:14 crc kubenswrapper[4867]: E0126 11:18:14.565921 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 11:18:14 crc kubenswrapper[4867]: E0126 11:18:14.566028 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 11:18:14 crc kubenswrapper[4867]: E0126 11:18:14.566113 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-nmdmx" podUID="ed024510-edc6-4306-b54b-63facba64419" Jan 26 11:18:14 crc kubenswrapper[4867]: I0126 11:18:14.583256 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ed024510-edc6-4306-b54b-63facba64419-metrics-certs\") pod \"network-metrics-daemon-nmdmx\" (UID: \"ed024510-edc6-4306-b54b-63facba64419\") " pod="openshift-multus/network-metrics-daemon-nmdmx" Jan 26 11:18:14 crc kubenswrapper[4867]: E0126 11:18:14.583415 4867 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 26 11:18:14 crc kubenswrapper[4867]: E0126 11:18:14.583462 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ed024510-edc6-4306-b54b-63facba64419-metrics-certs podName:ed024510-edc6-4306-b54b-63facba64419 nodeName:}" failed. No retries permitted until 2026-01-26 11:18:22.583447958 +0000 UTC m=+52.282022868 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/ed024510-edc6-4306-b54b-63facba64419-metrics-certs") pod "network-metrics-daemon-nmdmx" (UID: "ed024510-edc6-4306-b54b-63facba64419") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 26 11:18:14 crc kubenswrapper[4867]: I0126 11:18:14.624992 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:18:14 crc kubenswrapper[4867]: I0126 11:18:14.625049 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:18:14 crc kubenswrapper[4867]: I0126 11:18:14.625059 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:18:14 crc kubenswrapper[4867]: I0126 11:18:14.625075 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:18:14 crc kubenswrapper[4867]: I0126 11:18:14.625084 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:18:14Z","lastTransitionTime":"2026-01-26T11:18:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:18:14 crc kubenswrapper[4867]: I0126 11:18:14.727886 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:18:14 crc kubenswrapper[4867]: I0126 11:18:14.727941 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:18:14 crc kubenswrapper[4867]: I0126 11:18:14.727954 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:18:14 crc kubenswrapper[4867]: I0126 11:18:14.727978 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:18:14 crc kubenswrapper[4867]: I0126 11:18:14.727995 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:18:14Z","lastTransitionTime":"2026-01-26T11:18:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:18:14 crc kubenswrapper[4867]: I0126 11:18:14.831538 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:18:14 crc kubenswrapper[4867]: I0126 11:18:14.831592 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:18:14 crc kubenswrapper[4867]: I0126 11:18:14.831607 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:18:14 crc kubenswrapper[4867]: I0126 11:18:14.831632 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:18:14 crc kubenswrapper[4867]: I0126 11:18:14.831650 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:18:14Z","lastTransitionTime":"2026-01-26T11:18:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:18:14 crc kubenswrapper[4867]: I0126 11:18:14.905296 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-p8ngn_4a3be637-cf04-4c55-bf72-67fdad83cc44/ovnkube-controller/0.log" Jan 26 11:18:14 crc kubenswrapper[4867]: I0126 11:18:14.914076 4867 generic.go:334] "Generic (PLEG): container finished" podID="4a3be637-cf04-4c55-bf72-67fdad83cc44" containerID="9ccaf33118999fa5bccb1930803a8cea6557462756c37135056ea8a5ab813003" exitCode=1 Jan 26 11:18:14 crc kubenswrapper[4867]: I0126 11:18:14.914149 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-p8ngn" event={"ID":"4a3be637-cf04-4c55-bf72-67fdad83cc44","Type":"ContainerDied","Data":"9ccaf33118999fa5bccb1930803a8cea6557462756c37135056ea8a5ab813003"} Jan 26 11:18:14 crc kubenswrapper[4867]: I0126 11:18:14.916197 4867 scope.go:117] "RemoveContainer" containerID="9ccaf33118999fa5bccb1930803a8cea6557462756c37135056ea8a5ab813003" Jan 26 11:18:14 crc kubenswrapper[4867]: I0126 11:18:14.961691 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:48Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:18:14Z is after 2025-08-24T17:21:41Z" Jan 26 11:18:14 crc kubenswrapper[4867]: I0126 11:18:14.967361 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:18:14 crc kubenswrapper[4867]: I0126 11:18:14.967400 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:18:14 crc kubenswrapper[4867]: I0126 11:18:14.967414 4867 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:18:14 crc kubenswrapper[4867]: I0126 11:18:14.967431 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:18:14 crc kubenswrapper[4867]: I0126 11:18:14.967445 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:18:14Z","lastTransitionTime":"2026-01-26T11:18:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 11:18:14 crc kubenswrapper[4867]: I0126 11:18:14.982034 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:54Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:54Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8b0139861b48347f9916ca780499486eda856ef7c08cfe6c57201daf085bf61\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f
799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:18:14Z is after 2025-08-24T17:21:41Z" Jan 26 11:18:15 crc kubenswrapper[4867]: I0126 11:18:15.006324 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-9fjlf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d0cb57c7-fd32-41c2-b873-a3f017b9f1b1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:18:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:18:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:18:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c7012fb0651d46334c26887a02a5c44a8fc67c2ad3539e5321e16b57071b9a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:18:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhrdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://49541a51fced16e579b4cd10875fe7e660d7674508aba3ea44011642241e6556\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://49541a51fced16e579b4cd10875fe7e660d7674508aba3ea44011642241e6556\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T11:17:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T11:17:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhrdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://de0898b43ff7857dd40569f0552c8802c9bb5ecdf3cbd23f552dfb402a02044c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://de0898b43ff7857dd40569f0552c8802c9bb5ecdf3cbd23f552dfb402a02044c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T11:17:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T11:17:55Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhrdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f299de50c4b1a26c4ed82f6f7b17c063e00b8a9de478fa0adafb4abb50195c5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f299de50c4b1a26c4ed82f6f7b17c063e00b8a9de478fa0adafb4abb50195c5f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T11:17:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T11:17:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhrdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5bf54
cdee66861e0419af5e4cabd6e8410b771df717f2e958f5eda8d14c47862\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5bf54cdee66861e0419af5e4cabd6e8410b771df717f2e958f5eda8d14c47862\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T11:17:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T11:17:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhrdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d802b7e0dce47eace09ec80bf6299fbe57b588891ebab7f34de835f6a7314ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3d802b7e0dce47eace09ec80bf6299fbe57b588891ebab7f34de835f6a7314ac\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T11:17:58Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-26T11:17:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhrdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e77fa1081a2d8d26fe07f92196b85dd5b8eb28583ce5489e2f5cbcfc6fff8780\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e77fa1081a2d8d26fe07f92196b85dd5b8eb28583ce5489e2f5cbcfc6fff8780\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T11:18:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T11:18:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhrdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T11:17:52Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-9fjlf\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:18:15Z is after 2025-08-24T17:21:41Z" Jan 26 11:18:15 crc kubenswrapper[4867]: I0126 11:18:15.033973 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-nxkwj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"53ae46dc-30ef-4dfd-b80e-bacd7542634f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a6b647bab472836bbf6aebd01d20d186c5a3fb95f20cc9f44ec837d93c7df617\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"
2026-01-26T11:17:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b5h2x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T11:17:54Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-nxkwj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:18:15Z is after 2025-08-24T17:21:41Z" Jan 26 11:18:15 crc kubenswrapper[4867]: I0126 11:18:15.051908 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-nmdmx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed024510-edc6-4306-b54b-63facba64419\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:18:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:18:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:18:06Z\\\",\\\"message\\\":\\\"containers with unready 
status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:18:06Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lvcgj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lvcgj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T11:18:06Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-nmdmx\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:18:15Z is after 2025-08-24T17:21:41Z" Jan 26 11:18:15 crc kubenswrapper[4867]: I0126 11:18:15.070593 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:18:15 crc kubenswrapper[4867]: I0126 11:18:15.070657 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:18:15 crc kubenswrapper[4867]: I0126 11:18:15.070675 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:18:15 crc kubenswrapper[4867]: I0126 11:18:15.070703 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:18:15 crc kubenswrapper[4867]: I0126 11:18:15.070722 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:18:15Z","lastTransitionTime":"2026-01-26T11:18:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:18:15 crc kubenswrapper[4867]: I0126 11:18:15.089140 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4fbb2033-c5c9-48d1-971f-252b8982e64c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b260f2517962ffaecebf86c2b72c91486b825ca898f907378e8f8cea16d1db92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\
\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6843bd212ce4f17d2ae4d53441d56f7be28f8f49c8994458d611159101e193a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2aba77c7c6a2f5450fa60356c3eccf816726f0074d65a1d5805c4278d31157c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e941db67a8021f5eb4eb6e395c2369d052a4cde25d67eb1885768a064185d8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-
v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc9e90f1b0e6c09e4595a7e450325550012dc52c6a6fc4a95d14b37c0f939ce7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e22739a3c06bddfda493f25aca23cf47d71c8bde27a6b4bb505592b42350752\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":
true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0e22739a3c06bddfda493f25aca23cf47d71c8bde27a6b4bb505592b42350752\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T11:17:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T11:17:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://80b44caec3547caebca126742ca337ad1851edc3e9474127528a9e5184b5ec23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://80b44caec3547caebca126742ca337ad1851edc3e9474127528a9e5184b5ec23\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T11:17:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T11:17:32Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://83f7cf77acd26e7af095c4a5432efde85eef6dd5e434141cb5d6abd12770968e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://83f7cf77acd26e7af095c4a5432efde85eef6dd5e434141cb5d6abd12770968e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T11:17:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2
026-01-26T11:17:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T11:17:30Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:18:15Z is after 2025-08-24T17:21:41Z" Jan 26 11:18:15 crc kubenswrapper[4867]: I0126 11:18:15.109135 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-g6cth" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"115cad9f-057f-4e63-b408-8fa7a358a191\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ff5997c5ecb7b6a4257e45de25c7b8f3cbe39cc5efe49cf2e2baff5447a947b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4wqzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a97a794991491a7b7ac8c0d20d0fa2f4f36c8ba8
82962b5c203933431d324105\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4wqzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T11:17:52Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-g6cth\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:18:15Z is after 2025-08-24T17:21:41Z" Jan 26 11:18:15 crc kubenswrapper[4867]: I0126 11:18:15.129912 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:48Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:18:15Z is after 2025-08-24T17:21:41Z" Jan 26 11:18:15 crc kubenswrapper[4867]: I0126 11:18:15.152355 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:48Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:18:15Z is after 2025-08-24T17:21:41Z" Jan 26 11:18:15 crc kubenswrapper[4867]: I0126 11:18:15.166336 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-wmdmh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6ad862fa-4af9-49f7-a629-ebf54a83ca45\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://726513fd6aefd76b58a4c8cf08fa4588aadcdb02e8dbe3bb4f23004f11eb29dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6hvh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T11:17:51Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-wmdmh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:18:15Z is after 2025-08-24T17:21:41Z" Jan 26 11:18:15 crc kubenswrapper[4867]: I0126 11:18:15.174342 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:18:15 crc kubenswrapper[4867]: I0126 11:18:15.174398 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:18:15 crc kubenswrapper[4867]: I0126 11:18:15.174418 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:18:15 crc kubenswrapper[4867]: I0126 11:18:15.174443 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:18:15 crc kubenswrapper[4867]: I0126 11:18:15.174462 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:18:15Z","lastTransitionTime":"2026-01-26T11:18:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:18:15 crc kubenswrapper[4867]: I0126 11:18:15.204090 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-p8ngn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4a3be637-cf04-4c55-bf72-67fdad83cc44\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:52Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:52Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c0a7bea27cb1d32235548b7f7e457dbecaf0ac88bdd9866302c758680990d4af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cn6f7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9025121ad416b7b6ff2f2eb7ecb3961eccbbae7b5ce4b394161ad99c5901199\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cn6f7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://adb68b135bd42f78556598af17aa21dc7daff1246e821ca51cb683e46974a85a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cn6f7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://30fe31f1d314dace6e6ee0bc31b2127ce6e6db198975c78505b0f6f07aa6f30c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:55Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cn6f7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://99df8b0b5fa7178489716f40f806caff969c5045ab573ae343b222ed80bf0799\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cn6f7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://192f11212d63cdabfda1649589946ca65e5c4a38805c85b94e4277f2e8c7bf57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cn6f7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ccaf33118999fa5bccb1930803a8cea6557462756c37135056ea8a5ab813003\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9ccaf33118999fa5bccb1930803a8cea6557462756c37135056ea8a5ab813003\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T11:18:13Z\\\",\\\"message\\\":\\\"\\\\nI0126 11:18:10.643831 6165 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0126 11:18:10.643854 6165 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0126 11:18:10.643876 6165 
handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0126 11:18:10.643912 6165 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0126 11:18:10.643967 6165 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0126 11:18:10.643970 6165 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0126 11:18:10.643923 6165 handler.go:208] Removed *v1.Node event handler 7\\\\nI0126 11:18:10.643929 6165 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0126 11:18:10.644029 6165 handler.go:208] Removed *v1.Node event handler 2\\\\nI0126 11:18:10.643948 6165 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0126 11:18:10.644059 6165 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0126 11:18:10.644093 6165 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0126 11:18:10.644094 6165 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0126 11:18:10.644122 6165 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0126 11:18:10.644192 6165 factory.go:656] Stopping watch factory\\\\nI0126 11:18:10.644206 6165 handler.go:208] Removed *v1.NetworkPolicy 
ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T11:18:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"k
ube-api-access-cn6f7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b58f62893fb0fc0bd355ef42fb5f15830912cfd3a03b5266cb3c3171dbb8bcb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cn6f7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://388ea9d9185ed0fed9cbb0da08ab9aa6fcf1d81192fe2646cd51b54245c4e34b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://388ea9d9185ed0fed9cbb0da08ab9aa6fcf1d81192fe2646cd51b54245c4e34b\
\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T11:17:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T11:17:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cn6f7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T11:17:52Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-p8ngn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:18:15Z is after 2025-08-24T17:21:41Z" Jan 26 11:18:15 crc kubenswrapper[4867]: I0126 11:18:15.257197 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://613d61cfb62d18a99623d3e4f89763c04f15484b764f8bb0c7a5e4457a3505d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa8d05ed772e4273e8962df2230bc8d45db5cfed967165716438d58c54b9e24b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:18:15Z is after 2025-08-24T17:21:41Z" Jan 26 11:18:15 crc kubenswrapper[4867]: I0126 11:18:15.275491 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-hn8xr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dc37e5d1-ba44-4a54-ac36-ab7cdef17212\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://519d5416896aca5923540078a8bd13f39a190ad4acda3d0bfb3375a5dbfe6b80\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xrjnj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T11:17:52Z\\\"}}\" for pod \"openshift-multus\"/\"multus-hn8xr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:18:15Z is after 2025-08-24T17:21:41Z" Jan 26 11:18:15 crc kubenswrapper[4867]: I0126 11:18:15.280658 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:18:15 crc 
kubenswrapper[4867]: I0126 11:18:15.280704 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:18:15 crc kubenswrapper[4867]: I0126 11:18:15.280720 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:18:15 crc kubenswrapper[4867]: I0126 11:18:15.280741 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:18:15 crc kubenswrapper[4867]: I0126 11:18:15.280756 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:18:15Z","lastTransitionTime":"2026-01-26T11:18:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 11:18:15 crc kubenswrapper[4867]: I0126 11:18:15.295534 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-nbvlt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf285485-1027-4bdc-bdfa-934ef32e7f5e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:18:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:18:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:18:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:18:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://764c348147bb67a611bc5252c49dfe8f586e6a1a6d6a9e9c6674aabcc3028804\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:18:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kzhnr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e1bb2e7344b3822e63a68d366f4821de6e13
1a4cca163ad67cf44e2b83f9ccd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:18:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kzhnr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T11:18:05Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-nbvlt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:18:15Z is after 2025-08-24T17:21:41Z" Jan 26 11:18:15 crc kubenswrapper[4867]: I0126 11:18:15.316068 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e36e94ce-bdbb-4b65-b38a-d591d99ec132\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:18:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:18:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://60d611d38d0a8a4fafcb8c6a4598878531015a3c9a31bee179693c997be0f5ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e7960050fb97fd034b58d207bdcc7aa794be098102ebc9b50f3b011c2079a292\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8e9df6d5bbbaa36f6ffe9e36239da17b2a7d5d85edebc9f108ede9bf7b7c0a2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://08f8529b91df2d7dcbf1fac6018657124b3922dfd4ad68bd7cf315594d6e7f81\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4d1617eeb2f19df288b984f761116ae902263d7ccf8406998d0e323fb7d52390\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-26T11:17:50Z\\\"
,\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0126 11:17:44.107498 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0126 11:17:44.109064 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1825304579/tls.crt::/tmp/serving-cert-1825304579/tls.key\\\\\\\"\\\\nI0126 11:17:50.303460 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0126 11:17:50.308764 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0126 11:17:50.308805 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0126 11:17:50.308906 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0126 11:17:50.308924 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0126 11:17:50.322907 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0126 11:17:50.322946 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 11:17:50.322953 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 11:17:50.322957 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0126 11:17:50.322960 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0126 11:17:50.322963 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0126 11:17:50.322966 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0126 11:17:50.323261 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all 
endpoints registered and discovery information is complete\\\\nF0126 11:17:50.330430 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T11:17:33Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d41fab03911c1d3b84f4c34da0d1c8e82c8f2691450b5f9663e31ce329bf04b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:33Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b69d7e45a6059b1e01f80cc4efb536069c91cae05c8912d18fd48f25c6ebf51e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b69d7e45a6059b1e01f80cc4efb536069
c91cae05c8912d18fd48f25c6ebf51e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T11:17:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T11:17:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T11:17:30Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:18:15Z is after 2025-08-24T17:21:41Z" Jan 26 11:18:15 crc kubenswrapper[4867]: I0126 11:18:15.334037 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0d7b1047-65d1-4695-b5a5-95ea6a8c3b54\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://237a29b411629c3650250f62fdec7d2c5412763d6cabb2ee0fe8e5e19e320e15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ef0db1149c70e41b7f44d00d43f883d14fcf635d6f34462dd37c8a550a15a4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://45d3dc673a8d20531ae5077a1196a368cac64f6afd15c4eb8f3910ee9bf51317\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc2ddda5781094e59bb54ddf724fa4f948a047b17b01f0185ef651d90a66f36f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-26T11:17:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T11:17:30Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:18:15Z is after 2025-08-24T17:21:41Z" Jan 26 11:18:15 crc kubenswrapper[4867]: I0126 11:18:15.368943 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c572224f36b5a0fc8a86b3e61aa97b96f5bd189d245b31804a11aaa75d04774\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-26T11:18:15Z is after 2025-08-24T17:21:41Z" Jan 26 11:18:15 crc kubenswrapper[4867]: I0126 11:18:15.383839 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:18:15 crc kubenswrapper[4867]: I0126 11:18:15.383876 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:18:15 crc kubenswrapper[4867]: I0126 11:18:15.383888 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:18:15 crc kubenswrapper[4867]: I0126 11:18:15.383908 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:18:15 crc kubenswrapper[4867]: I0126 11:18:15.383919 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:18:15Z","lastTransitionTime":"2026-01-26T11:18:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:18:15 crc kubenswrapper[4867]: I0126 11:18:15.487765 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:18:15 crc kubenswrapper[4867]: I0126 11:18:15.488287 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:18:15 crc kubenswrapper[4867]: I0126 11:18:15.488306 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:18:15 crc kubenswrapper[4867]: I0126 11:18:15.488337 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:18:15 crc kubenswrapper[4867]: I0126 11:18:15.488360 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:18:15Z","lastTransitionTime":"2026-01-26T11:18:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:18:15 crc kubenswrapper[4867]: I0126 11:18:15.536597 4867 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-24 01:37:11.729078823 +0000 UTC Jan 26 11:18:15 crc kubenswrapper[4867]: I0126 11:18:15.592904 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:18:15 crc kubenswrapper[4867]: I0126 11:18:15.592979 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:18:15 crc kubenswrapper[4867]: I0126 11:18:15.592993 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:18:15 crc kubenswrapper[4867]: I0126 11:18:15.593011 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:18:15 crc kubenswrapper[4867]: I0126 11:18:15.593023 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:18:15Z","lastTransitionTime":"2026-01-26T11:18:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:18:15 crc kubenswrapper[4867]: I0126 11:18:15.695963 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:18:15 crc kubenswrapper[4867]: I0126 11:18:15.696014 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:18:15 crc kubenswrapper[4867]: I0126 11:18:15.696028 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:18:15 crc kubenswrapper[4867]: I0126 11:18:15.696053 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:18:15 crc kubenswrapper[4867]: I0126 11:18:15.696066 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:18:15Z","lastTransitionTime":"2026-01-26T11:18:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:18:15 crc kubenswrapper[4867]: I0126 11:18:15.798745 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:18:15 crc kubenswrapper[4867]: I0126 11:18:15.798788 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:18:15 crc kubenswrapper[4867]: I0126 11:18:15.798799 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:18:15 crc kubenswrapper[4867]: I0126 11:18:15.798815 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:18:15 crc kubenswrapper[4867]: I0126 11:18:15.798825 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:18:15Z","lastTransitionTime":"2026-01-26T11:18:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:18:15 crc kubenswrapper[4867]: I0126 11:18:15.901840 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:18:15 crc kubenswrapper[4867]: I0126 11:18:15.901884 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:18:15 crc kubenswrapper[4867]: I0126 11:18:15.901897 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:18:15 crc kubenswrapper[4867]: I0126 11:18:15.901916 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:18:15 crc kubenswrapper[4867]: I0126 11:18:15.901927 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:18:15Z","lastTransitionTime":"2026-01-26T11:18:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:18:15 crc kubenswrapper[4867]: I0126 11:18:15.920189 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-p8ngn_4a3be637-cf04-4c55-bf72-67fdad83cc44/ovnkube-controller/0.log" Jan 26 11:18:15 crc kubenswrapper[4867]: I0126 11:18:15.923405 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-p8ngn" event={"ID":"4a3be637-cf04-4c55-bf72-67fdad83cc44","Type":"ContainerStarted","Data":"cd8dd6742489d18b8ecac8b1a2bebed1c96dcc42086a9aeb709fc228aa34d23b"} Jan 26 11:18:15 crc kubenswrapper[4867]: I0126 11:18:15.923930 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-p8ngn" Jan 26 11:18:15 crc kubenswrapper[4867]: I0126 11:18:15.942412 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-9fjlf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d0cb57c7-fd32-41c2-b873-a3f017b9f1b1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:18:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:18:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:18:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c7012fb0651d4633
4c26887a02a5c44a8fc67c2ad3539e5321e16b57071b9a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:18:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhrdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://49541a51fced16e579b4cd10875fe7e660d7674508aba3ea44011642241e6556\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://49541a51fced16e579b4cd10875fe7e660d7674508aba3ea44011642241e6556\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T11:17:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T11:17:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disab
led\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhrdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://de0898b43ff7857dd40569f0552c8802c9bb5ecdf3cbd23f552dfb402a02044c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://de0898b43ff7857dd40569f0552c8802c9bb5ecdf3cbd23f552dfb402a02044c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T11:17:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T11:17:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhrdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f299de50c4b1a26c4ed82f6f7b17c063e00b8a9de478fa0adafb4abb50195c5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64
b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f299de50c4b1a26c4ed82f6f7b17c063e00b8a9de478fa0adafb4abb50195c5f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T11:17:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T11:17:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhrdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5bf54cdee66861e0419af5e4cabd6e8410b771df717f2e958f5eda8d14c47862\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5bf54cdee66861e0419af5e4cabd6e8410b771df717f2e958f5eda8d14c47862\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T11:17:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T11:17:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"r
eadOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhrdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d802b7e0dce47eace09ec80bf6299fbe57b588891ebab7f34de835f6a7314ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3d802b7e0dce47eace09ec80bf6299fbe57b588891ebab7f34de835f6a7314ac\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T11:17:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T11:17:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhrdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e77fa1081a2d8d26fe07f92196b85dd5b8eb28583ce5489e2f5cbcfc6fff8780\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"rea
dy\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e77fa1081a2d8d26fe07f92196b85dd5b8eb28583ce5489e2f5cbcfc6fff8780\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T11:18:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T11:18:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhrdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T11:17:52Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-9fjlf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:18:15Z is after 2025-08-24T17:21:41Z" Jan 26 11:18:15 crc kubenswrapper[4867]: I0126 11:18:15.952193 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-nxkwj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"53ae46dc-30ef-4dfd-b80e-bacd7542634f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a6b647bab472836bbf6aebd01d20d186c5a3fb95f20cc9f44ec837d93c7df617\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b5h2x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T11:17:54Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-nxkwj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:18:15Z is after 2025-08-24T17:21:41Z" Jan 26 11:18:15 crc kubenswrapper[4867]: I0126 11:18:15.964206 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-nmdmx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed024510-edc6-4306-b54b-63facba64419\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:18:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:18:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:18:06Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:18:06Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lvcgj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lvcgj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T11:18:06Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-nmdmx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:18:15Z is after 2025-08-24T17:21:41Z" Jan 26 11:18:15 crc 
kubenswrapper[4867]: I0126 11:18:15.983511 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:48Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:18:15Z is after 2025-08-24T17:21:41Z" Jan 26 11:18:16 crc kubenswrapper[4867]: I0126 11:18:16.005189 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:54Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:54Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8b0139861b48347f9916ca780499486eda856ef7c08cfe6c57201daf085bf61\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-26T11:18:16Z is after 2025-08-24T17:21:41Z" Jan 26 11:18:16 crc kubenswrapper[4867]: I0126 11:18:16.009257 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:18:16 crc kubenswrapper[4867]: I0126 11:18:16.009303 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:18:16 crc kubenswrapper[4867]: I0126 11:18:16.009320 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:18:16 crc kubenswrapper[4867]: I0126 11:18:16.009344 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:18:16 crc kubenswrapper[4867]: I0126 11:18:16.009359 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:18:16Z","lastTransitionTime":"2026-01-26T11:18:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:18:16 crc kubenswrapper[4867]: I0126 11:18:16.032920 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4fbb2033-c5c9-48d1-971f-252b8982e64c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b260f2517962ffaecebf86c2b72c91486b825ca898f907378e8f8cea16d1db92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\
\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6843bd212ce4f17d2ae4d53441d56f7be28f8f49c8994458d611159101e193a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2aba77c7c6a2f5450fa60356c3eccf816726f0074d65a1d5805c4278d31157c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e941db67a8021f5eb4eb6e395c2369d052a4cde25d67eb1885768a064185d8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-
v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc9e90f1b0e6c09e4595a7e450325550012dc52c6a6fc4a95d14b37c0f939ce7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e22739a3c06bddfda493f25aca23cf47d71c8bde27a6b4bb505592b42350752\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":
true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0e22739a3c06bddfda493f25aca23cf47d71c8bde27a6b4bb505592b42350752\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T11:17:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T11:17:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://80b44caec3547caebca126742ca337ad1851edc3e9474127528a9e5184b5ec23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://80b44caec3547caebca126742ca337ad1851edc3e9474127528a9e5184b5ec23\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T11:17:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T11:17:32Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://83f7cf77acd26e7af095c4a5432efde85eef6dd5e434141cb5d6abd12770968e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://83f7cf77acd26e7af095c4a5432efde85eef6dd5e434141cb5d6abd12770968e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T11:17:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2
026-01-26T11:17:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T11:17:30Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:18:16Z is after 2025-08-24T17:21:41Z" Jan 26 11:18:16 crc kubenswrapper[4867]: I0126 11:18:16.050615 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-g6cth" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"115cad9f-057f-4e63-b408-8fa7a358a191\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ff5997c5ecb7b6a4257e45de25c7b8f3cbe39cc5efe49cf2e2baff5447a947b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4wqzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a97a794991491a7b7ac8c0d20d0fa2f4f36c8ba8
82962b5c203933431d324105\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4wqzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T11:17:52Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-g6cth\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:18:16Z is after 2025-08-24T17:21:41Z" Jan 26 11:18:16 crc kubenswrapper[4867]: I0126 11:18:16.067346 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-wmdmh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6ad862fa-4af9-49f7-a629-ebf54a83ca45\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://726513fd6aefd76b58a4c8cf08fa4588aadcdb02e8dbe3bb4f23004f11eb29dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6hvh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T11:17:51Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-wmdmh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:18:16Z is after 2025-08-24T17:21:41Z" Jan 26 11:18:16 crc kubenswrapper[4867]: I0126 11:18:16.090907 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-p8ngn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4a3be637-cf04-4c55-bf72-67fdad83cc44\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:52Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:52Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c0a7bea27cb1d32235548b7f7e457dbecaf0ac88bdd9866302c758680990d4af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cn6f7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9025121ad416b7b6ff2f2eb7ecb3961eccbbae7b5ce4b394161ad99c5901199\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cn6f7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://adb68b135bd42f78556598af17aa21dc7daff1246e821ca51cb683e46974a85a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cn6f7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://30fe31f1d314dace6e6ee0bc31b2127ce6e6db198975c78505b0f6f07aa6f30c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:55Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cn6f7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://99df8b0b5fa7178489716f40f806caff969c5045ab573ae343b222ed80bf0799\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cn6f7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://192f11212d63cdabfda1649589946ca65e5c4a38805c85b94e4277f2e8c7bf57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cn6f7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd8dd6742489d18b8ecac8b1a2bebed1c96dcc42086a9aeb709fc228aa34d23b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9ccaf33118999fa5bccb1930803a8cea6557462756c37135056ea8a5ab813003\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T11:18:13Z\\\",\\\"message\\\":\\\"\\\\nI0126 11:18:10.643831 6165 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0126 11:18:10.643854 6165 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0126 11:18:10.643876 6165 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0126 11:18:10.643912 6165 handler.go:208] Removed 
*v1.Namespace event handler 1\\\\nI0126 11:18:10.643967 6165 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0126 11:18:10.643970 6165 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0126 11:18:10.643923 6165 handler.go:208] Removed *v1.Node event handler 7\\\\nI0126 11:18:10.643929 6165 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0126 11:18:10.644029 6165 handler.go:208] Removed *v1.Node event handler 2\\\\nI0126 11:18:10.643948 6165 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0126 11:18:10.644059 6165 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0126 11:18:10.644093 6165 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0126 11:18:10.644094 6165 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0126 11:18:10.644122 6165 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0126 11:18:10.644192 6165 factory.go:656] Stopping watch factory\\\\nI0126 11:18:10.644206 6165 handler.go:208] Removed *v1.NetworkPolicy 
ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T11:18:06Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:18:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\
":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cn6f7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b58f62893fb0fc0bd355ef42fb5f15830912cfd3a03b5266cb3c3171dbb8bcb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cn6f7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://388ea9d9185ed0fed9cbb0da08ab9aa6fcf1d81192fe2646cd51b54245c4e34b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\
\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://388ea9d9185ed0fed9cbb0da08ab9aa6fcf1d81192fe2646cd51b54245c4e34b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T11:17:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T11:17:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cn6f7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T11:17:52Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-p8ngn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:18:16Z is after 2025-08-24T17:21:41Z" Jan 26 11:18:16 crc kubenswrapper[4867]: I0126 11:18:16.104425 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:48Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:18:16Z is after 2025-08-24T17:21:41Z" Jan 26 11:18:16 crc kubenswrapper[4867]: I0126 11:18:16.113341 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:18:16 crc kubenswrapper[4867]: I0126 11:18:16.113397 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 26 11:18:16 crc kubenswrapper[4867]: I0126 11:18:16.113410 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:18:16 crc kubenswrapper[4867]: I0126 11:18:16.113432 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:18:16 crc kubenswrapper[4867]: I0126 11:18:16.113446 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:18:16Z","lastTransitionTime":"2026-01-26T11:18:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 11:18:16 crc kubenswrapper[4867]: I0126 11:18:16.118764 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:48Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:18:16Z is after 2025-08-24T17:21:41Z" Jan 26 11:18:16 crc kubenswrapper[4867]: I0126 11:18:16.138055 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e36e94ce-bdbb-4b65-b38a-d591d99ec132\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:18:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:18:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://60d611d38d0a8a4fafcb8c6a4598878531015a3c9a31bee179693c997be0f5ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e7960050fb97fd034b58d207bdcc7aa794be098102ebc9b50f3b011c2079a292\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8e9df6d5bbbaa36f6ffe9e36239da17b2a7d5d85edebc9f108ede9bf7b7c0a2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://08f8529b91df2d7dcbf1fac6018657124b3922dfd4ad68bd7cf315594d6e7f81\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4d1617eeb2f19df288b984f761116ae902263d7ccf8406998d0e323fb7d52390\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-26T11:17:50Z\\\"
,\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0126 11:17:44.107498 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0126 11:17:44.109064 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1825304579/tls.crt::/tmp/serving-cert-1825304579/tls.key\\\\\\\"\\\\nI0126 11:17:50.303460 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0126 11:17:50.308764 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0126 11:17:50.308805 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0126 11:17:50.308906 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0126 11:17:50.308924 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0126 11:17:50.322907 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0126 11:17:50.322946 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 11:17:50.322953 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 11:17:50.322957 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0126 11:17:50.322960 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0126 11:17:50.322963 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0126 11:17:50.322966 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0126 11:17:50.323261 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all 
endpoints registered and discovery information is complete\\\\nF0126 11:17:50.330430 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T11:17:33Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d41fab03911c1d3b84f4c34da0d1c8e82c8f2691450b5f9663e31ce329bf04b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:33Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b69d7e45a6059b1e01f80cc4efb536069c91cae05c8912d18fd48f25c6ebf51e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b69d7e45a6059b1e01f80cc4efb536069
c91cae05c8912d18fd48f25c6ebf51e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T11:17:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T11:17:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T11:17:30Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:18:16Z is after 2025-08-24T17:21:41Z" Jan 26 11:18:16 crc kubenswrapper[4867]: I0126 11:18:16.150057 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0d7b1047-65d1-4695-b5a5-95ea6a8c3b54\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://237a29b411629c3650250f62fdec7d2c5412763d6cabb2ee0fe8e5e19e320e15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ef0db1149c70e41b7f44d00d43f883d14fcf635d6f34462dd37c8a550a15a4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://45d3dc673a8d20531ae5077a1196a368cac64f6afd15c4eb8f3910ee9bf51317\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc2ddda5781094e59bb54ddf724fa4f948a047b17b01f0185ef651d90a66f36f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-26T11:17:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T11:17:30Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:18:16Z is after 2025-08-24T17:21:41Z" Jan 26 11:18:16 crc kubenswrapper[4867]: I0126 11:18:16.161840 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c572224f36b5a0fc8a86b3e61aa97b96f5bd189d245b31804a11aaa75d04774\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-26T11:18:16Z is after 2025-08-24T17:21:41Z" Jan 26 11:18:16 crc kubenswrapper[4867]: I0126 11:18:16.172866 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://613d61cfb62d18a99623d3e4f89763c04f15484b764f8bb0c7a5e4457a3505d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://fa8d05ed772e4273e8962df2230bc8d45db5cfed967165716438d58c54b9e24b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:18:16Z is after 2025-08-24T17:21:41Z" Jan 26 11:18:16 crc kubenswrapper[4867]: I0126 11:18:16.186417 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-hn8xr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dc37e5d1-ba44-4a54-ac36-ab7cdef17212\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://519d5416896aca5923540078a8bd13f39a190ad4acda3d0bfb3375a5dbfe6b80\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xrjnj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T11:17:52Z\\\"}}\" for pod \"openshift-multus\"/\"multus-hn8xr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:18:16Z is after 2025-08-24T17:21:41Z" Jan 26 11:18:16 crc kubenswrapper[4867]: I0126 11:18:16.197443 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-nbvlt" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf285485-1027-4bdc-bdfa-934ef32e7f5e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:18:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:18:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:18:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:18:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://764c348147bb67a611bc5252c49dfe8f586e6a1a6d6a9e9c6674aabcc3028804\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:18:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kzhnr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e1bb2e73
44b3822e63a68d366f4821de6e131a4cca163ad67cf44e2b83f9ccd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:18:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kzhnr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T11:18:05Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-nbvlt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:18:16Z is after 2025-08-24T17:21:41Z" Jan 26 11:18:16 crc kubenswrapper[4867]: I0126 11:18:16.215879 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:18:16 crc kubenswrapper[4867]: I0126 11:18:16.215962 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:18:16 crc kubenswrapper[4867]: I0126 11:18:16.215989 4867 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:18:16 crc kubenswrapper[4867]: I0126 11:18:16.216027 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:18:16 crc kubenswrapper[4867]: I0126 11:18:16.216053 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:18:16Z","lastTransitionTime":"2026-01-26T11:18:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 11:18:16 crc kubenswrapper[4867]: I0126 11:18:16.318672 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:18:16 crc kubenswrapper[4867]: I0126 11:18:16.318747 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:18:16 crc kubenswrapper[4867]: I0126 11:18:16.318769 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:18:16 crc kubenswrapper[4867]: I0126 11:18:16.318799 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:18:16 crc kubenswrapper[4867]: I0126 11:18:16.318827 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:18:16Z","lastTransitionTime":"2026-01-26T11:18:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:18:16 crc kubenswrapper[4867]: I0126 11:18:16.423898 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:18:16 crc kubenswrapper[4867]: I0126 11:18:16.423958 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:18:16 crc kubenswrapper[4867]: I0126 11:18:16.423973 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:18:16 crc kubenswrapper[4867]: I0126 11:18:16.423998 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:18:16 crc kubenswrapper[4867]: I0126 11:18:16.424012 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:18:16Z","lastTransitionTime":"2026-01-26T11:18:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:18:16 crc kubenswrapper[4867]: I0126 11:18:16.530491 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:18:16 crc kubenswrapper[4867]: I0126 11:18:16.530565 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:18:16 crc kubenswrapper[4867]: I0126 11:18:16.530588 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:18:16 crc kubenswrapper[4867]: I0126 11:18:16.530619 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:18:16 crc kubenswrapper[4867]: I0126 11:18:16.530642 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:18:16Z","lastTransitionTime":"2026-01-26T11:18:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 11:18:16 crc kubenswrapper[4867]: I0126 11:18:16.537029 4867 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-13 16:00:43.268002993 +0000 UTC Jan 26 11:18:16 crc kubenswrapper[4867]: I0126 11:18:16.563779 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 11:18:16 crc kubenswrapper[4867]: I0126 11:18:16.563831 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 11:18:16 crc kubenswrapper[4867]: E0126 11:18:16.564021 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 11:18:16 crc kubenswrapper[4867]: I0126 11:18:16.564083 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-nmdmx" Jan 26 11:18:16 crc kubenswrapper[4867]: I0126 11:18:16.564108 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 11:18:16 crc kubenswrapper[4867]: E0126 11:18:16.564282 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 11:18:16 crc kubenswrapper[4867]: E0126 11:18:16.564436 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-nmdmx" podUID="ed024510-edc6-4306-b54b-63facba64419" Jan 26 11:18:16 crc kubenswrapper[4867]: E0126 11:18:16.564629 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 11:18:16 crc kubenswrapper[4867]: I0126 11:18:16.633727 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:18:16 crc kubenswrapper[4867]: I0126 11:18:16.633819 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:18:16 crc kubenswrapper[4867]: I0126 11:18:16.633839 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:18:16 crc kubenswrapper[4867]: I0126 11:18:16.633864 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:18:16 crc kubenswrapper[4867]: I0126 11:18:16.633924 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:18:16Z","lastTransitionTime":"2026-01-26T11:18:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:18:16 crc kubenswrapper[4867]: I0126 11:18:16.736964 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:18:16 crc kubenswrapper[4867]: I0126 11:18:16.737035 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:18:16 crc kubenswrapper[4867]: I0126 11:18:16.737053 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:18:16 crc kubenswrapper[4867]: I0126 11:18:16.737081 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:18:16 crc kubenswrapper[4867]: I0126 11:18:16.737101 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:18:16Z","lastTransitionTime":"2026-01-26T11:18:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:18:16 crc kubenswrapper[4867]: I0126 11:18:16.840452 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:18:16 crc kubenswrapper[4867]: I0126 11:18:16.840498 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:18:16 crc kubenswrapper[4867]: I0126 11:18:16.840507 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:18:16 crc kubenswrapper[4867]: I0126 11:18:16.840523 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:18:16 crc kubenswrapper[4867]: I0126 11:18:16.840533 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:18:16Z","lastTransitionTime":"2026-01-26T11:18:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:18:16 crc kubenswrapper[4867]: I0126 11:18:16.943042 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:18:16 crc kubenswrapper[4867]: I0126 11:18:16.943110 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:18:16 crc kubenswrapper[4867]: I0126 11:18:16.943128 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:18:16 crc kubenswrapper[4867]: I0126 11:18:16.943166 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:18:16 crc kubenswrapper[4867]: I0126 11:18:16.943189 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:18:16Z","lastTransitionTime":"2026-01-26T11:18:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:18:17 crc kubenswrapper[4867]: I0126 11:18:17.046634 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:18:17 crc kubenswrapper[4867]: I0126 11:18:17.046765 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:18:17 crc kubenswrapper[4867]: I0126 11:18:17.046783 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:18:17 crc kubenswrapper[4867]: I0126 11:18:17.046816 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:18:17 crc kubenswrapper[4867]: I0126 11:18:17.046835 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:18:17Z","lastTransitionTime":"2026-01-26T11:18:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:18:17 crc kubenswrapper[4867]: I0126 11:18:17.150960 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:18:17 crc kubenswrapper[4867]: I0126 11:18:17.151017 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:18:17 crc kubenswrapper[4867]: I0126 11:18:17.151030 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:18:17 crc kubenswrapper[4867]: I0126 11:18:17.151051 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:18:17 crc kubenswrapper[4867]: I0126 11:18:17.151065 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:18:17Z","lastTransitionTime":"2026-01-26T11:18:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:18:17 crc kubenswrapper[4867]: I0126 11:18:17.253391 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:18:17 crc kubenswrapper[4867]: I0126 11:18:17.253463 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:18:17 crc kubenswrapper[4867]: I0126 11:18:17.253475 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:18:17 crc kubenswrapper[4867]: I0126 11:18:17.253491 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:18:17 crc kubenswrapper[4867]: I0126 11:18:17.253502 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:18:17Z","lastTransitionTime":"2026-01-26T11:18:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:18:17 crc kubenswrapper[4867]: I0126 11:18:17.356090 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:18:17 crc kubenswrapper[4867]: I0126 11:18:17.356179 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:18:17 crc kubenswrapper[4867]: I0126 11:18:17.356203 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:18:17 crc kubenswrapper[4867]: I0126 11:18:17.356275 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:18:17 crc kubenswrapper[4867]: I0126 11:18:17.356300 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:18:17Z","lastTransitionTime":"2026-01-26T11:18:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:18:17 crc kubenswrapper[4867]: I0126 11:18:17.459014 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:18:17 crc kubenswrapper[4867]: I0126 11:18:17.459082 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:18:17 crc kubenswrapper[4867]: I0126 11:18:17.459092 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:18:17 crc kubenswrapper[4867]: I0126 11:18:17.459106 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:18:17 crc kubenswrapper[4867]: I0126 11:18:17.459121 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:18:17Z","lastTransitionTime":"2026-01-26T11:18:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:18:17 crc kubenswrapper[4867]: I0126 11:18:17.538164 4867 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-22 21:51:51.681056107 +0000 UTC Jan 26 11:18:17 crc kubenswrapper[4867]: I0126 11:18:17.562169 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:18:17 crc kubenswrapper[4867]: I0126 11:18:17.562279 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:18:17 crc kubenswrapper[4867]: I0126 11:18:17.562303 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:18:17 crc kubenswrapper[4867]: I0126 11:18:17.562327 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:18:17 crc kubenswrapper[4867]: I0126 11:18:17.562345 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:18:17Z","lastTransitionTime":"2026-01-26T11:18:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:18:17 crc kubenswrapper[4867]: I0126 11:18:17.665170 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:18:17 crc kubenswrapper[4867]: I0126 11:18:17.665202 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:18:17 crc kubenswrapper[4867]: I0126 11:18:17.665213 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:18:17 crc kubenswrapper[4867]: I0126 11:18:17.665251 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:18:17 crc kubenswrapper[4867]: I0126 11:18:17.665263 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:18:17Z","lastTransitionTime":"2026-01-26T11:18:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:18:17 crc kubenswrapper[4867]: I0126 11:18:17.768286 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:18:17 crc kubenswrapper[4867]: I0126 11:18:17.768361 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:18:17 crc kubenswrapper[4867]: I0126 11:18:17.768383 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:18:17 crc kubenswrapper[4867]: I0126 11:18:17.768409 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:18:17 crc kubenswrapper[4867]: I0126 11:18:17.768426 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:18:17Z","lastTransitionTime":"2026-01-26T11:18:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:18:17 crc kubenswrapper[4867]: I0126 11:18:17.871122 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:18:17 crc kubenswrapper[4867]: I0126 11:18:17.871193 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:18:17 crc kubenswrapper[4867]: I0126 11:18:17.871204 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:18:17 crc kubenswrapper[4867]: I0126 11:18:17.871242 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:18:17 crc kubenswrapper[4867]: I0126 11:18:17.871254 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:18:17Z","lastTransitionTime":"2026-01-26T11:18:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:18:17 crc kubenswrapper[4867]: I0126 11:18:17.952979 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-p8ngn_4a3be637-cf04-4c55-bf72-67fdad83cc44/ovnkube-controller/1.log" Jan 26 11:18:17 crc kubenswrapper[4867]: I0126 11:18:17.954487 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-p8ngn_4a3be637-cf04-4c55-bf72-67fdad83cc44/ovnkube-controller/0.log" Jan 26 11:18:17 crc kubenswrapper[4867]: I0126 11:18:17.959595 4867 generic.go:334] "Generic (PLEG): container finished" podID="4a3be637-cf04-4c55-bf72-67fdad83cc44" containerID="cd8dd6742489d18b8ecac8b1a2bebed1c96dcc42086a9aeb709fc228aa34d23b" exitCode=1 Jan 26 11:18:17 crc kubenswrapper[4867]: I0126 11:18:17.959664 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-p8ngn" event={"ID":"4a3be637-cf04-4c55-bf72-67fdad83cc44","Type":"ContainerDied","Data":"cd8dd6742489d18b8ecac8b1a2bebed1c96dcc42086a9aeb709fc228aa34d23b"} Jan 26 11:18:17 crc kubenswrapper[4867]: I0126 11:18:17.959735 4867 scope.go:117] "RemoveContainer" containerID="9ccaf33118999fa5bccb1930803a8cea6557462756c37135056ea8a5ab813003" Jan 26 11:18:17 crc kubenswrapper[4867]: I0126 11:18:17.963998 4867 scope.go:117] "RemoveContainer" containerID="cd8dd6742489d18b8ecac8b1a2bebed1c96dcc42086a9aeb709fc228aa34d23b" Jan 26 11:18:17 crc kubenswrapper[4867]: E0126 11:18:17.964723 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-p8ngn_openshift-ovn-kubernetes(4a3be637-cf04-4c55-bf72-67fdad83cc44)\"" pod="openshift-ovn-kubernetes/ovnkube-node-p8ngn" podUID="4a3be637-cf04-4c55-bf72-67fdad83cc44" Jan 26 11:18:17 crc kubenswrapper[4867]: I0126 11:18:17.980316 4867 kubelet_node_status.go:724] "Recording event 
message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:18:17 crc kubenswrapper[4867]: I0126 11:18:17.980361 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:18:17 crc kubenswrapper[4867]: I0126 11:18:17.980372 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:18:17 crc kubenswrapper[4867]: I0126 11:18:17.980388 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:18:17 crc kubenswrapper[4867]: I0126 11:18:17.980399 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:18:17Z","lastTransitionTime":"2026-01-26T11:18:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:18:17 crc kubenswrapper[4867]: I0126 11:18:17.993539 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4fbb2033-c5c9-48d1-971f-252b8982e64c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b260f2517962ffaecebf86c2b72c91486b825ca898f907378e8f8cea16d1db92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\
\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6843bd212ce4f17d2ae4d53441d56f7be28f8f49c8994458d611159101e193a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2aba77c7c6a2f5450fa60356c3eccf816726f0074d65a1d5805c4278d31157c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e941db67a8021f5eb4eb6e395c2369d052a4cde25d67eb1885768a064185d8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-
v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc9e90f1b0e6c09e4595a7e450325550012dc52c6a6fc4a95d14b37c0f939ce7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e22739a3c06bddfda493f25aca23cf47d71c8bde27a6b4bb505592b42350752\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":
true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0e22739a3c06bddfda493f25aca23cf47d71c8bde27a6b4bb505592b42350752\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T11:17:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T11:17:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://80b44caec3547caebca126742ca337ad1851edc3e9474127528a9e5184b5ec23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://80b44caec3547caebca126742ca337ad1851edc3e9474127528a9e5184b5ec23\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T11:17:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T11:17:32Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://83f7cf77acd26e7af095c4a5432efde85eef6dd5e434141cb5d6abd12770968e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://83f7cf77acd26e7af095c4a5432efde85eef6dd5e434141cb5d6abd12770968e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T11:17:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2
026-01-26T11:17:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T11:17:30Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:18:17Z is after 2025-08-24T17:21:41Z" Jan 26 11:18:18 crc kubenswrapper[4867]: I0126 11:18:18.006621 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-g6cth" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"115cad9f-057f-4e63-b408-8fa7a358a191\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ff5997c5ecb7b6a4257e45de25c7b8f3cbe39cc5efe49cf2e2baff5447a947b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4wqzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a97a794991491a7b7ac8c0d20d0fa2f4f36c8ba8
82962b5c203933431d324105\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4wqzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T11:17:52Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-g6cth\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:18:18Z is after 2025-08-24T17:21:41Z" Jan 26 11:18:18 crc kubenswrapper[4867]: I0126 11:18:18.019270 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:48Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:18:18Z is after 2025-08-24T17:21:41Z" Jan 26 11:18:18 crc kubenswrapper[4867]: I0126 11:18:18.030706 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:48Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:18:18Z is after 2025-08-24T17:21:41Z" Jan 26 11:18:18 crc kubenswrapper[4867]: I0126 11:18:18.040881 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-wmdmh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6ad862fa-4af9-49f7-a629-ebf54a83ca45\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://726513fd6aefd76b58a4c8cf08fa4588aadcdb02e8dbe3bb4f23004f11eb29dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6hvh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T11:17:51Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-wmdmh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:18:18Z is after 2025-08-24T17:21:41Z" Jan 26 11:18:18 crc kubenswrapper[4867]: I0126 11:18:18.059196 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-p8ngn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4a3be637-cf04-4c55-bf72-67fdad83cc44\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:52Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:52Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c0a7bea27cb1d32235548b7f7e457dbecaf0ac88bdd9866302c758680990d4af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cn6f7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9025121ad416b7b6ff2f2eb7ecb3961eccbbae7b5ce4b394161ad99c5901199\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cn6f7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://adb68b135bd42f78556598af17aa21dc7daff1246e821ca51cb683e46974a85a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cn6f7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://30fe31f1d314dace6e6ee0bc31b2127ce6e6db198975c78505b0f6f07aa6f30c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:55Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cn6f7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://99df8b0b5fa7178489716f40f806caff969c5045ab573ae343b222ed80bf0799\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cn6f7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://192f11212d63cdabfda1649589946ca65e5c4a38805c85b94e4277f2e8c7bf57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cn6f7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd8dd6742489d18b8ecac8b1a2bebed1c96dcc42086a9aeb709fc228aa34d23b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9ccaf33118999fa5bccb1930803a8cea6557462756c37135056ea8a5ab813003\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T11:18:13Z\\\",\\\"message\\\":\\\"\\\\nI0126 11:18:10.643831 6165 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0126 11:18:10.643854 6165 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0126 11:18:10.643876 6165 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0126 11:18:10.643912 6165 handler.go:208] Removed 
*v1.Namespace event handler 1\\\\nI0126 11:18:10.643967 6165 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0126 11:18:10.643970 6165 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0126 11:18:10.643923 6165 handler.go:208] Removed *v1.Node event handler 7\\\\nI0126 11:18:10.643929 6165 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0126 11:18:10.644029 6165 handler.go:208] Removed *v1.Node event handler 2\\\\nI0126 11:18:10.643948 6165 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0126 11:18:10.644059 6165 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0126 11:18:10.644093 6165 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0126 11:18:10.644094 6165 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0126 11:18:10.644122 6165 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0126 11:18:10.644192 6165 factory.go:656] Stopping watch factory\\\\nI0126 11:18:10.644206 6165 handler.go:208] Removed *v1.NetworkPolicy ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T11:18:06Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cd8dd6742489d18b8ecac8b1a2bebed1c96dcc42086a9aeb709fc228aa34d23b\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T11:18:17Z\\\",\\\"message\\\":\\\"ty.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:18:15Z is after 2025-08-24T17:21:41Z]\\\\nI0126 11:18:15.987007 6391 lb_config.go:1031] Cluster endpoints for openshift-controller-manager/controller-manager for network=default are: map[]\\\\nI0126 11:18:15.986973 6391 services_controller.go:434] Service openshift-operator-lifecycle-manager/package-server-manager-metrics 
retrieved from lister for network=default: \\\\u0026Service{ObjectMeta:{package-server-manager-metrics openshift-operator-lifecycle-manager 147f8961-5d3f-4ca0-a8b8-5098188f7802 4671 0 2025-02-23 05:12:35 +0000 UTC \\\\u003cnil\\\\u003e \\\\u003cnil\\\\u003e map[] map[capability.openshift.io/name:OperatorLifecycleManager include.release.openshift.io/ibm-cloud-managed:true include.release.openshift.io/self-managed-high-availability:true service.alpha.openshift.io/serving-cert-secret-name:package-server-manager-serving-cert service.alpha.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1740288168 service.beta.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1740288168] [{config.openshift.io/v1 ClusterVersion version 9101b518-476b-4eea-8fa6-69b0534e5caa 0xc00796458b \\\\u003cnil\\\\u003e}] [] []},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{N\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T11:18:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\"
,\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cn6f7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b58f62893fb0fc0bd355ef42fb5f15830912cfd3a03b5266cb3c3171dbb8bcb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/
serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cn6f7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://388ea9d9185ed0fed9cbb0da08ab9aa6fcf1d81192fe2646cd51b54245c4e34b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://388ea9d9185ed0fed9cbb0da08ab9aa6fcf1d81192fe2646cd51b54245c4e34b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T11:17:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T11:17:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cn6f7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T11:17:52Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-p8ngn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:18:18Z is after 2025-08-24T17:21:41Z" Jan 26 11:18:18 crc kubenswrapper[4867]: I0126 11:18:18.077763 4867 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://613d61cfb62d18a99623d3e4f89763c04f15484b764f8bb0c7a5e4457a3505d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa8d05ed772e4273e8962df2230bc8d45db5cfed967165716438d58c54b9e24b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay
.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:18:18Z is after 2025-08-24T17:21:41Z" Jan 26 11:18:18 crc kubenswrapper[4867]: I0126 11:18:18.082545 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:18:18 crc kubenswrapper[4867]: I0126 11:18:18.082589 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:18:18 crc kubenswrapper[4867]: I0126 11:18:18.082600 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:18:18 crc kubenswrapper[4867]: I0126 11:18:18.082618 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:18:18 crc kubenswrapper[4867]: I0126 11:18:18.082634 4867 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:18:18Z","lastTransitionTime":"2026-01-26T11:18:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 11:18:18 crc kubenswrapper[4867]: I0126 11:18:18.090845 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-hn8xr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dc37e5d1-ba44-4a54-ac36-ab7cdef17212\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://519d5416896aca5923540078a8bd13f39a190ad4acda3d0bfb3375a5dbfe6b80\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\
\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xrjnj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T11:
17:52Z\\\"}}\" for pod \"openshift-multus\"/\"multus-hn8xr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:18:18Z is after 2025-08-24T17:21:41Z" Jan 26 11:18:18 crc kubenswrapper[4867]: I0126 11:18:18.102543 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-nbvlt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf285485-1027-4bdc-bdfa-934ef32e7f5e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:18:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:18:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:18:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:18:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://764c348147bb67a611bc5252c49dfe8f586e6a1a6d6a9e9c6674aabcc3028804\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-
proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:18:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kzhnr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e1bb2e7344b3822e63a68d366f4821de6e131a4cca163ad67cf44e2b83f9ccd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:18:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kzhnr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T11:18:05Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-nbvlt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": 
tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:18:18Z is after 2025-08-24T17:21:41Z" Jan 26 11:18:18 crc kubenswrapper[4867]: I0126 11:18:18.122391 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e36e94ce-bdbb-4b65-b38a-d591d99ec132\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:18:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:18:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://60d611d38d0a8a4fafcb8c6a4598878531015a3c9a31bee179693c997be0f5ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-ce
rts\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e7960050fb97fd034b58d207bdcc7aa794be098102ebc9b50f3b011c2079a292\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8e9df6d5bbbaa36f6ffe9e36239da17b2a7d5d85edebc9f108ede9bf7b7c0a2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://08f8529b91df2d7dcbf1fac6018657124b3922dfd4ad68bd7cf315594d6e7f81\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID
\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4d1617eeb2f19df288b984f761116ae902263d7ccf8406998d0e323fb7d52390\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-26T11:17:50Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0126 11:17:44.107498 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0126 11:17:44.109064 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1825304579/tls.crt::/tmp/serving-cert-1825304579/tls.key\\\\\\\"\\\\nI0126 11:17:50.303460 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0126 11:17:50.308764 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0126 11:17:50.308805 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0126 11:17:50.308906 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0126 11:17:50.308924 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0126 11:17:50.322907 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0126 11:17:50.322946 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 11:17:50.322953 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 11:17:50.322957 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0126 11:17:50.322960 1 secure_serving.go:69] Use of insecure cipher 
'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0126 11:17:50.322963 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0126 11:17:50.322966 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0126 11:17:50.323261 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0126 11:17:50.330430 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T11:17:33Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d41fab03911c1d3b84f4c34da0d1c8e82c8f2691450b5f9663e31ce329bf04b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:33Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b69d7e45a6059b1e01f80cc4efb536069c91cae05c8912d18fd48f25c6ebf51e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\
",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b69d7e45a6059b1e01f80cc4efb536069c91cae05c8912d18fd48f25c6ebf51e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T11:17:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T11:17:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T11:17:30Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:18:18Z is after 2025-08-24T17:21:41Z" Jan 26 11:18:18 crc kubenswrapper[4867]: I0126 11:18:18.137300 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0d7b1047-65d1-4695-b5a5-95ea6a8c3b54\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://237a29b411629c3650250f62fdec7d2c5412763d6cabb2ee0fe8e5e19e320e15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ef0db1149c70e41b7f44d00d43f883d14fcf635d6f34462dd37c8a550a15a4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://45d3dc673a8d20531ae5077a1196a368cac64f6afd15c4eb8f3910ee9bf51317\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc2ddda5781094e59bb54ddf724fa4f948a047b17b01f0185ef651d90a66f36f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-26T11:17:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T11:17:30Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:18:18Z is after 2025-08-24T17:21:41Z" Jan 26 11:18:18 crc kubenswrapper[4867]: I0126 11:18:18.152724 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c572224f36b5a0fc8a86b3e61aa97b96f5bd189d245b31804a11aaa75d04774\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-26T11:18:18Z is after 2025-08-24T17:21:41Z" Jan 26 11:18:18 crc kubenswrapper[4867]: I0126 11:18:18.172719 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:48Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:18:18Z is after 2025-08-24T17:21:41Z" Jan 26 11:18:18 crc kubenswrapper[4867]: I0126 11:18:18.186199 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:18:18 crc kubenswrapper[4867]: I0126 11:18:18.186286 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:18:18 crc kubenswrapper[4867]: I0126 11:18:18.186306 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:18:18 crc kubenswrapper[4867]: I0126 11:18:18.186338 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:18:18 crc kubenswrapper[4867]: I0126 11:18:18.186353 4867 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:18:18Z","lastTransitionTime":"2026-01-26T11:18:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 11:18:18 crc kubenswrapper[4867]: I0126 11:18:18.187623 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:54Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:54Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8b0139861b48347f9916ca780499486eda856ef7c08cfe6c57201daf085bf61\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\
"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:18:18Z is after 2025-08-24T17:21:41Z" Jan 26 11:18:18 crc kubenswrapper[4867]: I0126 11:18:18.211506 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-9fjlf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d0cb57c7-fd32-41c2-b873-a3f017b9f1b1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:18:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:18:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:18:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c7012fb0651d46334c26887a02a5c44a8fc67c2ad3539e5321e16b57071b9a6\\\",\\\"image\\\":\\\"quay.io/openshift-rele
ase-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:18:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhrdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://49541a51fced16e579b4cd10875fe7e660d7674508aba3ea44011642241e6556\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://49541a51fced16e579b4cd10875fe7e660d7674508aba3ea44011642241e6556\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T11:17:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T11:17:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\
\":\\\"kube-api-access-rhrdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://de0898b43ff7857dd40569f0552c8802c9bb5ecdf3cbd23f552dfb402a02044c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://de0898b43ff7857dd40569f0552c8802c9bb5ecdf3cbd23f552dfb402a02044c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T11:17:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T11:17:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhrdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f299de50c4b1a26c4ed82f6f7b17c063e00b8a9de478fa0adafb4abb50195c5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\
\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f299de50c4b1a26c4ed82f6f7b17c063e00b8a9de478fa0adafb4abb50195c5f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T11:17:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T11:17:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhrdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5bf54cdee66861e0419af5e4cabd6e8410b771df717f2e958f5eda8d14c47862\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5bf54cdee66861e0419af5e4cabd6e8410b771df717f2e958f5eda8d14c47862\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T11:17:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T11:17:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run
/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhrdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d802b7e0dce47eace09ec80bf6299fbe57b588891ebab7f34de835f6a7314ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3d802b7e0dce47eace09ec80bf6299fbe57b588891ebab7f34de835f6a7314ac\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T11:17:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T11:17:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhrdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e77fa1081a2d8d26fe07f92196b85dd5b8eb28583ce5489e2f5cbcfc6fff8780\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{
\\\"containerID\\\":\\\"cri-o://e77fa1081a2d8d26fe07f92196b85dd5b8eb28583ce5489e2f5cbcfc6fff8780\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T11:18:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T11:18:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhrdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T11:17:52Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-9fjlf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:18:18Z is after 2025-08-24T17:21:41Z" Jan 26 11:18:18 crc kubenswrapper[4867]: I0126 11:18:18.227539 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-nxkwj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"53ae46dc-30ef-4dfd-b80e-bacd7542634f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a6b647bab472836bbf6aebd01d20d186c5a3fb95f20cc9f44ec837d93c7df617\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b5h2x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T11:17:54Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-nxkwj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:18:18Z is after 2025-08-24T17:21:41Z" Jan 26 11:18:18 crc kubenswrapper[4867]: I0126 11:18:18.242946 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-nmdmx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed024510-edc6-4306-b54b-63facba64419\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:18:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:18:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:18:06Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:18:06Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lvcgj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lvcgj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T11:18:06Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-nmdmx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:18:18Z is after 2025-08-24T17:21:41Z" Jan 26 11:18:18 crc 
kubenswrapper[4867]: I0126 11:18:18.288947 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:18:18 crc kubenswrapper[4867]: I0126 11:18:18.288983 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:18:18 crc kubenswrapper[4867]: I0126 11:18:18.288995 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:18:18 crc kubenswrapper[4867]: I0126 11:18:18.289011 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:18:18 crc kubenswrapper[4867]: I0126 11:18:18.289022 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:18:18Z","lastTransitionTime":"2026-01-26T11:18:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:18:18 crc kubenswrapper[4867]: I0126 11:18:18.391425 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:18:18 crc kubenswrapper[4867]: I0126 11:18:18.391487 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:18:18 crc kubenswrapper[4867]: I0126 11:18:18.391502 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:18:18 crc kubenswrapper[4867]: I0126 11:18:18.391521 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:18:18 crc kubenswrapper[4867]: I0126 11:18:18.391534 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:18:18Z","lastTransitionTime":"2026-01-26T11:18:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:18:18 crc kubenswrapper[4867]: I0126 11:18:18.495282 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:18:18 crc kubenswrapper[4867]: I0126 11:18:18.495330 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:18:18 crc kubenswrapper[4867]: I0126 11:18:18.495339 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:18:18 crc kubenswrapper[4867]: I0126 11:18:18.495357 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:18:18 crc kubenswrapper[4867]: I0126 11:18:18.495367 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:18:18Z","lastTransitionTime":"2026-01-26T11:18:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 11:18:18 crc kubenswrapper[4867]: I0126 11:18:18.538385 4867 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-22 02:26:19.951296969 +0000 UTC Jan 26 11:18:18 crc kubenswrapper[4867]: I0126 11:18:18.563494 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 11:18:18 crc kubenswrapper[4867]: I0126 11:18:18.563497 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-nmdmx" Jan 26 11:18:18 crc kubenswrapper[4867]: I0126 11:18:18.563611 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 11:18:18 crc kubenswrapper[4867]: E0126 11:18:18.563734 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 11:18:18 crc kubenswrapper[4867]: E0126 11:18:18.563833 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-nmdmx" podUID="ed024510-edc6-4306-b54b-63facba64419" Jan 26 11:18:18 crc kubenswrapper[4867]: E0126 11:18:18.563897 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 11:18:18 crc kubenswrapper[4867]: I0126 11:18:18.563455 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 11:18:18 crc kubenswrapper[4867]: E0126 11:18:18.564742 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 11:18:18 crc kubenswrapper[4867]: I0126 11:18:18.599123 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:18:18 crc kubenswrapper[4867]: I0126 11:18:18.599169 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:18:18 crc kubenswrapper[4867]: I0126 11:18:18.599182 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:18:18 crc kubenswrapper[4867]: I0126 11:18:18.599200 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:18:18 crc kubenswrapper[4867]: I0126 11:18:18.599213 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:18:18Z","lastTransitionTime":"2026-01-26T11:18:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:18:18 crc kubenswrapper[4867]: I0126 11:18:18.702750 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:18:18 crc kubenswrapper[4867]: I0126 11:18:18.702797 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:18:18 crc kubenswrapper[4867]: I0126 11:18:18.702809 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:18:18 crc kubenswrapper[4867]: I0126 11:18:18.702827 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:18:18 crc kubenswrapper[4867]: I0126 11:18:18.702839 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:18:18Z","lastTransitionTime":"2026-01-26T11:18:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:18:18 crc kubenswrapper[4867]: I0126 11:18:18.805467 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:18:18 crc kubenswrapper[4867]: I0126 11:18:18.805562 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:18:18 crc kubenswrapper[4867]: I0126 11:18:18.805587 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:18:18 crc kubenswrapper[4867]: I0126 11:18:18.805615 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:18:18 crc kubenswrapper[4867]: I0126 11:18:18.805635 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:18:18Z","lastTransitionTime":"2026-01-26T11:18:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:18:18 crc kubenswrapper[4867]: I0126 11:18:18.907764 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:18:18 crc kubenswrapper[4867]: I0126 11:18:18.907813 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:18:18 crc kubenswrapper[4867]: I0126 11:18:18.907836 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:18:18 crc kubenswrapper[4867]: I0126 11:18:18.907850 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:18:18 crc kubenswrapper[4867]: I0126 11:18:18.907859 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:18:18Z","lastTransitionTime":"2026-01-26T11:18:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:18:18 crc kubenswrapper[4867]: I0126 11:18:18.965460 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-p8ngn_4a3be637-cf04-4c55-bf72-67fdad83cc44/ovnkube-controller/1.log" Jan 26 11:18:19 crc kubenswrapper[4867]: I0126 11:18:19.011523 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:18:19 crc kubenswrapper[4867]: I0126 11:18:19.011574 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:18:19 crc kubenswrapper[4867]: I0126 11:18:19.011584 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:18:19 crc kubenswrapper[4867]: I0126 11:18:19.011601 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:18:19 crc kubenswrapper[4867]: I0126 11:18:19.011611 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:18:19Z","lastTransitionTime":"2026-01-26T11:18:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:18:19 crc kubenswrapper[4867]: I0126 11:18:19.114614 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:18:19 crc kubenswrapper[4867]: I0126 11:18:19.114919 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:18:19 crc kubenswrapper[4867]: I0126 11:18:19.114990 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:18:19 crc kubenswrapper[4867]: I0126 11:18:19.115071 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:18:19 crc kubenswrapper[4867]: I0126 11:18:19.115152 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:18:19Z","lastTransitionTime":"2026-01-26T11:18:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:18:19 crc kubenswrapper[4867]: I0126 11:18:19.218447 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:18:19 crc kubenswrapper[4867]: I0126 11:18:19.218495 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:18:19 crc kubenswrapper[4867]: I0126 11:18:19.218503 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:18:19 crc kubenswrapper[4867]: I0126 11:18:19.218521 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:18:19 crc kubenswrapper[4867]: I0126 11:18:19.218530 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:18:19Z","lastTransitionTime":"2026-01-26T11:18:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:18:19 crc kubenswrapper[4867]: I0126 11:18:19.321679 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:18:19 crc kubenswrapper[4867]: I0126 11:18:19.322084 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:18:19 crc kubenswrapper[4867]: I0126 11:18:19.322289 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:18:19 crc kubenswrapper[4867]: I0126 11:18:19.322454 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:18:19 crc kubenswrapper[4867]: I0126 11:18:19.322593 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:18:19Z","lastTransitionTime":"2026-01-26T11:18:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:18:19 crc kubenswrapper[4867]: I0126 11:18:19.432738 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:18:19 crc kubenswrapper[4867]: I0126 11:18:19.433500 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:18:19 crc kubenswrapper[4867]: I0126 11:18:19.433537 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:18:19 crc kubenswrapper[4867]: I0126 11:18:19.433564 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:18:19 crc kubenswrapper[4867]: I0126 11:18:19.433582 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:18:19Z","lastTransitionTime":"2026-01-26T11:18:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:18:19 crc kubenswrapper[4867]: I0126 11:18:19.434505 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 26 11:18:19 crc kubenswrapper[4867]: I0126 11:18:19.448652 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/openshift-kube-scheduler-crc"] Jan 26 11:18:19 crc kubenswrapper[4867]: I0126 11:18:19.451660 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-nmdmx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed024510-edc6-4306-b54b-63facba64419\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:18:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:18:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:18:06Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:18:06Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lvcgj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lvcgj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T11:18:06Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-nmdmx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:18:19Z is after 2025-08-24T17:21:41Z" Jan 26 11:18:19 crc 
kubenswrapper[4867]: I0126 11:18:19.467529 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:48Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:18:19Z is after 2025-08-24T17:21:41Z" Jan 26 11:18:19 crc kubenswrapper[4867]: I0126 11:18:19.481735 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:54Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:54Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8b0139861b48347f9916ca780499486eda856ef7c08cfe6c57201daf085bf61\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-26T11:18:19Z is after 2025-08-24T17:21:41Z" Jan 26 11:18:19 crc kubenswrapper[4867]: I0126 11:18:19.501176 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-9fjlf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d0cb57c7-fd32-41c2-b873-a3f017b9f1b1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:18:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:18:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:18:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c7012fb0651d46334c26887a02a5c44a8fc67c2ad3539e5321e16b57071b9a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:18:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-a
ccess-rhrdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://49541a51fced16e579b4cd10875fe7e660d7674508aba3ea44011642241e6556\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://49541a51fced16e579b4cd10875fe7e660d7674508aba3ea44011642241e6556\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T11:17:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T11:17:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhrdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://de0898b43ff7857dd40569f0552c8802c9bb5ecdf3cbd23f552dfb402a02044c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"
started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://de0898b43ff7857dd40569f0552c8802c9bb5ecdf3cbd23f552dfb402a02044c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T11:17:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T11:17:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhrdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f299de50c4b1a26c4ed82f6f7b17c063e00b8a9de478fa0adafb4abb50195c5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f299de50c4b1a26c4ed82f6f7b17c063e00b8a9de478fa0adafb4abb50195c5f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T11:17:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T11:17:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\
\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhrdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5bf54cdee66861e0419af5e4cabd6e8410b771df717f2e958f5eda8d14c47862\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5bf54cdee66861e0419af5e4cabd6e8410b771df717f2e958f5eda8d14c47862\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T11:17:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T11:17:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhrdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d802b7e0dce47eace09ec80bf6299fbe57b588891ebab7f34de835f6a7314ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabout
s-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3d802b7e0dce47eace09ec80bf6299fbe57b588891ebab7f34de835f6a7314ac\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T11:17:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T11:17:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhrdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e77fa1081a2d8d26fe07f92196b85dd5b8eb28583ce5489e2f5cbcfc6fff8780\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e77fa1081a2d8d26fe07f92196b85dd5b8eb28583ce5489e2f5cbcfc6fff8780\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T11:18:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T11:18:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhrdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T11:17:52Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-9fjlf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:18:19Z is after 2025-08-24T17:21:41Z" Jan 26 11:18:19 crc kubenswrapper[4867]: I0126 11:18:19.512568 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-nxkwj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"53ae46dc-30ef-4dfd-b80e-bacd7542634f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a6b647bab472836bbf6aebd01d20d186c5a3fb95f20cc9f44ec837d93c7df617\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"image
ID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b5h2x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T11:17:54Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-nxkwj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:18:19Z is after 2025-08-24T17:21:41Z" Jan 26 11:18:19 crc kubenswrapper[4867]: I0126 11:18:19.536515 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:18:19 crc kubenswrapper[4867]: I0126 11:18:19.537297 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:18:19 crc kubenswrapper[4867]: I0126 11:18:19.537405 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:18:19 crc kubenswrapper[4867]: I0126 11:18:19.537495 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" 
Jan 26 11:18:19 crc kubenswrapper[4867]: I0126 11:18:19.537587 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:18:19Z","lastTransitionTime":"2026-01-26T11:18:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 11:18:19 crc kubenswrapper[4867]: I0126 11:18:19.538907 4867 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-28 06:40:36.96360081 +0000 UTC Jan 26 11:18:19 crc kubenswrapper[4867]: I0126 11:18:19.540756 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4fbb2033-c5c9-48d1-971f-252b8982e64c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b260f2517962ffaecebf86c2b72c91486b825ca898f907378e8f8cea16d1db92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877
441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6843bd212ce4f17d2ae4d53441d56f7be28f8f49c8994458d611159101e193a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2aba77c7c6a2f5450fa60356c3eccf816726f0074d65a1d5805c4278d31157c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036
cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e941db67a8021f5eb4eb6e395c2369d052a4cde25d67eb1885768a064185d8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc9e90f1b0e6c09e4595a7e450325550012dc52c6a6fc4a95d14b37c0f939ce7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc
/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e22739a3c06bddfda493f25aca23cf47d71c8bde27a6b4bb505592b42350752\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0e22739a3c06bddfda493f25aca23cf47d71c8bde27a6b4bb505592b42350752\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T11:17:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T11:17:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://80b44caec3547caebca126742ca337ad1851edc3e9474127528a9e5184b5ec23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://80b44caec3547caebca126742ca337ad1851edc3e9474127528a9e5184b5ec23\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T11:17:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T11:17:32Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://83f7cf77acd26e7af095c4a5432
efde85eef6dd5e434141cb5d6abd12770968e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://83f7cf77acd26e7af095c4a5432efde85eef6dd5e434141cb5d6abd12770968e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T11:17:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T11:17:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T11:17:30Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:18:19Z is after 2025-08-24T17:21:41Z" Jan 26 11:18:19 crc kubenswrapper[4867]: I0126 11:18:19.555078 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-g6cth" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"115cad9f-057f-4e63-b408-8fa7a358a191\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ff5997c5ecb7b6a4257e45de25c7b8f3cbe39cc5efe49cf2e2baff5447a947b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4wqzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a97a794991491a7b7ac8c0d20d0fa2f4f36c8ba8
82962b5c203933431d324105\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4wqzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T11:17:52Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-g6cth\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:18:19Z is after 2025-08-24T17:21:41Z" Jan 26 11:18:19 crc kubenswrapper[4867]: I0126 11:18:19.570453 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:48Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:18:19Z is after 2025-08-24T17:21:41Z" Jan 26 11:18:19 crc kubenswrapper[4867]: I0126 11:18:19.590590 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:48Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:18:19Z is after 2025-08-24T17:21:41Z" Jan 26 11:18:19 crc kubenswrapper[4867]: I0126 11:18:19.602008 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-wmdmh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6ad862fa-4af9-49f7-a629-ebf54a83ca45\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://726513fd6aefd76b58a4c8cf08fa4588aadcdb02e8dbe3bb4f23004f11eb29dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6hvh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T11:17:51Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-wmdmh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:18:19Z is after 2025-08-24T17:21:41Z" Jan 26 11:18:19 crc kubenswrapper[4867]: I0126 11:18:19.626545 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-p8ngn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4a3be637-cf04-4c55-bf72-67fdad83cc44\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:52Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:52Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c0a7bea27cb1d32235548b7f7e457dbecaf0ac88bdd9866302c758680990d4af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cn6f7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9025121ad416b7b6ff2f2eb7ecb3961eccbbae7b5ce4b394161ad99c5901199\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cn6f7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://adb68b135bd42f78556598af17aa21dc7daff1246e821ca51cb683e46974a85a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cn6f7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://30fe31f1d314dace6e6ee0bc31b2127ce6e6db198975c78505b0f6f07aa6f30c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:55Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cn6f7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://99df8b0b5fa7178489716f40f806caff969c5045ab573ae343b222ed80bf0799\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cn6f7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://192f11212d63cdabfda1649589946ca65e5c4a38805c85b94e4277f2e8c7bf57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cn6f7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd8dd6742489d18b8ecac8b1a2bebed1c96dcc42086a9aeb709fc228aa34d23b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9ccaf33118999fa5bccb1930803a8cea6557462756c37135056ea8a5ab813003\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T11:18:13Z\\\",\\\"message\\\":\\\"\\\\nI0126 11:18:10.643831 6165 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0126 11:18:10.643854 6165 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0126 11:18:10.643876 6165 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0126 11:18:10.643912 6165 handler.go:208] Removed 
*v1.Namespace event handler 1\\\\nI0126 11:18:10.643967 6165 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0126 11:18:10.643970 6165 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0126 11:18:10.643923 6165 handler.go:208] Removed *v1.Node event handler 7\\\\nI0126 11:18:10.643929 6165 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0126 11:18:10.644029 6165 handler.go:208] Removed *v1.Node event handler 2\\\\nI0126 11:18:10.643948 6165 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0126 11:18:10.644059 6165 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0126 11:18:10.644093 6165 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0126 11:18:10.644094 6165 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0126 11:18:10.644122 6165 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0126 11:18:10.644192 6165 factory.go:656] Stopping watch factory\\\\nI0126 11:18:10.644206 6165 handler.go:208] Removed *v1.NetworkPolicy ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T11:18:06Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cd8dd6742489d18b8ecac8b1a2bebed1c96dcc42086a9aeb709fc228aa34d23b\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T11:18:17Z\\\",\\\"message\\\":\\\"ty.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:18:15Z is after 2025-08-24T17:21:41Z]\\\\nI0126 11:18:15.987007 6391 lb_config.go:1031] Cluster endpoints for openshift-controller-manager/controller-manager for network=default are: map[]\\\\nI0126 11:18:15.986973 6391 services_controller.go:434] Service openshift-operator-lifecycle-manager/package-server-manager-metrics 
retrieved from lister for network=default: \\\\u0026Service{ObjectMeta:{package-server-manager-metrics openshift-operator-lifecycle-manager 147f8961-5d3f-4ca0-a8b8-5098188f7802 4671 0 2025-02-23 05:12:35 +0000 UTC \\\\u003cnil\\\\u003e \\\\u003cnil\\\\u003e map[] map[capability.openshift.io/name:OperatorLifecycleManager include.release.openshift.io/ibm-cloud-managed:true include.release.openshift.io/self-managed-high-availability:true service.alpha.openshift.io/serving-cert-secret-name:package-server-manager-serving-cert service.alpha.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1740288168 service.beta.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1740288168] [{config.openshift.io/v1 ClusterVersion version 9101b518-476b-4eea-8fa6-69b0534e5caa 0xc00796458b \\\\u003cnil\\\\u003e}] [] []},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{N\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T11:18:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\"
,\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cn6f7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b58f62893fb0fc0bd355ef42fb5f15830912cfd3a03b5266cb3c3171dbb8bcb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/
serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cn6f7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://388ea9d9185ed0fed9cbb0da08ab9aa6fcf1d81192fe2646cd51b54245c4e34b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://388ea9d9185ed0fed9cbb0da08ab9aa6fcf1d81192fe2646cd51b54245c4e34b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T11:17:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T11:17:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cn6f7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T11:17:52Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-p8ngn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:18:19Z is after 2025-08-24T17:21:41Z" Jan 26 11:18:19 crc kubenswrapper[4867]: I0126 11:18:19.640130 4867 kubelet_node_status.go:724] "Recording event message for 
node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:18:19 crc kubenswrapper[4867]: I0126 11:18:19.640191 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:18:19 crc kubenswrapper[4867]: I0126 11:18:19.640200 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:18:19 crc kubenswrapper[4867]: I0126 11:18:19.640217 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:18:19 crc kubenswrapper[4867]: I0126 11:18:19.640253 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:18:19Z","lastTransitionTime":"2026-01-26T11:18:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:18:19 crc kubenswrapper[4867]: I0126 11:18:19.645141 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c572224f36b5a0fc8a86b3e61aa97b96f5bd189d245b31804a11aaa75d04774\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:18:19Z is after 2025-08-24T17:21:41Z" Jan 26 11:18:19 crc kubenswrapper[4867]: I0126 11:18:19.656731 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://613d61cfb62d18a99623d3e4f89763c04f15484b764f8bb0c7a5e4457a3505d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-id
entity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa8d05ed772e4273e8962df2230bc8d45db5cfed967165716438d58c54b9e24b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:18:19Z is after 2025-08-24T17:21:41Z" Jan 26 11:18:19 crc kubenswrapper[4867]: I0126 11:18:19.670435 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-hn8xr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dc37e5d1-ba44-4a54-ac36-ab7cdef17212\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://519d5416896aca5923540078a8bd13f39a190ad4acda3d0bfb3375a5dbfe6b80\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xrjnj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T11:17:52Z\\\"}}\" for pod \"openshift-multus\"/\"multus-hn8xr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:18:19Z is after 2025-08-24T17:21:41Z" Jan 26 11:18:19 crc kubenswrapper[4867]: I0126 11:18:19.683901 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-nbvlt" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf285485-1027-4bdc-bdfa-934ef32e7f5e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:18:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:18:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:18:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:18:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://764c348147bb67a611bc5252c49dfe8f586e6a1a6d6a9e9c6674aabcc3028804\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:18:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kzhnr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e1bb2e73
44b3822e63a68d366f4821de6e131a4cca163ad67cf44e2b83f9ccd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:18:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kzhnr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T11:18:05Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-nbvlt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:18:19Z is after 2025-08-24T17:21:41Z" Jan 26 11:18:19 crc kubenswrapper[4867]: I0126 11:18:19.699251 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e36e94ce-bdbb-4b65-b38a-d591d99ec132\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:18:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:18:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://60d611d38d0a8a4fafcb8c6a4598878531015a3c9a31bee179693c997be0f5ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e7960050fb97fd034b58d207bdcc7aa794be098102ebc9b50f3b011c2079a292\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8e9df6d5bbbaa36f6ffe9e36239da17b2a7d5d85edebc9f108ede9bf7b7c0a2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://08f8529b91df2d7dcbf1fac6018657124b3922dfd4ad68bd7cf315594d6e7f81\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4d1617eeb2f19df288b984f761116ae902263d7ccf8406998d0e323fb7d52390\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-26T11:17:50Z\\\"
,\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0126 11:17:44.107498 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0126 11:17:44.109064 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1825304579/tls.crt::/tmp/serving-cert-1825304579/tls.key\\\\\\\"\\\\nI0126 11:17:50.303460 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0126 11:17:50.308764 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0126 11:17:50.308805 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0126 11:17:50.308906 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0126 11:17:50.308924 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0126 11:17:50.322907 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0126 11:17:50.322946 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 11:17:50.322953 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 11:17:50.322957 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0126 11:17:50.322960 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0126 11:17:50.322963 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0126 11:17:50.322966 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0126 11:17:50.323261 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all 
endpoints registered and discovery information is complete\\\\nF0126 11:17:50.330430 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T11:17:33Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d41fab03911c1d3b84f4c34da0d1c8e82c8f2691450b5f9663e31ce329bf04b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:33Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b69d7e45a6059b1e01f80cc4efb536069c91cae05c8912d18fd48f25c6ebf51e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b69d7e45a6059b1e01f80cc4efb536069
c91cae05c8912d18fd48f25c6ebf51e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T11:17:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T11:17:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T11:17:30Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:18:19Z is after 2025-08-24T17:21:41Z" Jan 26 11:18:19 crc kubenswrapper[4867]: I0126 11:18:19.714349 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0d7b1047-65d1-4695-b5a5-95ea6a8c3b54\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://237a29b411629c3650250f62fdec7d2c5412763d6cabb2ee0fe8e5e19e320e15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ef0db1149c70e41b7f44d00d43f883d14fcf635d6f34462dd37c8a550a15a4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://45d3dc673a8d20531ae5077a1196a368cac64f6afd15c4eb8f3910ee9bf51317\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc2ddda5781094e59bb54ddf724fa4f948a047b17b01f0185ef651d90a66f36f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-26T11:17:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T11:17:30Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:18:19Z is after 2025-08-24T17:21:41Z" Jan 26 11:18:19 crc kubenswrapper[4867]: I0126 11:18:19.742672 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:18:19 crc kubenswrapper[4867]: I0126 11:18:19.742753 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:18:19 crc kubenswrapper[4867]: I0126 11:18:19.742771 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:18:19 crc kubenswrapper[4867]: I0126 11:18:19.742796 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:18:19 crc kubenswrapper[4867]: I0126 11:18:19.742811 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:18:19Z","lastTransitionTime":"2026-01-26T11:18:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns 
error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 11:18:19 crc kubenswrapper[4867]: I0126 11:18:19.845389 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:18:19 crc kubenswrapper[4867]: I0126 11:18:19.845436 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:18:19 crc kubenswrapper[4867]: I0126 11:18:19.845449 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:18:19 crc kubenswrapper[4867]: I0126 11:18:19.845467 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:18:19 crc kubenswrapper[4867]: I0126 11:18:19.845476 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:18:19Z","lastTransitionTime":"2026-01-26T11:18:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:18:19 crc kubenswrapper[4867]: I0126 11:18:19.948620 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:18:19 crc kubenswrapper[4867]: I0126 11:18:19.948700 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:18:19 crc kubenswrapper[4867]: I0126 11:18:19.948721 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:18:19 crc kubenswrapper[4867]: I0126 11:18:19.948747 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:18:19 crc kubenswrapper[4867]: I0126 11:18:19.948767 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:18:19Z","lastTransitionTime":"2026-01-26T11:18:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:18:20 crc kubenswrapper[4867]: I0126 11:18:20.052644 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:18:20 crc kubenswrapper[4867]: I0126 11:18:20.052715 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:18:20 crc kubenswrapper[4867]: I0126 11:18:20.052734 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:18:20 crc kubenswrapper[4867]: I0126 11:18:20.052762 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:18:20 crc kubenswrapper[4867]: I0126 11:18:20.052784 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:18:20Z","lastTransitionTime":"2026-01-26T11:18:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:18:20 crc kubenswrapper[4867]: I0126 11:18:20.155303 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:18:20 crc kubenswrapper[4867]: I0126 11:18:20.155353 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:18:20 crc kubenswrapper[4867]: I0126 11:18:20.155363 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:18:20 crc kubenswrapper[4867]: I0126 11:18:20.155378 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:18:20 crc kubenswrapper[4867]: I0126 11:18:20.155388 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:18:20Z","lastTransitionTime":"2026-01-26T11:18:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:18:20 crc kubenswrapper[4867]: I0126 11:18:20.258323 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:18:20 crc kubenswrapper[4867]: I0126 11:18:20.258390 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:18:20 crc kubenswrapper[4867]: I0126 11:18:20.258408 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:18:20 crc kubenswrapper[4867]: I0126 11:18:20.258436 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:18:20 crc kubenswrapper[4867]: I0126 11:18:20.258457 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:18:20Z","lastTransitionTime":"2026-01-26T11:18:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:18:20 crc kubenswrapper[4867]: I0126 11:18:20.362158 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:18:20 crc kubenswrapper[4867]: I0126 11:18:20.362212 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:18:20 crc kubenswrapper[4867]: I0126 11:18:20.362254 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:18:20 crc kubenswrapper[4867]: I0126 11:18:20.362281 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:18:20 crc kubenswrapper[4867]: I0126 11:18:20.362296 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:18:20Z","lastTransitionTime":"2026-01-26T11:18:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:18:20 crc kubenswrapper[4867]: I0126 11:18:20.465734 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:18:20 crc kubenswrapper[4867]: I0126 11:18:20.465842 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:18:20 crc kubenswrapper[4867]: I0126 11:18:20.465865 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:18:20 crc kubenswrapper[4867]: I0126 11:18:20.465898 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:18:20 crc kubenswrapper[4867]: I0126 11:18:20.465922 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:18:20Z","lastTransitionTime":"2026-01-26T11:18:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 11:18:20 crc kubenswrapper[4867]: I0126 11:18:20.539997 4867 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-24 10:41:54.582253201 +0000 UTC Jan 26 11:18:20 crc kubenswrapper[4867]: I0126 11:18:20.563879 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 11:18:20 crc kubenswrapper[4867]: I0126 11:18:20.563947 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 11:18:20 crc kubenswrapper[4867]: E0126 11:18:20.564069 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 11:18:20 crc kubenswrapper[4867]: I0126 11:18:20.564132 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-nmdmx" Jan 26 11:18:20 crc kubenswrapper[4867]: I0126 11:18:20.564328 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 11:18:20 crc kubenswrapper[4867]: E0126 11:18:20.564297 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 11:18:20 crc kubenswrapper[4867]: E0126 11:18:20.564475 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 11:18:20 crc kubenswrapper[4867]: E0126 11:18:20.564838 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-nmdmx" podUID="ed024510-edc6-4306-b54b-63facba64419" Jan 26 11:18:20 crc kubenswrapper[4867]: I0126 11:18:20.569963 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:18:20 crc kubenswrapper[4867]: I0126 11:18:20.570032 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:18:20 crc kubenswrapper[4867]: I0126 11:18:20.570062 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:18:20 crc kubenswrapper[4867]: I0126 11:18:20.570117 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:18:20 crc kubenswrapper[4867]: I0126 11:18:20.570144 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:18:20Z","lastTransitionTime":"2026-01-26T11:18:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:18:20 crc kubenswrapper[4867]: I0126 11:18:20.603153 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4fbb2033-c5c9-48d1-971f-252b8982e64c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b260f2517962ffaecebf86c2b72c91486b825ca898f907378e8f8cea16d1db92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\
\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6843bd212ce4f17d2ae4d53441d56f7be28f8f49c8994458d611159101e193a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2aba77c7c6a2f5450fa60356c3eccf816726f0074d65a1d5805c4278d31157c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e941db67a8021f5eb4eb6e395c2369d052a4cde25d67eb1885768a064185d8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-
v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc9e90f1b0e6c09e4595a7e450325550012dc52c6a6fc4a95d14b37c0f939ce7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e22739a3c06bddfda493f25aca23cf47d71c8bde27a6b4bb505592b42350752\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":
true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0e22739a3c06bddfda493f25aca23cf47d71c8bde27a6b4bb505592b42350752\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T11:17:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T11:17:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://80b44caec3547caebca126742ca337ad1851edc3e9474127528a9e5184b5ec23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://80b44caec3547caebca126742ca337ad1851edc3e9474127528a9e5184b5ec23\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T11:17:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T11:17:32Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://83f7cf77acd26e7af095c4a5432efde85eef6dd5e434141cb5d6abd12770968e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://83f7cf77acd26e7af095c4a5432efde85eef6dd5e434141cb5d6abd12770968e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T11:17:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2
026-01-26T11:17:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T11:17:30Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:18:20Z is after 2025-08-24T17:21:41Z" Jan 26 11:18:20 crc kubenswrapper[4867]: I0126 11:18:20.622103 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-g6cth" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"115cad9f-057f-4e63-b408-8fa7a358a191\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ff5997c5ecb7b6a4257e45de25c7b8f3cbe39cc5efe49cf2e2baff5447a947b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4wqzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a97a794991491a7b7ac8c0d20d0fa2f4f36c8ba8
82962b5c203933431d324105\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4wqzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T11:17:52Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-g6cth\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:18:20Z is after 2025-08-24T17:21:41Z" Jan 26 11:18:20 crc kubenswrapper[4867]: I0126 11:18:20.638849 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-wmdmh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6ad862fa-4af9-49f7-a629-ebf54a83ca45\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://726513fd6aefd76b58a4c8cf08fa4588aadcdb02e8dbe3bb4f23004f11eb29dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6hvh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T11:17:51Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-wmdmh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:18:20Z is after 2025-08-24T17:21:41Z" Jan 26 11:18:20 crc kubenswrapper[4867]: I0126 11:18:20.665908 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-p8ngn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4a3be637-cf04-4c55-bf72-67fdad83cc44\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:52Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:52Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c0a7bea27cb1d32235548b7f7e457dbecaf0ac88bdd9866302c758680990d4af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cn6f7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9025121ad416b7b6ff2f2eb7ecb3961eccbbae7b5ce4b394161ad99c5901199\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cn6f7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://adb68b135bd42f78556598af17aa21dc7daff1246e821ca51cb683e46974a85a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cn6f7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://30fe31f1d314dace6e6ee0bc31b2127ce6e6db198975c78505b0f6f07aa6f30c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:55Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cn6f7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://99df8b0b5fa7178489716f40f806caff969c5045ab573ae343b222ed80bf0799\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cn6f7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://192f11212d63cdabfda1649589946ca65e5c4a38805c85b94e4277f2e8c7bf57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cn6f7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd8dd6742489d18b8ecac8b1a2bebed1c96dcc42086a9aeb709fc228aa34d23b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9ccaf33118999fa5bccb1930803a8cea6557462756c37135056ea8a5ab813003\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T11:18:13Z\\\",\\\"message\\\":\\\"\\\\nI0126 11:18:10.643831 6165 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0126 11:18:10.643854 6165 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0126 11:18:10.643876 6165 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0126 11:18:10.643912 6165 handler.go:208] Removed 
*v1.Namespace event handler 1\\\\nI0126 11:18:10.643967 6165 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0126 11:18:10.643970 6165 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0126 11:18:10.643923 6165 handler.go:208] Removed *v1.Node event handler 7\\\\nI0126 11:18:10.643929 6165 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0126 11:18:10.644029 6165 handler.go:208] Removed *v1.Node event handler 2\\\\nI0126 11:18:10.643948 6165 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0126 11:18:10.644059 6165 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0126 11:18:10.644093 6165 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0126 11:18:10.644094 6165 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0126 11:18:10.644122 6165 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0126 11:18:10.644192 6165 factory.go:656] Stopping watch factory\\\\nI0126 11:18:10.644206 6165 handler.go:208] Removed *v1.NetworkPolicy ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T11:18:06Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cd8dd6742489d18b8ecac8b1a2bebed1c96dcc42086a9aeb709fc228aa34d23b\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T11:18:17Z\\\",\\\"message\\\":\\\"ty.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:18:15Z is after 2025-08-24T17:21:41Z]\\\\nI0126 11:18:15.987007 6391 lb_config.go:1031] Cluster endpoints for openshift-controller-manager/controller-manager for network=default are: map[]\\\\nI0126 11:18:15.986973 6391 services_controller.go:434] Service openshift-operator-lifecycle-manager/package-server-manager-metrics 
retrieved from lister for network=default: \\\\u0026Service{ObjectMeta:{package-server-manager-metrics openshift-operator-lifecycle-manager 147f8961-5d3f-4ca0-a8b8-5098188f7802 4671 0 2025-02-23 05:12:35 +0000 UTC \\\\u003cnil\\\\u003e \\\\u003cnil\\\\u003e map[] map[capability.openshift.io/name:OperatorLifecycleManager include.release.openshift.io/ibm-cloud-managed:true include.release.openshift.io/self-managed-high-availability:true service.alpha.openshift.io/serving-cert-secret-name:package-server-manager-serving-cert service.alpha.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1740288168 service.beta.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1740288168] [{config.openshift.io/v1 ClusterVersion version 9101b518-476b-4eea-8fa6-69b0534e5caa 0xc00796458b \\\\u003cnil\\\\u003e}] [] []},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{N\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T11:18:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\"
,\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cn6f7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b58f62893fb0fc0bd355ef42fb5f15830912cfd3a03b5266cb3c3171dbb8bcb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/
serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cn6f7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://388ea9d9185ed0fed9cbb0da08ab9aa6fcf1d81192fe2646cd51b54245c4e34b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://388ea9d9185ed0fed9cbb0da08ab9aa6fcf1d81192fe2646cd51b54245c4e34b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T11:17:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T11:17:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cn6f7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T11:17:52Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-p8ngn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:18:20Z is after 2025-08-24T17:21:41Z" Jan 26 11:18:20 crc kubenswrapper[4867]: I0126 11:18:20.673387 4867 kubelet_node_status.go:724] "Recording event message for 
node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:18:20 crc kubenswrapper[4867]: I0126 11:18:20.673441 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:18:20 crc kubenswrapper[4867]: I0126 11:18:20.673456 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:18:20 crc kubenswrapper[4867]: I0126 11:18:20.673478 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:18:20 crc kubenswrapper[4867]: I0126 11:18:20.673491 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:18:20Z","lastTransitionTime":"2026-01-26T11:18:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:18:20 crc kubenswrapper[4867]: I0126 11:18:20.690900 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:48Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:18:20Z is after 2025-08-24T17:21:41Z" Jan 26 11:18:20 crc kubenswrapper[4867]: I0126 11:18:20.711699 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:48Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:18:20Z is after 2025-08-24T17:21:41Z" Jan 26 11:18:20 crc kubenswrapper[4867]: I0126 11:18:20.734494 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e36e94ce-bdbb-4b65-b38a-d591d99ec132\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:18:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:18:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://60d611d38d0a8a4fafcb8c6a4598878531015a3c9a31bee179693c997be0f5ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e7960050fb97fd034b58d207bdcc7aa794be098102ebc9b50f3b011c2079a292\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8e9df6d5bbbaa36f6ffe9e36239da17b2a7d5d85edebc9f108ede9bf7b7c0a2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://08f8529b91df2d7dcbf1fac6018657124b3922dfd4ad68bd7cf315594d6e7f81\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4d1617eeb2f19df288b984f761116ae902263d7ccf8406998d0e323fb7d52390\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-26T11:17:50Z\\\"
,\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0126 11:17:44.107498 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0126 11:17:44.109064 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1825304579/tls.crt::/tmp/serving-cert-1825304579/tls.key\\\\\\\"\\\\nI0126 11:17:50.303460 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0126 11:17:50.308764 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0126 11:17:50.308805 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0126 11:17:50.308906 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0126 11:17:50.308924 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0126 11:17:50.322907 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0126 11:17:50.322946 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 11:17:50.322953 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 11:17:50.322957 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0126 11:17:50.322960 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0126 11:17:50.322963 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0126 11:17:50.322966 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0126 11:17:50.323261 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all 
endpoints registered and discovery information is complete\\\\nF0126 11:17:50.330430 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T11:17:33Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d41fab03911c1d3b84f4c34da0d1c8e82c8f2691450b5f9663e31ce329bf04b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:33Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b69d7e45a6059b1e01f80cc4efb536069c91cae05c8912d18fd48f25c6ebf51e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b69d7e45a6059b1e01f80cc4efb536069
c91cae05c8912d18fd48f25c6ebf51e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T11:17:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T11:17:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T11:17:30Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:18:20Z is after 2025-08-24T17:21:41Z" Jan 26 11:18:20 crc kubenswrapper[4867]: I0126 11:18:20.750579 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0d7b1047-65d1-4695-b5a5-95ea6a8c3b54\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://237a29b411629c3650250f62fdec7d2c5412763d6cabb2ee0fe8e5e19e320e15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ef0db1149c70e41b7f44d00d43f883d14fcf635d6f34462dd37c8a550a15a4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://45d3dc673a8d20531ae5077a1196a368cac64f6afd15c4eb8f3910ee9bf51317\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc2ddda5781094e59bb54ddf724fa4f948a047b17b01f0185ef651d90a66f36f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-26T11:17:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T11:17:30Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:18:20Z is after 2025-08-24T17:21:41Z" Jan 26 11:18:20 crc kubenswrapper[4867]: I0126 11:18:20.766087 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c572224f36b5a0fc8a86b3e61aa97b96f5bd189d245b31804a11aaa75d04774\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-26T11:18:20Z is after 2025-08-24T17:21:41Z" Jan 26 11:18:20 crc kubenswrapper[4867]: I0126 11:18:20.776325 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:18:20 crc kubenswrapper[4867]: I0126 11:18:20.776390 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:18:20 crc kubenswrapper[4867]: I0126 11:18:20.776411 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:18:20 crc kubenswrapper[4867]: I0126 11:18:20.776436 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:18:20 crc kubenswrapper[4867]: I0126 11:18:20.776452 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:18:20Z","lastTransitionTime":"2026-01-26T11:18:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:18:20 crc kubenswrapper[4867]: I0126 11:18:20.781615 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://613d61cfb62d18a99623d3e4f89763c04f15484b764f8bb0c7a5e4457a3505d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa8d05ed772e4273e8962df2230bc8d45db5cfed967165716438d58c54b9e24b\\\",\\\
"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:18:20Z is after 2025-08-24T17:21:41Z" Jan 26 11:18:20 crc kubenswrapper[4867]: I0126 11:18:20.796499 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-hn8xr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dc37e5d1-ba44-4a54-ac36-ab7cdef17212\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://519d5416896aca5923540078a8bd13f39a190ad4acda3d0bfb3375a5dbfe6b80\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xrjnj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T11:17:52Z\\\"}}\" for pod \"openshift-multus\"/\"multus-hn8xr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:18:20Z is after 2025-08-24T17:21:41Z" Jan 26 11:18:20 crc kubenswrapper[4867]: I0126 11:18:20.810057 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-nbvlt" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf285485-1027-4bdc-bdfa-934ef32e7f5e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:18:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:18:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:18:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:18:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://764c348147bb67a611bc5252c49dfe8f586e6a1a6d6a9e9c6674aabcc3028804\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:18:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kzhnr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e1bb2e73
44b3822e63a68d366f4821de6e131a4cca163ad67cf44e2b83f9ccd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:18:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kzhnr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T11:18:05Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-nbvlt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:18:20Z is after 2025-08-24T17:21:41Z" Jan 26 11:18:20 crc kubenswrapper[4867]: I0126 11:18:20.824120 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a4b0f2b9-fac8-442e-89a1-43ebff8d4268\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:18:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:18:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2140f6b328ef3b937ef0009c1ce35265a18b51c6efb7e8785870affecd68dace\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://21a6c8852b9648bd5bee43aee9c3fae16363aeaf1ad05dcaa41a04775784b108\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ef4b66f160065f550844a298bf29dcd3f12879c1312554968eba3c1b2268303\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0afa38d4a8ffa664649d154240a8d74ee09bc074127e5edea85ec1de553723fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://0afa38d4a8ffa664649d154240a8d74ee09bc074127e5edea85ec1de553723fe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T11:17:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T11:17:31Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T11:17:30Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:18:20Z is after 2025-08-24T17:21:41Z" Jan 26 11:18:20 crc kubenswrapper[4867]: I0126 11:18:20.844815 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-9fjlf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d0cb57c7-fd32-41c2-b873-a3f017b9f1b1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:18:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:18:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:18:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c7012fb0651d46334c26887a02a5c44a8fc67c2ad3539e5321e16b57071b9a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:18:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhrdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://49541a51fced16e579b4cd10875fe7e660d7674508aba3ea44011642241e6556\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://49541a51fced16e579b4cd10875fe7e660d7674508aba3ea44011642241e6556\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T11:17:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T11:17:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhrdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://de0898b43ff7857dd40569f0552c8802c9bb5ecdf3cbd23f552dfb402a02044c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://de0898b43ff7857dd40569f0552c8802c9bb5ecdf3cbd23f552dfb402a02044c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T11:17:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T11:17:55Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhrdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f299de50c4b1a26c4ed82f6f7b17c063e00b8a9de478fa0adafb4abb50195c5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f299de50c4b1a26c4ed82f6f7b17c063e00b8a9de478fa0adafb4abb50195c5f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T11:17:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T11:17:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhrdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5bf54
cdee66861e0419af5e4cabd6e8410b771df717f2e958f5eda8d14c47862\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5bf54cdee66861e0419af5e4cabd6e8410b771df717f2e958f5eda8d14c47862\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T11:17:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T11:17:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhrdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d802b7e0dce47eace09ec80bf6299fbe57b588891ebab7f34de835f6a7314ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3d802b7e0dce47eace09ec80bf6299fbe57b588891ebab7f34de835f6a7314ac\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T11:17:58Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-26T11:17:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhrdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e77fa1081a2d8d26fe07f92196b85dd5b8eb28583ce5489e2f5cbcfc6fff8780\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e77fa1081a2d8d26fe07f92196b85dd5b8eb28583ce5489e2f5cbcfc6fff8780\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T11:18:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T11:18:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhrdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T11:17:52Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-9fjlf\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:18:20Z is after 2025-08-24T17:21:41Z" Jan 26 11:18:20 crc kubenswrapper[4867]: I0126 11:18:20.859004 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-nxkwj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"53ae46dc-30ef-4dfd-b80e-bacd7542634f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a6b647bab472836bbf6aebd01d20d186c5a3fb95f20cc9f44ec837d93c7df617\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"
2026-01-26T11:17:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b5h2x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T11:17:54Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-nxkwj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:18:20Z is after 2025-08-24T17:21:41Z" Jan 26 11:18:20 crc kubenswrapper[4867]: I0126 11:18:20.874450 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-nmdmx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed024510-edc6-4306-b54b-63facba64419\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:18:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:18:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:18:06Z\\\",\\\"message\\\":\\\"containers with unready 
status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:18:06Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lvcgj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lvcgj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T11:18:06Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-nmdmx\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:18:20Z is after 2025-08-24T17:21:41Z" Jan 26 11:18:20 crc kubenswrapper[4867]: I0126 11:18:20.879419 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:18:20 crc kubenswrapper[4867]: I0126 11:18:20.879484 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:18:20 crc kubenswrapper[4867]: I0126 11:18:20.879497 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:18:20 crc kubenswrapper[4867]: I0126 11:18:20.879524 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:18:20 crc kubenswrapper[4867]: I0126 11:18:20.879543 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:18:20Z","lastTransitionTime":"2026-01-26T11:18:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:18:20 crc kubenswrapper[4867]: I0126 11:18:20.889042 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:48Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:18:20Z is after 2025-08-24T17:21:41Z" Jan 26 11:18:20 crc kubenswrapper[4867]: I0126 11:18:20.904368 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:54Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:54Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8b0139861b48347f9916ca780499486eda856ef7c08cfe6c57201daf085bf61\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-26T11:18:20Z is after 2025-08-24T17:21:41Z" Jan 26 11:18:20 crc kubenswrapper[4867]: I0126 11:18:20.982821 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:18:20 crc kubenswrapper[4867]: I0126 11:18:20.982888 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:18:20 crc kubenswrapper[4867]: I0126 11:18:20.982910 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:18:20 crc kubenswrapper[4867]: I0126 11:18:20.982937 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:18:20 crc kubenswrapper[4867]: I0126 11:18:20.982958 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:18:20Z","lastTransitionTime":"2026-01-26T11:18:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:18:21 crc kubenswrapper[4867]: I0126 11:18:21.085469 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:18:21 crc kubenswrapper[4867]: I0126 11:18:21.085545 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:18:21 crc kubenswrapper[4867]: I0126 11:18:21.085570 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:18:21 crc kubenswrapper[4867]: I0126 11:18:21.085600 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:18:21 crc kubenswrapper[4867]: I0126 11:18:21.085624 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:18:21Z","lastTransitionTime":"2026-01-26T11:18:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:18:21 crc kubenswrapper[4867]: I0126 11:18:21.188171 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:18:21 crc kubenswrapper[4867]: I0126 11:18:21.188238 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:18:21 crc kubenswrapper[4867]: I0126 11:18:21.188251 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:18:21 crc kubenswrapper[4867]: I0126 11:18:21.188268 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:18:21 crc kubenswrapper[4867]: I0126 11:18:21.188280 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:18:21Z","lastTransitionTime":"2026-01-26T11:18:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:18:21 crc kubenswrapper[4867]: I0126 11:18:21.291944 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:18:21 crc kubenswrapper[4867]: I0126 11:18:21.292017 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:18:21 crc kubenswrapper[4867]: I0126 11:18:21.292030 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:18:21 crc kubenswrapper[4867]: I0126 11:18:21.292067 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:18:21 crc kubenswrapper[4867]: I0126 11:18:21.292088 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:18:21Z","lastTransitionTime":"2026-01-26T11:18:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:18:21 crc kubenswrapper[4867]: I0126 11:18:21.396089 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:18:21 crc kubenswrapper[4867]: I0126 11:18:21.396167 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:18:21 crc kubenswrapper[4867]: I0126 11:18:21.396200 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:18:21 crc kubenswrapper[4867]: I0126 11:18:21.396262 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:18:21 crc kubenswrapper[4867]: I0126 11:18:21.396283 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:18:21Z","lastTransitionTime":"2026-01-26T11:18:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:18:21 crc kubenswrapper[4867]: I0126 11:18:21.499759 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:18:21 crc kubenswrapper[4867]: I0126 11:18:21.499858 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:18:21 crc kubenswrapper[4867]: I0126 11:18:21.499878 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:18:21 crc kubenswrapper[4867]: I0126 11:18:21.499904 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:18:21 crc kubenswrapper[4867]: I0126 11:18:21.499921 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:18:21Z","lastTransitionTime":"2026-01-26T11:18:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:18:21 crc kubenswrapper[4867]: I0126 11:18:21.541259 4867 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-28 07:00:01.961586776 +0000 UTC Jan 26 11:18:21 crc kubenswrapper[4867]: I0126 11:18:21.602884 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:18:21 crc kubenswrapper[4867]: I0126 11:18:21.602951 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:18:21 crc kubenswrapper[4867]: I0126 11:18:21.602974 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:18:21 crc kubenswrapper[4867]: I0126 11:18:21.603005 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:18:21 crc kubenswrapper[4867]: I0126 11:18:21.603029 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:18:21Z","lastTransitionTime":"2026-01-26T11:18:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:18:21 crc kubenswrapper[4867]: I0126 11:18:21.711750 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:18:21 crc kubenswrapper[4867]: I0126 11:18:21.712121 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:18:21 crc kubenswrapper[4867]: I0126 11:18:21.712215 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:18:21 crc kubenswrapper[4867]: I0126 11:18:21.712356 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:18:21 crc kubenswrapper[4867]: I0126 11:18:21.712452 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:18:21Z","lastTransitionTime":"2026-01-26T11:18:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:18:21 crc kubenswrapper[4867]: I0126 11:18:21.816781 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:18:21 crc kubenswrapper[4867]: I0126 11:18:21.816853 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:18:21 crc kubenswrapper[4867]: I0126 11:18:21.816871 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:18:21 crc kubenswrapper[4867]: I0126 11:18:21.816896 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:18:21 crc kubenswrapper[4867]: I0126 11:18:21.816913 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:18:21Z","lastTransitionTime":"2026-01-26T11:18:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:18:21 crc kubenswrapper[4867]: I0126 11:18:21.920892 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:18:21 crc kubenswrapper[4867]: I0126 11:18:21.920960 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:18:21 crc kubenswrapper[4867]: I0126 11:18:21.920982 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:18:21 crc kubenswrapper[4867]: I0126 11:18:21.921015 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:18:21 crc kubenswrapper[4867]: I0126 11:18:21.921038 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:18:21Z","lastTransitionTime":"2026-01-26T11:18:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:18:22 crc kubenswrapper[4867]: I0126 11:18:22.024177 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:18:22 crc kubenswrapper[4867]: I0126 11:18:22.024254 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:18:22 crc kubenswrapper[4867]: I0126 11:18:22.024273 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:18:22 crc kubenswrapper[4867]: I0126 11:18:22.024299 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:18:22 crc kubenswrapper[4867]: I0126 11:18:22.024315 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:18:22Z","lastTransitionTime":"2026-01-26T11:18:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:18:22 crc kubenswrapper[4867]: I0126 11:18:22.075823 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 11:18:22 crc kubenswrapper[4867]: I0126 11:18:22.075971 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 11:18:22 crc kubenswrapper[4867]: I0126 11:18:22.076024 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 11:18:22 crc kubenswrapper[4867]: E0126 11:18:22.076137 4867 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 26 11:18:22 crc kubenswrapper[4867]: E0126 11:18:22.076202 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 11:18:54.076153447 +0000 UTC m=+83.774728357 (durationBeforeRetry 32s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 11:18:22 crc kubenswrapper[4867]: E0126 11:18:22.076311 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-26 11:18:54.076296452 +0000 UTC m=+83.774871592 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 26 11:18:22 crc kubenswrapper[4867]: E0126 11:18:22.076352 4867 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 26 11:18:22 crc kubenswrapper[4867]: E0126 11:18:22.076430 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-26 11:18:54.076407665 +0000 UTC m=+83.774982735 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 26 11:18:22 crc kubenswrapper[4867]: I0126 11:18:22.127067 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:18:22 crc kubenswrapper[4867]: I0126 11:18:22.127116 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:18:22 crc kubenswrapper[4867]: I0126 11:18:22.127134 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:18:22 crc kubenswrapper[4867]: I0126 11:18:22.127182 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:18:22 crc kubenswrapper[4867]: I0126 11:18:22.127196 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:18:22Z","lastTransitionTime":"2026-01-26T11:18:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:18:22 crc kubenswrapper[4867]: I0126 11:18:22.177056 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 11:18:22 crc kubenswrapper[4867]: E0126 11:18:22.177293 4867 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 26 11:18:22 crc kubenswrapper[4867]: E0126 11:18:22.177333 4867 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 26 11:18:22 crc kubenswrapper[4867]: E0126 11:18:22.177352 4867 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 11:18:22 crc kubenswrapper[4867]: E0126 11:18:22.177421 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-26 11:18:54.177400616 +0000 UTC m=+83.875975526 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 11:18:22 crc kubenswrapper[4867]: I0126 11:18:22.230006 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:18:22 crc kubenswrapper[4867]: I0126 11:18:22.230050 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:18:22 crc kubenswrapper[4867]: I0126 11:18:22.230060 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:18:22 crc kubenswrapper[4867]: I0126 11:18:22.230076 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:18:22 crc kubenswrapper[4867]: I0126 11:18:22.230085 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:18:22Z","lastTransitionTime":"2026-01-26T11:18:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:18:22 crc kubenswrapper[4867]: I0126 11:18:22.278642 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 11:18:22 crc kubenswrapper[4867]: E0126 11:18:22.278840 4867 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 26 11:18:22 crc kubenswrapper[4867]: E0126 11:18:22.278860 4867 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 26 11:18:22 crc kubenswrapper[4867]: E0126 11:18:22.278873 4867 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 11:18:22 crc kubenswrapper[4867]: E0126 11:18:22.278935 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-26 11:18:54.278918331 +0000 UTC m=+83.977493241 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 11:18:22 crc kubenswrapper[4867]: I0126 11:18:22.332975 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:18:22 crc kubenswrapper[4867]: I0126 11:18:22.333043 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:18:22 crc kubenswrapper[4867]: I0126 11:18:22.333056 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:18:22 crc kubenswrapper[4867]: I0126 11:18:22.333075 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:18:22 crc kubenswrapper[4867]: I0126 11:18:22.333114 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:18:22Z","lastTransitionTime":"2026-01-26T11:18:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:18:22 crc kubenswrapper[4867]: I0126 11:18:22.435989 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:18:22 crc kubenswrapper[4867]: I0126 11:18:22.436052 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:18:22 crc kubenswrapper[4867]: I0126 11:18:22.436070 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:18:22 crc kubenswrapper[4867]: I0126 11:18:22.436095 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:18:22 crc kubenswrapper[4867]: I0126 11:18:22.436114 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:18:22Z","lastTransitionTime":"2026-01-26T11:18:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:18:22 crc kubenswrapper[4867]: I0126 11:18:22.539535 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:18:22 crc kubenswrapper[4867]: I0126 11:18:22.539577 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:18:22 crc kubenswrapper[4867]: I0126 11:18:22.539587 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:18:22 crc kubenswrapper[4867]: I0126 11:18:22.539602 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:18:22 crc kubenswrapper[4867]: I0126 11:18:22.539611 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:18:22Z","lastTransitionTime":"2026-01-26T11:18:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 11:18:22 crc kubenswrapper[4867]: I0126 11:18:22.541744 4867 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-23 16:40:11.402650735 +0000 UTC Jan 26 11:18:22 crc kubenswrapper[4867]: I0126 11:18:22.563162 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 11:18:22 crc kubenswrapper[4867]: I0126 11:18:22.563215 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 11:18:22 crc kubenswrapper[4867]: I0126 11:18:22.563172 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 11:18:22 crc kubenswrapper[4867]: E0126 11:18:22.563332 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 11:18:22 crc kubenswrapper[4867]: I0126 11:18:22.563368 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-nmdmx" Jan 26 11:18:22 crc kubenswrapper[4867]: E0126 11:18:22.563490 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 11:18:22 crc kubenswrapper[4867]: E0126 11:18:22.563645 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 11:18:22 crc kubenswrapper[4867]: E0126 11:18:22.563748 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-nmdmx" podUID="ed024510-edc6-4306-b54b-63facba64419" Jan 26 11:18:22 crc kubenswrapper[4867]: I0126 11:18:22.642915 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:18:22 crc kubenswrapper[4867]: I0126 11:18:22.642979 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:18:22 crc kubenswrapper[4867]: I0126 11:18:22.642995 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:18:22 crc kubenswrapper[4867]: I0126 11:18:22.643014 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:18:22 crc kubenswrapper[4867]: I0126 11:18:22.643026 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:18:22Z","lastTransitionTime":"2026-01-26T11:18:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:18:22 crc kubenswrapper[4867]: I0126 11:18:22.683920 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ed024510-edc6-4306-b54b-63facba64419-metrics-certs\") pod \"network-metrics-daemon-nmdmx\" (UID: \"ed024510-edc6-4306-b54b-63facba64419\") " pod="openshift-multus/network-metrics-daemon-nmdmx" Jan 26 11:18:22 crc kubenswrapper[4867]: E0126 11:18:22.684144 4867 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 26 11:18:22 crc kubenswrapper[4867]: E0126 11:18:22.684250 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ed024510-edc6-4306-b54b-63facba64419-metrics-certs podName:ed024510-edc6-4306-b54b-63facba64419 nodeName:}" failed. No retries permitted until 2026-01-26 11:18:38.684216281 +0000 UTC m=+68.382791191 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/ed024510-edc6-4306-b54b-63facba64419-metrics-certs") pod "network-metrics-daemon-nmdmx" (UID: "ed024510-edc6-4306-b54b-63facba64419") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 26 11:18:22 crc kubenswrapper[4867]: I0126 11:18:22.745807 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:18:22 crc kubenswrapper[4867]: I0126 11:18:22.745846 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:18:22 crc kubenswrapper[4867]: I0126 11:18:22.745864 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:18:22 crc kubenswrapper[4867]: I0126 11:18:22.745883 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:18:22 crc kubenswrapper[4867]: I0126 11:18:22.745896 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:18:22Z","lastTransitionTime":"2026-01-26T11:18:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:18:22 crc kubenswrapper[4867]: I0126 11:18:22.850641 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:18:22 crc kubenswrapper[4867]: I0126 11:18:22.850699 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:18:22 crc kubenswrapper[4867]: I0126 11:18:22.850716 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:18:22 crc kubenswrapper[4867]: I0126 11:18:22.850737 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:18:22 crc kubenswrapper[4867]: I0126 11:18:22.850753 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:18:22Z","lastTransitionTime":"2026-01-26T11:18:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:18:22 crc kubenswrapper[4867]: I0126 11:18:22.952822 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:18:22 crc kubenswrapper[4867]: I0126 11:18:22.952862 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:18:22 crc kubenswrapper[4867]: I0126 11:18:22.952870 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:18:22 crc kubenswrapper[4867]: I0126 11:18:22.952884 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:18:22 crc kubenswrapper[4867]: I0126 11:18:22.952897 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:18:22Z","lastTransitionTime":"2026-01-26T11:18:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:18:23 crc kubenswrapper[4867]: I0126 11:18:23.055406 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:18:23 crc kubenswrapper[4867]: I0126 11:18:23.055446 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:18:23 crc kubenswrapper[4867]: I0126 11:18:23.055462 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:18:23 crc kubenswrapper[4867]: I0126 11:18:23.055534 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:18:23 crc kubenswrapper[4867]: I0126 11:18:23.055546 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:18:23Z","lastTransitionTime":"2026-01-26T11:18:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:18:23 crc kubenswrapper[4867]: I0126 11:18:23.158037 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:18:23 crc kubenswrapper[4867]: I0126 11:18:23.158101 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:18:23 crc kubenswrapper[4867]: I0126 11:18:23.158121 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:18:23 crc kubenswrapper[4867]: I0126 11:18:23.158150 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:18:23 crc kubenswrapper[4867]: I0126 11:18:23.158172 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:18:23Z","lastTransitionTime":"2026-01-26T11:18:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:18:23 crc kubenswrapper[4867]: I0126 11:18:23.261490 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:18:23 crc kubenswrapper[4867]: I0126 11:18:23.261557 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:18:23 crc kubenswrapper[4867]: I0126 11:18:23.261579 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:18:23 crc kubenswrapper[4867]: I0126 11:18:23.261611 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:18:23 crc kubenswrapper[4867]: I0126 11:18:23.261634 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:18:23Z","lastTransitionTime":"2026-01-26T11:18:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:18:23 crc kubenswrapper[4867]: I0126 11:18:23.363692 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:18:23 crc kubenswrapper[4867]: I0126 11:18:23.363744 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:18:23 crc kubenswrapper[4867]: I0126 11:18:23.363756 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:18:23 crc kubenswrapper[4867]: I0126 11:18:23.363775 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:18:23 crc kubenswrapper[4867]: I0126 11:18:23.363788 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:18:23Z","lastTransitionTime":"2026-01-26T11:18:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:18:23 crc kubenswrapper[4867]: I0126 11:18:23.466326 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:18:23 crc kubenswrapper[4867]: I0126 11:18:23.466380 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:18:23 crc kubenswrapper[4867]: I0126 11:18:23.466395 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:18:23 crc kubenswrapper[4867]: I0126 11:18:23.466412 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:18:23 crc kubenswrapper[4867]: I0126 11:18:23.466425 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:18:23Z","lastTransitionTime":"2026-01-26T11:18:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:18:23 crc kubenswrapper[4867]: I0126 11:18:23.542351 4867 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-12 00:53:34.189732542 +0000 UTC Jan 26 11:18:23 crc kubenswrapper[4867]: I0126 11:18:23.568913 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:18:23 crc kubenswrapper[4867]: I0126 11:18:23.568974 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:18:23 crc kubenswrapper[4867]: I0126 11:18:23.568990 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:18:23 crc kubenswrapper[4867]: I0126 11:18:23.569012 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:18:23 crc kubenswrapper[4867]: I0126 11:18:23.569031 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:18:23Z","lastTransitionTime":"2026-01-26T11:18:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:18:23 crc kubenswrapper[4867]: I0126 11:18:23.672195 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:18:23 crc kubenswrapper[4867]: I0126 11:18:23.672278 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:18:23 crc kubenswrapper[4867]: I0126 11:18:23.672294 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:18:23 crc kubenswrapper[4867]: I0126 11:18:23.672326 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:18:23 crc kubenswrapper[4867]: I0126 11:18:23.672346 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:18:23Z","lastTransitionTime":"2026-01-26T11:18:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:18:23 crc kubenswrapper[4867]: I0126 11:18:23.776215 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:18:23 crc kubenswrapper[4867]: I0126 11:18:23.776333 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:18:23 crc kubenswrapper[4867]: I0126 11:18:23.776369 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:18:23 crc kubenswrapper[4867]: I0126 11:18:23.776482 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:18:23 crc kubenswrapper[4867]: I0126 11:18:23.776507 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:18:23Z","lastTransitionTime":"2026-01-26T11:18:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:18:23 crc kubenswrapper[4867]: I0126 11:18:23.887893 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:18:23 crc kubenswrapper[4867]: I0126 11:18:23.887989 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:18:23 crc kubenswrapper[4867]: I0126 11:18:23.888009 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:18:23 crc kubenswrapper[4867]: I0126 11:18:23.888036 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:18:23 crc kubenswrapper[4867]: I0126 11:18:23.888055 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:18:23Z","lastTransitionTime":"2026-01-26T11:18:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:18:23 crc kubenswrapper[4867]: I0126 11:18:23.990398 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:18:23 crc kubenswrapper[4867]: I0126 11:18:23.990438 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:18:23 crc kubenswrapper[4867]: I0126 11:18:23.990447 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:18:23 crc kubenswrapper[4867]: I0126 11:18:23.990458 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:18:23 crc kubenswrapper[4867]: I0126 11:18:23.990467 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:18:23Z","lastTransitionTime":"2026-01-26T11:18:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:18:24 crc kubenswrapper[4867]: I0126 11:18:24.093710 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:18:24 crc kubenswrapper[4867]: I0126 11:18:24.093798 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:18:24 crc kubenswrapper[4867]: I0126 11:18:24.093821 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:18:24 crc kubenswrapper[4867]: I0126 11:18:24.094302 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:18:24 crc kubenswrapper[4867]: I0126 11:18:24.094582 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:18:24Z","lastTransitionTime":"2026-01-26T11:18:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:18:24 crc kubenswrapper[4867]: I0126 11:18:24.198225 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:18:24 crc kubenswrapper[4867]: I0126 11:18:24.198321 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:18:24 crc kubenswrapper[4867]: I0126 11:18:24.198338 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:18:24 crc kubenswrapper[4867]: I0126 11:18:24.198364 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:18:24 crc kubenswrapper[4867]: I0126 11:18:24.198381 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:18:24Z","lastTransitionTime":"2026-01-26T11:18:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:18:24 crc kubenswrapper[4867]: I0126 11:18:24.297661 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:18:24 crc kubenswrapper[4867]: I0126 11:18:24.297747 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:18:24 crc kubenswrapper[4867]: I0126 11:18:24.297769 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:18:24 crc kubenswrapper[4867]: I0126 11:18:24.297806 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:18:24 crc kubenswrapper[4867]: I0126 11:18:24.297831 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:18:24Z","lastTransitionTime":"2026-01-26T11:18:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:18:24 crc kubenswrapper[4867]: E0126 11:18:24.321351 4867 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404548Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865348Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T11:18:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T11:18:24Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T11:18:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T11:18:24Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T11:18:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T11:18:24Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T11:18:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T11:18:24Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b5a0e2ac-5e06-462a-99e0-d57b8e5cb754\\\",\\\"systemUUID\\\":\\\"db81a289-a49c-46ba-99b1-fd2eecfd5410\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:18:24Z is after 2025-08-24T17:21:41Z" Jan 26 11:18:24 crc kubenswrapper[4867]: I0126 11:18:24.326919 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:18:24 crc kubenswrapper[4867]: I0126 11:18:24.326966 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:18:24 crc kubenswrapper[4867]: I0126 11:18:24.326981 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:18:24 crc kubenswrapper[4867]: I0126 11:18:24.326998 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:18:24 crc kubenswrapper[4867]: I0126 11:18:24.327011 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:18:24Z","lastTransitionTime":"2026-01-26T11:18:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:18:24 crc kubenswrapper[4867]: E0126 11:18:24.341472 4867 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404548Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865348Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T11:18:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T11:18:24Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T11:18:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T11:18:24Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T11:18:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T11:18:24Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T11:18:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T11:18:24Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b5a0e2ac-5e06-462a-99e0-d57b8e5cb754\\\",\\\"systemUUID\\\":\\\"db81a289-a49c-46ba-99b1-fd2eecfd5410\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:18:24Z is after 2025-08-24T17:21:41Z" Jan 26 11:18:24 crc kubenswrapper[4867]: I0126 11:18:24.345779 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:18:24 crc kubenswrapper[4867]: I0126 11:18:24.345853 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:18:24 crc kubenswrapper[4867]: I0126 11:18:24.345867 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:18:24 crc kubenswrapper[4867]: I0126 11:18:24.345921 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:18:24 crc kubenswrapper[4867]: I0126 11:18:24.345936 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:18:24Z","lastTransitionTime":"2026-01-26T11:18:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:18:24 crc kubenswrapper[4867]: E0126 11:18:24.361484 4867 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404548Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865348Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T11:18:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T11:18:24Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T11:18:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T11:18:24Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T11:18:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T11:18:24Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T11:18:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T11:18:24Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b5a0e2ac-5e06-462a-99e0-d57b8e5cb754\\\",\\\"systemUUID\\\":\\\"db81a289-a49c-46ba-99b1-fd2eecfd5410\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:18:24Z is after 2025-08-24T17:21:41Z" Jan 26 11:18:24 crc kubenswrapper[4867]: I0126 11:18:24.364732 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:18:24 crc kubenswrapper[4867]: I0126 11:18:24.364822 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:18:24 crc kubenswrapper[4867]: I0126 11:18:24.364833 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:18:24 crc kubenswrapper[4867]: I0126 11:18:24.364850 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:18:24 crc kubenswrapper[4867]: I0126 11:18:24.364862 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:18:24Z","lastTransitionTime":"2026-01-26T11:18:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:18:24 crc kubenswrapper[4867]: E0126 11:18:24.376472 4867 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404548Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865348Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T11:18:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T11:18:24Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T11:18:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T11:18:24Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T11:18:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T11:18:24Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T11:18:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T11:18:24Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b5a0e2ac-5e06-462a-99e0-d57b8e5cb754\\\",\\\"systemUUID\\\":\\\"db81a289-a49c-46ba-99b1-fd2eecfd5410\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:18:24Z is after 2025-08-24T17:21:41Z" Jan 26 11:18:24 crc kubenswrapper[4867]: I0126 11:18:24.380196 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:18:24 crc kubenswrapper[4867]: I0126 11:18:24.380248 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:18:24 crc kubenswrapper[4867]: I0126 11:18:24.380259 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:18:24 crc kubenswrapper[4867]: I0126 11:18:24.380273 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:18:24 crc kubenswrapper[4867]: I0126 11:18:24.380281 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:18:24Z","lastTransitionTime":"2026-01-26T11:18:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:18:24 crc kubenswrapper[4867]: E0126 11:18:24.391200 4867 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404548Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865348Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T11:18:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T11:18:24Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T11:18:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T11:18:24Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T11:18:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T11:18:24Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T11:18:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T11:18:24Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b5a0e2ac-5e06-462a-99e0-d57b8e5cb754\\\",\\\"systemUUID\\\":\\\"db81a289-a49c-46ba-99b1-fd2eecfd5410\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:18:24Z is after 2025-08-24T17:21:41Z" Jan 26 11:18:24 crc kubenswrapper[4867]: E0126 11:18:24.391371 4867 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 26 11:18:24 crc kubenswrapper[4867]: I0126 11:18:24.393450 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:18:24 crc kubenswrapper[4867]: I0126 11:18:24.393490 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:18:24 crc kubenswrapper[4867]: I0126 11:18:24.393502 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:18:24 crc kubenswrapper[4867]: I0126 11:18:24.393521 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:18:24 crc kubenswrapper[4867]: I0126 11:18:24.393535 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:18:24Z","lastTransitionTime":"2026-01-26T11:18:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:18:24 crc kubenswrapper[4867]: I0126 11:18:24.495893 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:18:24 crc kubenswrapper[4867]: I0126 11:18:24.495931 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:18:24 crc kubenswrapper[4867]: I0126 11:18:24.495942 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:18:24 crc kubenswrapper[4867]: I0126 11:18:24.495955 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:18:24 crc kubenswrapper[4867]: I0126 11:18:24.495964 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:18:24Z","lastTransitionTime":"2026-01-26T11:18:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 11:18:24 crc kubenswrapper[4867]: I0126 11:18:24.542917 4867 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-19 03:29:32.167474987 +0000 UTC Jan 26 11:18:24 crc kubenswrapper[4867]: I0126 11:18:24.563425 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 11:18:24 crc kubenswrapper[4867]: I0126 11:18:24.563492 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 11:18:24 crc kubenswrapper[4867]: I0126 11:18:24.563600 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-nmdmx" Jan 26 11:18:24 crc kubenswrapper[4867]: I0126 11:18:24.563791 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 11:18:24 crc kubenswrapper[4867]: E0126 11:18:24.563774 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 11:18:24 crc kubenswrapper[4867]: E0126 11:18:24.563975 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 11:18:24 crc kubenswrapper[4867]: E0126 11:18:24.563953 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 11:18:24 crc kubenswrapper[4867]: E0126 11:18:24.564072 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-nmdmx" podUID="ed024510-edc6-4306-b54b-63facba64419" Jan 26 11:18:24 crc kubenswrapper[4867]: I0126 11:18:24.600708 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:18:24 crc kubenswrapper[4867]: I0126 11:18:24.600817 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:18:24 crc kubenswrapper[4867]: I0126 11:18:24.600850 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:18:24 crc kubenswrapper[4867]: I0126 11:18:24.600873 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:18:24 crc kubenswrapper[4867]: I0126 11:18:24.600886 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:18:24Z","lastTransitionTime":"2026-01-26T11:18:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:18:24 crc kubenswrapper[4867]: I0126 11:18:24.704000 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:18:24 crc kubenswrapper[4867]: I0126 11:18:24.704045 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:18:24 crc kubenswrapper[4867]: I0126 11:18:24.704057 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:18:24 crc kubenswrapper[4867]: I0126 11:18:24.704074 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:18:24 crc kubenswrapper[4867]: I0126 11:18:24.704086 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:18:24Z","lastTransitionTime":"2026-01-26T11:18:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:18:24 crc kubenswrapper[4867]: I0126 11:18:24.807254 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:18:24 crc kubenswrapper[4867]: I0126 11:18:24.807303 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:18:24 crc kubenswrapper[4867]: I0126 11:18:24.807316 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:18:24 crc kubenswrapper[4867]: I0126 11:18:24.807335 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:18:24 crc kubenswrapper[4867]: I0126 11:18:24.807349 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:18:24Z","lastTransitionTime":"2026-01-26T11:18:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:18:24 crc kubenswrapper[4867]: I0126 11:18:24.909663 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:18:24 crc kubenswrapper[4867]: I0126 11:18:24.909733 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:18:24 crc kubenswrapper[4867]: I0126 11:18:24.909751 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:18:24 crc kubenswrapper[4867]: I0126 11:18:24.909777 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:18:24 crc kubenswrapper[4867]: I0126 11:18:24.909797 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:18:24Z","lastTransitionTime":"2026-01-26T11:18:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:18:25 crc kubenswrapper[4867]: I0126 11:18:25.012259 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:18:25 crc kubenswrapper[4867]: I0126 11:18:25.012330 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:18:25 crc kubenswrapper[4867]: I0126 11:18:25.012363 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:18:25 crc kubenswrapper[4867]: I0126 11:18:25.012396 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:18:25 crc kubenswrapper[4867]: I0126 11:18:25.012419 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:18:25Z","lastTransitionTime":"2026-01-26T11:18:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:18:25 crc kubenswrapper[4867]: I0126 11:18:25.115530 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:18:25 crc kubenswrapper[4867]: I0126 11:18:25.115664 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:18:25 crc kubenswrapper[4867]: I0126 11:18:25.115687 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:18:25 crc kubenswrapper[4867]: I0126 11:18:25.115715 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:18:25 crc kubenswrapper[4867]: I0126 11:18:25.115736 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:18:25Z","lastTransitionTime":"2026-01-26T11:18:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:18:25 crc kubenswrapper[4867]: I0126 11:18:25.218824 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:18:25 crc kubenswrapper[4867]: I0126 11:18:25.218892 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:18:25 crc kubenswrapper[4867]: I0126 11:18:25.218910 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:18:25 crc kubenswrapper[4867]: I0126 11:18:25.218936 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:18:25 crc kubenswrapper[4867]: I0126 11:18:25.218955 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:18:25Z","lastTransitionTime":"2026-01-26T11:18:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:18:25 crc kubenswrapper[4867]: I0126 11:18:25.322632 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:18:25 crc kubenswrapper[4867]: I0126 11:18:25.322716 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:18:25 crc kubenswrapper[4867]: I0126 11:18:25.322738 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:18:25 crc kubenswrapper[4867]: I0126 11:18:25.323183 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:18:25 crc kubenswrapper[4867]: I0126 11:18:25.323505 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:18:25Z","lastTransitionTime":"2026-01-26T11:18:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:18:25 crc kubenswrapper[4867]: I0126 11:18:25.426948 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:18:25 crc kubenswrapper[4867]: I0126 11:18:25.427008 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:18:25 crc kubenswrapper[4867]: I0126 11:18:25.427030 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:18:25 crc kubenswrapper[4867]: I0126 11:18:25.427059 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:18:25 crc kubenswrapper[4867]: I0126 11:18:25.427084 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:18:25Z","lastTransitionTime":"2026-01-26T11:18:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:18:25 crc kubenswrapper[4867]: I0126 11:18:25.533531 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:18:25 crc kubenswrapper[4867]: I0126 11:18:25.533582 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:18:25 crc kubenswrapper[4867]: I0126 11:18:25.550209 4867 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-13 17:58:10.972491572 +0000 UTC Jan 26 11:18:25 crc kubenswrapper[4867]: I0126 11:18:25.563781 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:18:25 crc kubenswrapper[4867]: I0126 11:18:25.563857 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:18:25 crc kubenswrapper[4867]: I0126 11:18:25.563887 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:18:25Z","lastTransitionTime":"2026-01-26T11:18:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:18:25 crc kubenswrapper[4867]: I0126 11:18:25.667910 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:18:25 crc kubenswrapper[4867]: I0126 11:18:25.668006 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:18:25 crc kubenswrapper[4867]: I0126 11:18:25.668029 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:18:25 crc kubenswrapper[4867]: I0126 11:18:25.668053 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:18:25 crc kubenswrapper[4867]: I0126 11:18:25.668117 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:18:25Z","lastTransitionTime":"2026-01-26T11:18:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:18:25 crc kubenswrapper[4867]: I0126 11:18:25.770955 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:18:25 crc kubenswrapper[4867]: I0126 11:18:25.771016 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:18:25 crc kubenswrapper[4867]: I0126 11:18:25.771025 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:18:25 crc kubenswrapper[4867]: I0126 11:18:25.771038 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:18:25 crc kubenswrapper[4867]: I0126 11:18:25.771048 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:18:25Z","lastTransitionTime":"2026-01-26T11:18:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:18:25 crc kubenswrapper[4867]: I0126 11:18:25.873991 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:18:25 crc kubenswrapper[4867]: I0126 11:18:25.874029 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:18:25 crc kubenswrapper[4867]: I0126 11:18:25.874039 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:18:25 crc kubenswrapper[4867]: I0126 11:18:25.874055 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:18:25 crc kubenswrapper[4867]: I0126 11:18:25.874065 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:18:25Z","lastTransitionTime":"2026-01-26T11:18:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:18:25 crc kubenswrapper[4867]: I0126 11:18:25.976611 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:18:25 crc kubenswrapper[4867]: I0126 11:18:25.976671 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:18:25 crc kubenswrapper[4867]: I0126 11:18:25.976682 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:18:25 crc kubenswrapper[4867]: I0126 11:18:25.976697 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:18:25 crc kubenswrapper[4867]: I0126 11:18:25.976706 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:18:25Z","lastTransitionTime":"2026-01-26T11:18:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:18:26 crc kubenswrapper[4867]: I0126 11:18:26.079515 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:18:26 crc kubenswrapper[4867]: I0126 11:18:26.079586 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:18:26 crc kubenswrapper[4867]: I0126 11:18:26.079605 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:18:26 crc kubenswrapper[4867]: I0126 11:18:26.079631 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:18:26 crc kubenswrapper[4867]: I0126 11:18:26.079649 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:18:26Z","lastTransitionTime":"2026-01-26T11:18:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:18:26 crc kubenswrapper[4867]: I0126 11:18:26.182431 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:18:26 crc kubenswrapper[4867]: I0126 11:18:26.182493 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:18:26 crc kubenswrapper[4867]: I0126 11:18:26.182524 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:18:26 crc kubenswrapper[4867]: I0126 11:18:26.182544 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:18:26 crc kubenswrapper[4867]: I0126 11:18:26.182554 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:18:26Z","lastTransitionTime":"2026-01-26T11:18:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:18:26 crc kubenswrapper[4867]: I0126 11:18:26.284919 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:18:26 crc kubenswrapper[4867]: I0126 11:18:26.284985 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:18:26 crc kubenswrapper[4867]: I0126 11:18:26.285001 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:18:26 crc kubenswrapper[4867]: I0126 11:18:26.285028 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:18:26 crc kubenswrapper[4867]: I0126 11:18:26.285044 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:18:26Z","lastTransitionTime":"2026-01-26T11:18:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:18:26 crc kubenswrapper[4867]: I0126 11:18:26.388412 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:18:26 crc kubenswrapper[4867]: I0126 11:18:26.388463 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:18:26 crc kubenswrapper[4867]: I0126 11:18:26.388476 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:18:26 crc kubenswrapper[4867]: I0126 11:18:26.388494 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:18:26 crc kubenswrapper[4867]: I0126 11:18:26.388509 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:18:26Z","lastTransitionTime":"2026-01-26T11:18:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:18:26 crc kubenswrapper[4867]: I0126 11:18:26.491795 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:18:26 crc kubenswrapper[4867]: I0126 11:18:26.491837 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:18:26 crc kubenswrapper[4867]: I0126 11:18:26.491846 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:18:26 crc kubenswrapper[4867]: I0126 11:18:26.491863 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:18:26 crc kubenswrapper[4867]: I0126 11:18:26.491874 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:18:26Z","lastTransitionTime":"2026-01-26T11:18:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 11:18:26 crc kubenswrapper[4867]: I0126 11:18:26.551288 4867 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-08 19:52:23.933108075 +0000 UTC Jan 26 11:18:26 crc kubenswrapper[4867]: I0126 11:18:26.563757 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 11:18:26 crc kubenswrapper[4867]: I0126 11:18:26.563779 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 11:18:26 crc kubenswrapper[4867]: I0126 11:18:26.563935 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-nmdmx" Jan 26 11:18:26 crc kubenswrapper[4867]: E0126 11:18:26.564181 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 11:18:26 crc kubenswrapper[4867]: E0126 11:18:26.564360 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 11:18:26 crc kubenswrapper[4867]: E0126 11:18:26.564477 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-nmdmx" podUID="ed024510-edc6-4306-b54b-63facba64419" Jan 26 11:18:26 crc kubenswrapper[4867]: I0126 11:18:26.565402 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 11:18:26 crc kubenswrapper[4867]: E0126 11:18:26.565606 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 11:18:26 crc kubenswrapper[4867]: I0126 11:18:26.594300 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:18:26 crc kubenswrapper[4867]: I0126 11:18:26.594362 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:18:26 crc kubenswrapper[4867]: I0126 11:18:26.594383 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:18:26 crc kubenswrapper[4867]: I0126 11:18:26.594406 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:18:26 crc kubenswrapper[4867]: I0126 11:18:26.594424 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:18:26Z","lastTransitionTime":"2026-01-26T11:18:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:18:26 crc kubenswrapper[4867]: I0126 11:18:26.697429 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:18:26 crc kubenswrapper[4867]: I0126 11:18:26.697471 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:18:26 crc kubenswrapper[4867]: I0126 11:18:26.697485 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:18:26 crc kubenswrapper[4867]: I0126 11:18:26.697501 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:18:26 crc kubenswrapper[4867]: I0126 11:18:26.697512 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:18:26Z","lastTransitionTime":"2026-01-26T11:18:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:18:26 crc kubenswrapper[4867]: I0126 11:18:26.800026 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:18:26 crc kubenswrapper[4867]: I0126 11:18:26.800063 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:18:26 crc kubenswrapper[4867]: I0126 11:18:26.800073 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:18:26 crc kubenswrapper[4867]: I0126 11:18:26.800097 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:18:26 crc kubenswrapper[4867]: I0126 11:18:26.800111 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:18:26Z","lastTransitionTime":"2026-01-26T11:18:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:18:26 crc kubenswrapper[4867]: I0126 11:18:26.902833 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:18:26 crc kubenswrapper[4867]: I0126 11:18:26.902881 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:18:26 crc kubenswrapper[4867]: I0126 11:18:26.902892 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:18:26 crc kubenswrapper[4867]: I0126 11:18:26.902909 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:18:26 crc kubenswrapper[4867]: I0126 11:18:26.902922 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:18:26Z","lastTransitionTime":"2026-01-26T11:18:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:18:27 crc kubenswrapper[4867]: I0126 11:18:27.005557 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:18:27 crc kubenswrapper[4867]: I0126 11:18:27.005626 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:18:27 crc kubenswrapper[4867]: I0126 11:18:27.005643 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:18:27 crc kubenswrapper[4867]: I0126 11:18:27.005669 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:18:27 crc kubenswrapper[4867]: I0126 11:18:27.005690 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:18:27Z","lastTransitionTime":"2026-01-26T11:18:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:18:27 crc kubenswrapper[4867]: I0126 11:18:27.109036 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:18:27 crc kubenswrapper[4867]: I0126 11:18:27.109116 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:18:27 crc kubenswrapper[4867]: I0126 11:18:27.109137 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:18:27 crc kubenswrapper[4867]: I0126 11:18:27.109192 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:18:27 crc kubenswrapper[4867]: I0126 11:18:27.109259 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:18:27Z","lastTransitionTime":"2026-01-26T11:18:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:18:27 crc kubenswrapper[4867]: I0126 11:18:27.212436 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:18:27 crc kubenswrapper[4867]: I0126 11:18:27.212522 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:18:27 crc kubenswrapper[4867]: I0126 11:18:27.212551 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:18:27 crc kubenswrapper[4867]: I0126 11:18:27.212584 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:18:27 crc kubenswrapper[4867]: I0126 11:18:27.212608 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:18:27Z","lastTransitionTime":"2026-01-26T11:18:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:18:27 crc kubenswrapper[4867]: I0126 11:18:27.315503 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:18:27 crc kubenswrapper[4867]: I0126 11:18:27.315547 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:18:27 crc kubenswrapper[4867]: I0126 11:18:27.315558 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:18:27 crc kubenswrapper[4867]: I0126 11:18:27.315574 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:18:27 crc kubenswrapper[4867]: I0126 11:18:27.315584 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:18:27Z","lastTransitionTime":"2026-01-26T11:18:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:18:27 crc kubenswrapper[4867]: I0126 11:18:27.418698 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:18:27 crc kubenswrapper[4867]: I0126 11:18:27.418791 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:18:27 crc kubenswrapper[4867]: I0126 11:18:27.418813 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:18:27 crc kubenswrapper[4867]: I0126 11:18:27.418839 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:18:27 crc kubenswrapper[4867]: I0126 11:18:27.418858 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:18:27Z","lastTransitionTime":"2026-01-26T11:18:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:18:27 crc kubenswrapper[4867]: I0126 11:18:27.522526 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:18:27 crc kubenswrapper[4867]: I0126 11:18:27.522581 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:18:27 crc kubenswrapper[4867]: I0126 11:18:27.522590 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:18:27 crc kubenswrapper[4867]: I0126 11:18:27.522605 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:18:27 crc kubenswrapper[4867]: I0126 11:18:27.522615 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:18:27Z","lastTransitionTime":"2026-01-26T11:18:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:18:27 crc kubenswrapper[4867]: I0126 11:18:27.551643 4867 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-29 04:24:53.932954612 +0000 UTC Jan 26 11:18:27 crc kubenswrapper[4867]: I0126 11:18:27.625321 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:18:27 crc kubenswrapper[4867]: I0126 11:18:27.625360 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:18:27 crc kubenswrapper[4867]: I0126 11:18:27.625374 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:18:27 crc kubenswrapper[4867]: I0126 11:18:27.625390 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:18:27 crc kubenswrapper[4867]: I0126 11:18:27.625400 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:18:27Z","lastTransitionTime":"2026-01-26T11:18:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:18:27 crc kubenswrapper[4867]: I0126 11:18:27.728362 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:18:27 crc kubenswrapper[4867]: I0126 11:18:27.728439 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:18:27 crc kubenswrapper[4867]: I0126 11:18:27.728459 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:18:27 crc kubenswrapper[4867]: I0126 11:18:27.728483 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:18:27 crc kubenswrapper[4867]: I0126 11:18:27.728500 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:18:27Z","lastTransitionTime":"2026-01-26T11:18:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:18:27 crc kubenswrapper[4867]: I0126 11:18:27.831369 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:18:27 crc kubenswrapper[4867]: I0126 11:18:27.831408 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:18:27 crc kubenswrapper[4867]: I0126 11:18:27.831417 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:18:27 crc kubenswrapper[4867]: I0126 11:18:27.831432 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:18:27 crc kubenswrapper[4867]: I0126 11:18:27.831441 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:18:27Z","lastTransitionTime":"2026-01-26T11:18:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:18:27 crc kubenswrapper[4867]: I0126 11:18:27.933459 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:18:27 crc kubenswrapper[4867]: I0126 11:18:27.933516 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:18:27 crc kubenswrapper[4867]: I0126 11:18:27.933530 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:18:27 crc kubenswrapper[4867]: I0126 11:18:27.933553 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:18:27 crc kubenswrapper[4867]: I0126 11:18:27.933568 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:18:27Z","lastTransitionTime":"2026-01-26T11:18:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:18:28 crc kubenswrapper[4867]: I0126 11:18:28.036901 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:18:28 crc kubenswrapper[4867]: I0126 11:18:28.036947 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:18:28 crc kubenswrapper[4867]: I0126 11:18:28.036957 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:18:28 crc kubenswrapper[4867]: I0126 11:18:28.036972 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:18:28 crc kubenswrapper[4867]: I0126 11:18:28.036982 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:18:28Z","lastTransitionTime":"2026-01-26T11:18:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:18:28 crc kubenswrapper[4867]: I0126 11:18:28.140627 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:18:28 crc kubenswrapper[4867]: I0126 11:18:28.140674 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:18:28 crc kubenswrapper[4867]: I0126 11:18:28.140682 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:18:28 crc kubenswrapper[4867]: I0126 11:18:28.140696 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:18:28 crc kubenswrapper[4867]: I0126 11:18:28.140705 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:18:28Z","lastTransitionTime":"2026-01-26T11:18:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:18:28 crc kubenswrapper[4867]: I0126 11:18:28.243765 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:18:28 crc kubenswrapper[4867]: I0126 11:18:28.243845 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:18:28 crc kubenswrapper[4867]: I0126 11:18:28.243866 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:18:28 crc kubenswrapper[4867]: I0126 11:18:28.243886 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:18:28 crc kubenswrapper[4867]: I0126 11:18:28.243898 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:18:28Z","lastTransitionTime":"2026-01-26T11:18:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:18:28 crc kubenswrapper[4867]: I0126 11:18:28.347057 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:18:28 crc kubenswrapper[4867]: I0126 11:18:28.347192 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:18:28 crc kubenswrapper[4867]: I0126 11:18:28.347212 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:18:28 crc kubenswrapper[4867]: I0126 11:18:28.347288 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:18:28 crc kubenswrapper[4867]: I0126 11:18:28.347312 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:18:28Z","lastTransitionTime":"2026-01-26T11:18:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:18:28 crc kubenswrapper[4867]: I0126 11:18:28.450728 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:18:28 crc kubenswrapper[4867]: I0126 11:18:28.450798 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:18:28 crc kubenswrapper[4867]: I0126 11:18:28.450823 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:18:28 crc kubenswrapper[4867]: I0126 11:18:28.450853 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:18:28 crc kubenswrapper[4867]: I0126 11:18:28.450878 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:18:28Z","lastTransitionTime":"2026-01-26T11:18:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:18:28 crc kubenswrapper[4867]: I0126 11:18:28.552244 4867 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-31 17:18:02.299154922 +0000 UTC Jan 26 11:18:28 crc kubenswrapper[4867]: I0126 11:18:28.554176 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:18:28 crc kubenswrapper[4867]: I0126 11:18:28.554309 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:18:28 crc kubenswrapper[4867]: I0126 11:18:28.554342 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:18:28 crc kubenswrapper[4867]: I0126 11:18:28.554372 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:18:28 crc kubenswrapper[4867]: I0126 11:18:28.554393 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:18:28Z","lastTransitionTime":"2026-01-26T11:18:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 11:18:28 crc kubenswrapper[4867]: I0126 11:18:28.562923 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-nmdmx" Jan 26 11:18:28 crc kubenswrapper[4867]: I0126 11:18:28.562917 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 11:18:28 crc kubenswrapper[4867]: E0126 11:18:28.563332 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-nmdmx" podUID="ed024510-edc6-4306-b54b-63facba64419" Jan 26 11:18:28 crc kubenswrapper[4867]: I0126 11:18:28.563393 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 11:18:28 crc kubenswrapper[4867]: E0126 11:18:28.563554 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 11:18:28 crc kubenswrapper[4867]: E0126 11:18:28.563658 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 11:18:28 crc kubenswrapper[4867]: I0126 11:18:28.563087 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 11:18:28 crc kubenswrapper[4867]: E0126 11:18:28.564329 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 11:18:28 crc kubenswrapper[4867]: I0126 11:18:28.657507 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:18:28 crc kubenswrapper[4867]: I0126 11:18:28.657594 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:18:28 crc kubenswrapper[4867]: I0126 11:18:28.657611 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:18:28 crc kubenswrapper[4867]: I0126 11:18:28.657636 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:18:28 crc kubenswrapper[4867]: I0126 11:18:28.657657 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:18:28Z","lastTransitionTime":"2026-01-26T11:18:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:18:28 crc kubenswrapper[4867]: I0126 11:18:28.760098 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:18:28 crc kubenswrapper[4867]: I0126 11:18:28.760170 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:18:28 crc kubenswrapper[4867]: I0126 11:18:28.760214 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:18:28 crc kubenswrapper[4867]: I0126 11:18:28.760260 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:18:28 crc kubenswrapper[4867]: I0126 11:18:28.760272 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:18:28Z","lastTransitionTime":"2026-01-26T11:18:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:18:28 crc kubenswrapper[4867]: I0126 11:18:28.862762 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:18:28 crc kubenswrapper[4867]: I0126 11:18:28.862828 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:18:28 crc kubenswrapper[4867]: I0126 11:18:28.862844 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:18:28 crc kubenswrapper[4867]: I0126 11:18:28.862870 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:18:28 crc kubenswrapper[4867]: I0126 11:18:28.862888 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:18:28Z","lastTransitionTime":"2026-01-26T11:18:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:18:28 crc kubenswrapper[4867]: I0126 11:18:28.964735 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:18:28 crc kubenswrapper[4867]: I0126 11:18:28.964775 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:18:28 crc kubenswrapper[4867]: I0126 11:18:28.964785 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:18:28 crc kubenswrapper[4867]: I0126 11:18:28.964798 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:18:28 crc kubenswrapper[4867]: I0126 11:18:28.964808 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:18:28Z","lastTransitionTime":"2026-01-26T11:18:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:18:29 crc kubenswrapper[4867]: I0126 11:18:29.067094 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:18:29 crc kubenswrapper[4867]: I0126 11:18:29.067156 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:18:29 crc kubenswrapper[4867]: I0126 11:18:29.067178 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:18:29 crc kubenswrapper[4867]: I0126 11:18:29.067203 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:18:29 crc kubenswrapper[4867]: I0126 11:18:29.067258 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:18:29Z","lastTransitionTime":"2026-01-26T11:18:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:18:29 crc kubenswrapper[4867]: I0126 11:18:29.169257 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:18:29 crc kubenswrapper[4867]: I0126 11:18:29.169305 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:18:29 crc kubenswrapper[4867]: I0126 11:18:29.169340 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:18:29 crc kubenswrapper[4867]: I0126 11:18:29.169355 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:18:29 crc kubenswrapper[4867]: I0126 11:18:29.169365 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:18:29Z","lastTransitionTime":"2026-01-26T11:18:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:18:29 crc kubenswrapper[4867]: I0126 11:18:29.273190 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:18:29 crc kubenswrapper[4867]: I0126 11:18:29.273361 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:18:29 crc kubenswrapper[4867]: I0126 11:18:29.273441 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:18:29 crc kubenswrapper[4867]: I0126 11:18:29.273466 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:18:29 crc kubenswrapper[4867]: I0126 11:18:29.273481 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:18:29Z","lastTransitionTime":"2026-01-26T11:18:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:18:29 crc kubenswrapper[4867]: I0126 11:18:29.375989 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:18:29 crc kubenswrapper[4867]: I0126 11:18:29.376039 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:18:29 crc kubenswrapper[4867]: I0126 11:18:29.376050 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:18:29 crc kubenswrapper[4867]: I0126 11:18:29.376069 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:18:29 crc kubenswrapper[4867]: I0126 11:18:29.376081 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:18:29Z","lastTransitionTime":"2026-01-26T11:18:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:18:29 crc kubenswrapper[4867]: I0126 11:18:29.479680 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:18:29 crc kubenswrapper[4867]: I0126 11:18:29.479781 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:18:29 crc kubenswrapper[4867]: I0126 11:18:29.479802 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:18:29 crc kubenswrapper[4867]: I0126 11:18:29.479830 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:18:29 crc kubenswrapper[4867]: I0126 11:18:29.479882 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:18:29Z","lastTransitionTime":"2026-01-26T11:18:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:18:29 crc kubenswrapper[4867]: I0126 11:18:29.553201 4867 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-04 09:13:50.586892735 +0000 UTC Jan 26 11:18:29 crc kubenswrapper[4867]: I0126 11:18:29.582171 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:18:29 crc kubenswrapper[4867]: I0126 11:18:29.582206 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:18:29 crc kubenswrapper[4867]: I0126 11:18:29.582258 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:18:29 crc kubenswrapper[4867]: I0126 11:18:29.582277 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:18:29 crc kubenswrapper[4867]: I0126 11:18:29.582290 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:18:29Z","lastTransitionTime":"2026-01-26T11:18:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:18:29 crc kubenswrapper[4867]: I0126 11:18:29.685002 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:18:29 crc kubenswrapper[4867]: I0126 11:18:29.685321 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:18:29 crc kubenswrapper[4867]: I0126 11:18:29.685337 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:18:29 crc kubenswrapper[4867]: I0126 11:18:29.685353 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:18:29 crc kubenswrapper[4867]: I0126 11:18:29.685364 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:18:29Z","lastTransitionTime":"2026-01-26T11:18:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:18:29 crc kubenswrapper[4867]: I0126 11:18:29.788184 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:18:29 crc kubenswrapper[4867]: I0126 11:18:29.788296 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:18:29 crc kubenswrapper[4867]: I0126 11:18:29.788314 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:18:29 crc kubenswrapper[4867]: I0126 11:18:29.788335 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:18:29 crc kubenswrapper[4867]: I0126 11:18:29.788348 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:18:29Z","lastTransitionTime":"2026-01-26T11:18:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:18:29 crc kubenswrapper[4867]: I0126 11:18:29.891355 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:18:29 crc kubenswrapper[4867]: I0126 11:18:29.891407 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:18:29 crc kubenswrapper[4867]: I0126 11:18:29.891418 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:18:29 crc kubenswrapper[4867]: I0126 11:18:29.891439 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:18:29 crc kubenswrapper[4867]: I0126 11:18:29.891451 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:18:29Z","lastTransitionTime":"2026-01-26T11:18:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:18:29 crc kubenswrapper[4867]: I0126 11:18:29.994629 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:18:29 crc kubenswrapper[4867]: I0126 11:18:29.994692 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:18:29 crc kubenswrapper[4867]: I0126 11:18:29.994714 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:18:29 crc kubenswrapper[4867]: I0126 11:18:29.994747 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:18:29 crc kubenswrapper[4867]: I0126 11:18:29.994767 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:18:29Z","lastTransitionTime":"2026-01-26T11:18:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:18:30 crc kubenswrapper[4867]: I0126 11:18:30.098717 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:18:30 crc kubenswrapper[4867]: I0126 11:18:30.098770 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:18:30 crc kubenswrapper[4867]: I0126 11:18:30.098786 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:18:30 crc kubenswrapper[4867]: I0126 11:18:30.098813 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:18:30 crc kubenswrapper[4867]: I0126 11:18:30.098832 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:18:30Z","lastTransitionTime":"2026-01-26T11:18:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:18:30 crc kubenswrapper[4867]: I0126 11:18:30.202026 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:18:30 crc kubenswrapper[4867]: I0126 11:18:30.202065 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:18:30 crc kubenswrapper[4867]: I0126 11:18:30.202074 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:18:30 crc kubenswrapper[4867]: I0126 11:18:30.202087 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:18:30 crc kubenswrapper[4867]: I0126 11:18:30.202099 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:18:30Z","lastTransitionTime":"2026-01-26T11:18:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:18:30 crc kubenswrapper[4867]: I0126 11:18:30.305057 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:18:30 crc kubenswrapper[4867]: I0126 11:18:30.305105 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:18:30 crc kubenswrapper[4867]: I0126 11:18:30.305118 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:18:30 crc kubenswrapper[4867]: I0126 11:18:30.305133 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:18:30 crc kubenswrapper[4867]: I0126 11:18:30.305151 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:18:30Z","lastTransitionTime":"2026-01-26T11:18:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:18:30 crc kubenswrapper[4867]: I0126 11:18:30.407243 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:18:30 crc kubenswrapper[4867]: I0126 11:18:30.407349 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:18:30 crc kubenswrapper[4867]: I0126 11:18:30.407382 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:18:30 crc kubenswrapper[4867]: I0126 11:18:30.407398 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:18:30 crc kubenswrapper[4867]: I0126 11:18:30.407409 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:18:30Z","lastTransitionTime":"2026-01-26T11:18:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:18:30 crc kubenswrapper[4867]: I0126 11:18:30.509764 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:18:30 crc kubenswrapper[4867]: I0126 11:18:30.509840 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:18:30 crc kubenswrapper[4867]: I0126 11:18:30.509864 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:18:30 crc kubenswrapper[4867]: I0126 11:18:30.509898 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:18:30 crc kubenswrapper[4867]: I0126 11:18:30.509921 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:18:30Z","lastTransitionTime":"2026-01-26T11:18:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 11:18:30 crc kubenswrapper[4867]: I0126 11:18:30.554091 4867 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-01 09:59:30.411312815 +0000 UTC Jan 26 11:18:30 crc kubenswrapper[4867]: I0126 11:18:30.563753 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-nmdmx" Jan 26 11:18:30 crc kubenswrapper[4867]: I0126 11:18:30.563809 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 11:18:30 crc kubenswrapper[4867]: E0126 11:18:30.563952 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-nmdmx" podUID="ed024510-edc6-4306-b54b-63facba64419" Jan 26 11:18:30 crc kubenswrapper[4867]: I0126 11:18:30.564018 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 11:18:30 crc kubenswrapper[4867]: E0126 11:18:30.564343 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 11:18:30 crc kubenswrapper[4867]: E0126 11:18:30.564440 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 11:18:30 crc kubenswrapper[4867]: I0126 11:18:30.563656 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 11:18:30 crc kubenswrapper[4867]: E0126 11:18:30.564548 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 11:18:30 crc kubenswrapper[4867]: I0126 11:18:30.564857 4867 scope.go:117] "RemoveContainer" containerID="cd8dd6742489d18b8ecac8b1a2bebed1c96dcc42086a9aeb709fc228aa34d23b" Jan 26 11:18:30 crc kubenswrapper[4867]: I0126 11:18:30.594216 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:48Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:18:30Z is after 2025-08-24T17:21:41Z" Jan 26 11:18:30 crc kubenswrapper[4867]: I0126 11:18:30.611726 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:54Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:54Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8b0139861b48347f9916ca780499486eda856ef7c08cfe6c57201daf085bf61\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-26T11:18:30Z is after 2025-08-24T17:21:41Z" Jan 26 11:18:30 crc kubenswrapper[4867]: I0126 11:18:30.612618 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:18:30 crc kubenswrapper[4867]: I0126 11:18:30.612678 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:18:30 crc kubenswrapper[4867]: I0126 11:18:30.612695 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:18:30 crc kubenswrapper[4867]: I0126 11:18:30.612723 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:18:30 crc kubenswrapper[4867]: I0126 11:18:30.612740 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:18:30Z","lastTransitionTime":"2026-01-26T11:18:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:18:30 crc kubenswrapper[4867]: I0126 11:18:30.633403 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-9fjlf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d0cb57c7-fd32-41c2-b873-a3f017b9f1b1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:18:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:18:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:18:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c7012fb0651d46334c26887a02a5c44a8fc67c2ad3539e5321e16b57071b9a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:18:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhrdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://49541a51fced16e579b4cd10875fe7e660d7674508aba3ea44011642241e6556\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://49541a51fced16e579b4cd10875fe7e660d7674508aba3ea44011642241e6556\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T11:17:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T11:17:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhrdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://de0898b43ff7857dd40569f0552c8802c9bb5ecdf3cbd23f552dfb402a02044c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"
containerID\\\":\\\"cri-o://de0898b43ff7857dd40569f0552c8802c9bb5ecdf3cbd23f552dfb402a02044c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T11:17:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T11:17:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhrdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f299de50c4b1a26c4ed82f6f7b17c063e00b8a9de478fa0adafb4abb50195c5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f299de50c4b1a26c4ed82f6f7b17c063e00b8a9de478fa0adafb4abb50195c5f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T11:17:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T11:17:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveRead
Only\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhrdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5bf54cdee66861e0419af5e4cabd6e8410b771df717f2e958f5eda8d14c47862\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5bf54cdee66861e0419af5e4cabd6e8410b771df717f2e958f5eda8d14c47862\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T11:17:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T11:17:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhrdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d802b7e0dce47eace09ec80bf6299fbe57b588891ebab7f34de835f6a7314ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\"
:0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3d802b7e0dce47eace09ec80bf6299fbe57b588891ebab7f34de835f6a7314ac\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T11:17:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T11:17:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhrdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e77fa1081a2d8d26fe07f92196b85dd5b8eb28583ce5489e2f5cbcfc6fff8780\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e77fa1081a2d8d26fe07f92196b85dd5b8eb28583ce5489e2f5cbcfc6fff8780\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T11:18:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T11:18:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhrdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\"
,\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T11:17:52Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-9fjlf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:18:30Z is after 2025-08-24T17:21:41Z" Jan 26 11:18:30 crc kubenswrapper[4867]: I0126 11:18:30.650838 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-nxkwj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"53ae46dc-30ef-4dfd-b80e-bacd7542634f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a6b647bab472836bbf6aebd01d20d186c5a3fb95f20cc9f44ec837d93c7df617\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev
@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b5h2x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T11:17:54Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-nxkwj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:18:30Z is after 2025-08-24T17:21:41Z" Jan 26 11:18:30 crc kubenswrapper[4867]: I0126 11:18:30.663413 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-nmdmx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed024510-edc6-4306-b54b-63facba64419\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:18:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:18:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:18:06Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:18:06Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lvcgj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lvcgj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T11:18:06Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-nmdmx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:18:30Z is after 2025-08-24T17:21:41Z" Jan 26 11:18:30 crc 
kubenswrapper[4867]: I0126 11:18:30.683406 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4fbb2033-c5c9-48d1-971f-252b8982e64c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b260f2517962ffaecebf86c2b72c91486b825ca898f907378e8f8cea16d1db92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}
]},{\\\"containerID\\\":\\\"cri-o://6843bd212ce4f17d2ae4d53441d56f7be28f8f49c8994458d611159101e193a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2aba77c7c6a2f5450fa60356c3eccf816726f0074d65a1d5805c4278d31157c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e941db67a8021f5eb4eb6e395c2369d052a4cde25d67eb1885768a064185d8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779
036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc9e90f1b0e6c09e4595a7e450325550012dc52c6a6fc4a95d14b37c0f939ce7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e22739a3c06bddfda493f25aca23cf47d71c8bde27a6b4bb505592b42350752\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"sta
te\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0e22739a3c06bddfda493f25aca23cf47d71c8bde27a6b4bb505592b42350752\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T11:17:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T11:17:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://80b44caec3547caebca126742ca337ad1851edc3e9474127528a9e5184b5ec23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://80b44caec3547caebca126742ca337ad1851edc3e9474127528a9e5184b5ec23\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T11:17:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T11:17:32Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://83f7cf77acd26e7af095c4a5432efde85eef6dd5e434141cb5d6abd12770968e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://83f7cf77acd26e7af095c4a5432efde85eef6dd5e434141cb5d6abd12770968e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T11:17:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T11:17:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T11:17:30Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:18:30Z is after 2025-08-24T17:21:41Z" Jan 26 11:18:30 crc kubenswrapper[4867]: I0126 11:18:30.699621 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-g6cth" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"115cad9f-057f-4e63-b408-8fa7a358a191\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ff5997c5ecb7b6a4257e45de25c7b8f3cbe39cc5efe49cf2e2baff5447a947b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4wqzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a97a794991491a7b7ac8c0d20d0fa2f4f36c8ba8
82962b5c203933431d324105\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4wqzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T11:17:52Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-g6cth\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:18:30Z is after 2025-08-24T17:21:41Z" Jan 26 11:18:30 crc kubenswrapper[4867]: I0126 11:18:30.715401 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:48Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:18:30Z is after 2025-08-24T17:21:41Z" Jan 26 11:18:30 crc kubenswrapper[4867]: I0126 11:18:30.715616 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:18:30 crc kubenswrapper[4867]: I0126 11:18:30.715646 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:18:30 crc kubenswrapper[4867]: I0126 11:18:30.715658 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:18:30 crc kubenswrapper[4867]: I0126 11:18:30.715676 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:18:30 crc kubenswrapper[4867]: I0126 11:18:30.715689 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:18:30Z","lastTransitionTime":"2026-01-26T11:18:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 11:18:30 crc kubenswrapper[4867]: I0126 11:18:30.739334 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:48Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:18:30Z is after 2025-08-24T17:21:41Z" Jan 26 11:18:30 crc kubenswrapper[4867]: I0126 11:18:30.751203 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-wmdmh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6ad862fa-4af9-49f7-a629-ebf54a83ca45\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://726513fd6aefd76b58a4c8cf08fa4588aadcdb02e8dbe3bb4f23004f11eb29dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6hvh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T11:17:51Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-wmdmh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:18:30Z is after 2025-08-24T17:21:41Z" Jan 26 11:18:30 crc kubenswrapper[4867]: I0126 11:18:30.773624 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-p8ngn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4a3be637-cf04-4c55-bf72-67fdad83cc44\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:52Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:52Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c0a7bea27cb1d32235548b7f7e457dbecaf0ac88bdd9866302c758680990d4af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cn6f7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9025121ad416b7b6ff2f2eb7ecb3961eccbbae7b5ce4b394161ad99c5901199\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cn6f7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://adb68b135bd42f78556598af17aa21dc7daff1246e821ca51cb683e46974a85a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cn6f7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://30fe31f1d314dace6e6ee0bc31b2127ce6e6db198975c78505b0f6f07aa6f30c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:55Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cn6f7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://99df8b0b5fa7178489716f40f806caff969c5045ab573ae343b222ed80bf0799\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cn6f7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://192f11212d63cdabfda1649589946ca65e5c4a38805c85b94e4277f2e8c7bf57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cn6f7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd8dd6742489d18b8ecac8b1a2bebed1c96dcc42086a9aeb709fc228aa34d23b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9ccaf33118999fa5bccb1930803a8cea6557462756c37135056ea8a5ab813003\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T11:18:13Z\\\",\\\"message\\\":\\\"\\\\nI0126 11:18:10.643831 6165 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0126 11:18:10.643854 6165 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0126 11:18:10.643876 6165 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0126 11:18:10.643912 6165 handler.go:208] Removed 
*v1.Namespace event handler 1\\\\nI0126 11:18:10.643967 6165 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0126 11:18:10.643970 6165 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0126 11:18:10.643923 6165 handler.go:208] Removed *v1.Node event handler 7\\\\nI0126 11:18:10.643929 6165 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0126 11:18:10.644029 6165 handler.go:208] Removed *v1.Node event handler 2\\\\nI0126 11:18:10.643948 6165 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0126 11:18:10.644059 6165 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0126 11:18:10.644093 6165 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0126 11:18:10.644094 6165 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0126 11:18:10.644122 6165 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0126 11:18:10.644192 6165 factory.go:656] Stopping watch factory\\\\nI0126 11:18:10.644206 6165 handler.go:208] Removed *v1.NetworkPolicy ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T11:18:06Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cd8dd6742489d18b8ecac8b1a2bebed1c96dcc42086a9aeb709fc228aa34d23b\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T11:18:17Z\\\",\\\"message\\\":\\\"ty.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:18:15Z is after 2025-08-24T17:21:41Z]\\\\nI0126 11:18:15.987007 6391 lb_config.go:1031] Cluster endpoints for openshift-controller-manager/controller-manager for network=default are: map[]\\\\nI0126 11:18:15.986973 6391 services_controller.go:434] Service openshift-operator-lifecycle-manager/package-server-manager-metrics 
retrieved from lister for network=default: \\\\u0026Service{ObjectMeta:{package-server-manager-metrics openshift-operator-lifecycle-manager 147f8961-5d3f-4ca0-a8b8-5098188f7802 4671 0 2025-02-23 05:12:35 +0000 UTC \\\\u003cnil\\\\u003e \\\\u003cnil\\\\u003e map[] map[capability.openshift.io/name:OperatorLifecycleManager include.release.openshift.io/ibm-cloud-managed:true include.release.openshift.io/self-managed-high-availability:true service.alpha.openshift.io/serving-cert-secret-name:package-server-manager-serving-cert service.alpha.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1740288168 service.beta.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1740288168] [{config.openshift.io/v1 ClusterVersion version 9101b518-476b-4eea-8fa6-69b0534e5caa 0xc00796458b \\\\u003cnil\\\\u003e}] [] []},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{N\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T11:18:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\"
,\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cn6f7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b58f62893fb0fc0bd355ef42fb5f15830912cfd3a03b5266cb3c3171dbb8bcb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/
serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cn6f7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://388ea9d9185ed0fed9cbb0da08ab9aa6fcf1d81192fe2646cd51b54245c4e34b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://388ea9d9185ed0fed9cbb0da08ab9aa6fcf1d81192fe2646cd51b54245c4e34b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T11:17:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T11:17:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cn6f7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T11:17:52Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-p8ngn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:18:30Z is after 2025-08-24T17:21:41Z" Jan 26 11:18:30 crc kubenswrapper[4867]: I0126 11:18:30.793561 4867 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-nbvlt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf285485-1027-4bdc-bdfa-934ef32e7f5e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:18:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:18:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:18:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:18:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://764c348147bb67a611bc5252c49dfe8f586e6a1a6d6a9e9c6674aabcc3028804\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:18:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kzhnr\\\",\\\"readOnly\\\":true,\\\"recursiveReadO
nly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e1bb2e7344b3822e63a68d366f4821de6e131a4cca163ad67cf44e2b83f9ccd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:18:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kzhnr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T11:18:05Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-nbvlt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:18:30Z is after 2025-08-24T17:21:41Z" Jan 26 11:18:30 crc kubenswrapper[4867]: I0126 11:18:30.805415 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a4b0f2b9-fac8-442e-89a1-43ebff8d4268\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:18:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:18:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2140f6b328ef3b937ef0009c1ce35265a18b51c6efb7e8785870affecd68dace\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://21a6c8852b9648bd5bee43aee9c3fae16363aeaf1ad05dcaa41a04775784b108\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ef4b66f160065f550844a298bf29dcd3f12879c1312554968eba3c1b2268303\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0afa38d4a8ffa664649d154240a8d74ee09bc074127e5edea85ec1de553723fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://0afa38d4a8ffa664649d154240a8d74ee09bc074127e5edea85ec1de553723fe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T11:17:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T11:17:31Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T11:17:30Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:18:30Z is after 2025-08-24T17:21:41Z" Jan 26 11:18:30 crc kubenswrapper[4867]: I0126 11:18:30.818209 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:18:30 crc kubenswrapper[4867]: I0126 11:18:30.818296 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:18:30 crc kubenswrapper[4867]: I0126 11:18:30.818310 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:18:30 crc kubenswrapper[4867]: I0126 11:18:30.818329 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:18:30 crc kubenswrapper[4867]: I0126 11:18:30.818341 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:18:30Z","lastTransitionTime":"2026-01-26T11:18:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:18:30 crc kubenswrapper[4867]: I0126 11:18:30.819404 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e36e94ce-bdbb-4b65-b38a-d591d99ec132\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:18:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:18:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://60d611d38d0a8a4fafcb8c6a4598878531015a3c9a31bee179693c997be0f5ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-d
ir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e7960050fb97fd034b58d207bdcc7aa794be098102ebc9b50f3b011c2079a292\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8e9df6d5bbbaa36f6ffe9e36239da17b2a7d5d85edebc9f108ede9bf7b7c0a2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://08f8529b91df2d7dcbf1fac6018657124b3922dfd4ad68bd7cf315594d6e7f81\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945
c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4d1617eeb2f19df288b984f761116ae902263d7ccf8406998d0e323fb7d52390\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-26T11:17:50Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0126 11:17:44.107498 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0126 11:17:44.109064 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1825304579/tls.crt::/tmp/serving-cert-1825304579/tls.key\\\\\\\"\\\\nI0126 11:17:50.303460 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0126 11:17:50.308764 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0126 11:17:50.308805 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0126 11:17:50.308906 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0126 11:17:50.308924 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0126 11:17:50.322907 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0126 11:17:50.322946 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 11:17:50.322953 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 11:17:50.322957 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0126 11:17:50.322960 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0126 11:17:50.322963 1 secure_serving.go:69] Use of insecure cipher 
'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0126 11:17:50.322966 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0126 11:17:50.323261 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0126 11:17:50.330430 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T11:17:33Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d41fab03911c1d3b84f4c34da0d1c8e82c8f2691450b5f9663e31ce329bf04b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:33Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b69d7e45a6059b1e01f80cc4efb536069c91cae05c8912d18fd48f25c6ebf51e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b33
5e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b69d7e45a6059b1e01f80cc4efb536069c91cae05c8912d18fd48f25c6ebf51e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T11:17:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T11:17:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T11:17:30Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:18:30Z is after 2025-08-24T17:21:41Z" Jan 26 11:18:30 crc kubenswrapper[4867]: I0126 11:18:30.831986 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0d7b1047-65d1-4695-b5a5-95ea6a8c3b54\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://237a29b411629c3650250f62fdec7d2c5412763d6cabb2ee0fe8e5e19e320e15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ef0db1149c70e41b7f44d00d43f883d14fcf635d6f34462dd37c8a550a15a4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://45d3dc673a8d20531ae5077a1196a368cac64f6afd15c4eb8f3910ee9bf51317\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc2ddda5781094e59bb54ddf724fa4f948a047b17b01f0185ef651d90a66f36f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-26T11:17:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T11:17:30Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:18:30Z is after 2025-08-24T17:21:41Z" Jan 26 11:18:30 crc kubenswrapper[4867]: I0126 11:18:30.846541 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c572224f36b5a0fc8a86b3e61aa97b96f5bd189d245b31804a11aaa75d04774\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-26T11:18:30Z is after 2025-08-24T17:21:41Z" Jan 26 11:18:30 crc kubenswrapper[4867]: I0126 11:18:30.859378 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://613d61cfb62d18a99623d3e4f89763c04f15484b764f8bb0c7a5e4457a3505d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://fa8d05ed772e4273e8962df2230bc8d45db5cfed967165716438d58c54b9e24b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:18:30Z is after 2025-08-24T17:21:41Z" Jan 26 11:18:30 crc kubenswrapper[4867]: I0126 11:18:30.872517 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-hn8xr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dc37e5d1-ba44-4a54-ac36-ab7cdef17212\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://519d5416896aca5923540078a8bd13f39a190ad4acda3d0bfb3375a5dbfe6b80\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xrjnj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T11:17:52Z\\\"}}\" for pod \"openshift-multus\"/\"multus-hn8xr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:18:30Z is after 2025-08-24T17:21:41Z" Jan 26 11:18:30 crc kubenswrapper[4867]: I0126 11:18:30.895643 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4fbb2033-c5c9-48d1-971f-252b8982e64c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b260f2517962ffaecebf86c2b72c91486b825ca898f907378e8f8cea16d1db92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6843bd212ce4f17d2ae4d53441d56f7be28f8f49c8994458d611159101e193a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2aba77c7c6a2f5450fa60356c3eccf816726f0074d65a1d5805c4278d31157c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e941db67a8021f5eb4eb6e395c2369d052a4cde25d67eb1885768a064185d8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc9e90f1b0e6c09e4595a7e450325550012dc52c6a6fc4a95d14b37c0f939ce7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e22739a3c06bddfda493f25aca23cf47d71c8bde27a6b4bb505592b42350752\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0e22739a3c06bddfda493f25aca23cf47d71c8bde27a6b4bb505592b42350752\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-26T11:17:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T11:17:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://80b44caec3547caebca126742ca337ad1851edc3e9474127528a9e5184b5ec23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://80b44caec3547caebca126742ca337ad1851edc3e9474127528a9e5184b5ec23\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T11:17:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T11:17:32Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://83f7cf77acd26e7af095c4a5432efde85eef6dd5e434141cb5d6abd12770968e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://83f7cf77acd26e7af095c4a5432efde85eef6dd5e434141cb5d6abd12770968e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T11:17:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T11:17:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T11:17:30Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:18:30Z is after 2025-08-24T17:21:41Z" Jan 26 11:18:30 crc kubenswrapper[4867]: I0126 11:18:30.912163 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-g6cth" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"115cad9f-057f-4e63-b408-8fa7a358a191\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ff5997c5ecb7b6a4257e45de25c7b8f3cbe39cc5efe49cf2e2baff5447a947b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c4
2745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4wqzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a97a794991491a7b7ac8c0d20d0fa2f4f36c8ba882962b5c203933431d324105\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4wqzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T11:17:52Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-g6cth\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:18:30Z is after 2025-08-24T17:21:41Z" Jan 26 11:18:30 crc kubenswrapper[4867]: I0126 11:18:30.920611 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:18:30 crc kubenswrapper[4867]: I0126 11:18:30.920671 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:18:30 crc kubenswrapper[4867]: I0126 11:18:30.920684 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:18:30 crc kubenswrapper[4867]: I0126 11:18:30.920700 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:18:30 crc kubenswrapper[4867]: I0126 11:18:30.920711 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:18:30Z","lastTransitionTime":"2026-01-26T11:18:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:18:30 crc kubenswrapper[4867]: I0126 11:18:30.932119 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:48Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:18:30Z is after 2025-08-24T17:21:41Z" Jan 26 11:18:30 crc kubenswrapper[4867]: I0126 11:18:30.947382 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:48Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:18:30Z is after 2025-08-24T17:21:41Z" Jan 26 11:18:30 crc kubenswrapper[4867]: I0126 11:18:30.961326 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-wmdmh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6ad862fa-4af9-49f7-a629-ebf54a83ca45\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://726513fd6aefd76b58a4c8cf08fa4588aadcdb02e8dbe3bb4f23004f11eb29dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6hvh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T11:17:51Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-wmdmh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:18:30Z is after 2025-08-24T17:21:41Z" Jan 26 11:18:30 crc kubenswrapper[4867]: I0126 11:18:30.989139 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-p8ngn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4a3be637-cf04-4c55-bf72-67fdad83cc44\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:52Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:52Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c0a7bea27cb1d32235548b7f7e457dbecaf0ac88bdd9866302c758680990d4af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cn6f7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9025121ad416b7b6ff2f2eb7ecb3961eccbbae7b5ce4b394161ad99c5901199\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cn6f7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://adb68b135bd42f78556598af17aa21dc7daff1246e821ca51cb683e46974a85a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cn6f7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://30fe31f1d314dace6e6ee0bc31b2127ce6e6db198975c78505b0f6f07aa6f30c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:55Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cn6f7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://99df8b0b5fa7178489716f40f806caff969c5045ab573ae343b222ed80bf0799\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cn6f7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://192f11212d63cdabfda1649589946ca65e5c4a38805c85b94e4277f2e8c7bf57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cn6f7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd8dd6742489d18b8ecac8b1a2bebed1c96dcc42086a9aeb709fc228aa34d23b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cd8dd6742489d18b8ecac8b1a2bebed1c96dcc42086a9aeb709fc228aa34d23b\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T11:18:17Z\\\",\\\"message\\\":\\\"ty.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:18:15Z is after 2025-08-24T17:21:41Z]\\\\nI0126 11:18:15.987007 6391 lb_config.go:1031] Cluster endpoints for 
openshift-controller-manager/controller-manager for network=default are: map[]\\\\nI0126 11:18:15.986973 6391 services_controller.go:434] Service openshift-operator-lifecycle-manager/package-server-manager-metrics retrieved from lister for network=default: \\\\u0026Service{ObjectMeta:{package-server-manager-metrics openshift-operator-lifecycle-manager 147f8961-5d3f-4ca0-a8b8-5098188f7802 4671 0 2025-02-23 05:12:35 +0000 UTC \\\\u003cnil\\\\u003e \\\\u003cnil\\\\u003e map[] map[capability.openshift.io/name:OperatorLifecycleManager include.release.openshift.io/ibm-cloud-managed:true include.release.openshift.io/self-managed-high-availability:true service.alpha.openshift.io/serving-cert-secret-name:package-server-manager-serving-cert service.alpha.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1740288168 service.beta.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1740288168] [{config.openshift.io/v1 ClusterVersion version 9101b518-476b-4eea-8fa6-69b0534e5caa 0xc00796458b \\\\u003cnil\\\\u003e}] [] []},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{N\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T11:18:15Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-p8ngn_openshift-ovn-kubernetes(4a3be637-cf04-4c55-bf72-67fdad83cc44)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cn6f7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b58f62893fb0fc0bd355ef42fb5f15830912cfd3a03b5266cb3c3171dbb8bcb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cn6f7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://388ea9d9185ed0fed9cbb0da08ab9aa6fcf1d81192fe2646cd51b54245c4e34b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://388ea9d9185ed0fed9
cbb0da08ab9aa6fcf1d81192fe2646cd51b54245c4e34b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T11:17:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T11:17:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cn6f7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T11:17:52Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-p8ngn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:18:30Z is after 2025-08-24T17:21:41Z" Jan 26 11:18:31 crc kubenswrapper[4867]: I0126 11:18:31.005492 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c572224f36b5a0fc8a86b3e61aa97b96f5bd189d245b31804a11aaa75d04774\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-26T11:18:31Z is after 2025-08-24T17:21:41Z" Jan 26 11:18:31 crc kubenswrapper[4867]: I0126 11:18:31.017702 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-p8ngn_4a3be637-cf04-4c55-bf72-67fdad83cc44/ovnkube-controller/1.log" Jan 26 11:18:31 crc kubenswrapper[4867]: I0126 11:18:31.020312 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://613d61cfb62d18a99623d3e4f89763c04f15484b764f8bb0c7a5e4457a3505d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"o
vnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa8d05ed772e4273e8962df2230bc8d45db5cfed967165716438d58c54b9e24b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:18:31Z is after 2025-08-24T17:21:41Z" Jan 26 11:18:31 crc kubenswrapper[4867]: I0126 11:18:31.020772 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-p8ngn" event={"ID":"4a3be637-cf04-4c55-bf72-67fdad83cc44","Type":"ContainerStarted","Data":"22b7f3763005c5c5c24358d0a42b53a287c54548e8ba4785affb4c90b9c000a2"} Jan 26 11:18:31 crc 
kubenswrapper[4867]: I0126 11:18:31.021287 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-p8ngn" Jan 26 11:18:31 crc kubenswrapper[4867]: I0126 11:18:31.022276 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:18:31 crc kubenswrapper[4867]: I0126 11:18:31.022302 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:18:31 crc kubenswrapper[4867]: I0126 11:18:31.022310 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:18:31 crc kubenswrapper[4867]: I0126 11:18:31.022321 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:18:31 crc kubenswrapper[4867]: I0126 11:18:31.022330 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:18:31Z","lastTransitionTime":"2026-01-26T11:18:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:18:31 crc kubenswrapper[4867]: I0126 11:18:31.034351 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-hn8xr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dc37e5d1-ba44-4a54-ac36-ab7cdef17212\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://519d5416896aca5923540078a8bd13f39a190ad4acda3d0bfb3375a5dbfe6b80\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\
",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xrjnj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T11:17:52Z\\\"}}\" for pod \"openshift-multus\"/\"multus-hn8xr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:18:31Z 
is after 2025-08-24T17:21:41Z" Jan 26 11:18:31 crc kubenswrapper[4867]: I0126 11:18:31.047652 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-nbvlt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf285485-1027-4bdc-bdfa-934ef32e7f5e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:18:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:18:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:18:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:18:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://764c348147bb67a611bc5252c49dfe8f586e6a1a6d6a9e9c6674aabcc3028804\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:18:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},
{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kzhnr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e1bb2e7344b3822e63a68d366f4821de6e131a4cca163ad67cf44e2b83f9ccd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:18:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kzhnr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T11:18:05Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-nbvlt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:18:31Z is after 2025-08-24T17:21:41Z" Jan 26 11:18:31 crc kubenswrapper[4867]: I0126 11:18:31.059809 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to 
patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a4b0f2b9-fac8-442e-89a1-43ebff8d4268\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:18:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:18:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2140f6b328ef3b937ef0009c1ce35265a18b51c6efb7e8785870affecd68dace\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://21a6c8852b9648bd5bee43aee9c3fae16363aeaf1ad05dcaa41a04775784b108\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ef4b66f160065f550844a298bf29dcd3f12879c1312554968eba3c1b2268303\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0afa38d4a8ffa664649d154240a8d74ee09bc074127e5edea85ec1de553723fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\
\":{\\\"containerID\\\":\\\"cri-o://0afa38d4a8ffa664649d154240a8d74ee09bc074127e5edea85ec1de553723fe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T11:17:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T11:17:31Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T11:17:30Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:18:31Z is after 2025-08-24T17:21:41Z" Jan 26 11:18:31 crc kubenswrapper[4867]: I0126 11:18:31.073418 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e36e94ce-bdbb-4b65-b38a-d591d99ec132\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:18:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:18:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://60d611d38d0a8a4fafcb8c6a4598878531015a3c9a31bee179693c997be0f5ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e7960050fb97fd034b58d207bdcc7aa794be098102ebc9b50f3b011c2079a292\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8e9df6d5bbbaa36f6ffe9e36239da17b2a7d5d85edebc9f108ede9bf7b7c0a2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://08f8529b91df2d7dcbf1fac6018657124b3922dfd4ad68bd7cf315594d6e7f81\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4d1617eeb2f19df288b984f761116ae902263d7ccf8406998d0e323fb7d52390\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-26T11:17:50Z\\\"
,\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0126 11:17:44.107498 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0126 11:17:44.109064 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1825304579/tls.crt::/tmp/serving-cert-1825304579/tls.key\\\\\\\"\\\\nI0126 11:17:50.303460 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0126 11:17:50.308764 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0126 11:17:50.308805 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0126 11:17:50.308906 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0126 11:17:50.308924 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0126 11:17:50.322907 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0126 11:17:50.322946 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 11:17:50.322953 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 11:17:50.322957 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0126 11:17:50.322960 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0126 11:17:50.322963 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0126 11:17:50.322966 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0126 11:17:50.323261 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all 
endpoints registered and discovery information is complete\\\\nF0126 11:17:50.330430 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T11:17:33Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d41fab03911c1d3b84f4c34da0d1c8e82c8f2691450b5f9663e31ce329bf04b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:33Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b69d7e45a6059b1e01f80cc4efb536069c91cae05c8912d18fd48f25c6ebf51e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b69d7e45a6059b1e01f80cc4efb536069
c91cae05c8912d18fd48f25c6ebf51e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T11:17:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T11:17:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T11:17:30Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:18:31Z is after 2025-08-24T17:21:41Z" Jan 26 11:18:31 crc kubenswrapper[4867]: I0126 11:18:31.086802 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0d7b1047-65d1-4695-b5a5-95ea6a8c3b54\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://237a29b411629c3650250f62fdec7d2c5412763d6cabb2ee0fe8e5e19e320e15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ef0db1149c70e41b7f44d00d43f883d14fcf635d6f34462dd37c8a550a15a4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://45d3dc673a8d20531ae5077a1196a368cac64f6afd15c4eb8f3910ee9bf51317\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc2ddda5781094e59bb54ddf724fa4f948a047b17b01f0185ef651d90a66f36f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-26T11:17:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T11:17:30Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:18:31Z is after 2025-08-24T17:21:41Z" Jan 26 11:18:31 crc kubenswrapper[4867]: I0126 11:18:31.096833 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-nmdmx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed024510-edc6-4306-b54b-63facba64419\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:18:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:18:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:18:06Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:18:06Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lvcgj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lvcgj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T11:18:06Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-nmdmx\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:18:31Z is after 2025-08-24T17:21:41Z" Jan 26 11:18:31 crc kubenswrapper[4867]: I0126 11:18:31.109597 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:48Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:18:31Z is after 2025-08-24T17:21:41Z" Jan 26 11:18:31 crc kubenswrapper[4867]: I0126 11:18:31.124179 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:54Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:54Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8b0139861b48347f9916ca780499486eda856ef7c08cfe6c57201daf085bf61\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-26T11:18:31Z is after 2025-08-24T17:21:41Z" Jan 26 11:18:31 crc kubenswrapper[4867]: I0126 11:18:31.124626 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:18:31 crc kubenswrapper[4867]: I0126 11:18:31.124675 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:18:31 crc kubenswrapper[4867]: I0126 11:18:31.124688 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:18:31 crc kubenswrapper[4867]: I0126 11:18:31.124709 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:18:31 crc kubenswrapper[4867]: I0126 11:18:31.124723 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:18:31Z","lastTransitionTime":"2026-01-26T11:18:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:18:31 crc kubenswrapper[4867]: I0126 11:18:31.143470 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-9fjlf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d0cb57c7-fd32-41c2-b873-a3f017b9f1b1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:18:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:18:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:18:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c7012fb0651d46334c26887a02a5c44a8fc67c2ad3539e5321e16b57071b9a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:18:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhrdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://49541a51fced16e579b4cd10875fe7e660d7674508aba3ea44011642241e6556\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://49541a51fced16e579b4cd10875fe7e660d7674508aba3ea44011642241e6556\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T11:17:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T11:17:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhrdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://de0898b43ff7857dd40569f0552c8802c9bb5ecdf3cbd23f552dfb402a02044c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"
containerID\\\":\\\"cri-o://de0898b43ff7857dd40569f0552c8802c9bb5ecdf3cbd23f552dfb402a02044c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T11:17:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T11:17:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhrdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f299de50c4b1a26c4ed82f6f7b17c063e00b8a9de478fa0adafb4abb50195c5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f299de50c4b1a26c4ed82f6f7b17c063e00b8a9de478fa0adafb4abb50195c5f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T11:17:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T11:17:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveRead
Only\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhrdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5bf54cdee66861e0419af5e4cabd6e8410b771df717f2e958f5eda8d14c47862\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5bf54cdee66861e0419af5e4cabd6e8410b771df717f2e958f5eda8d14c47862\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T11:17:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T11:17:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhrdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d802b7e0dce47eace09ec80bf6299fbe57b588891ebab7f34de835f6a7314ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\"
:0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3d802b7e0dce47eace09ec80bf6299fbe57b588891ebab7f34de835f6a7314ac\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T11:17:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T11:17:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhrdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e77fa1081a2d8d26fe07f92196b85dd5b8eb28583ce5489e2f5cbcfc6fff8780\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e77fa1081a2d8d26fe07f92196b85dd5b8eb28583ce5489e2f5cbcfc6fff8780\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T11:18:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T11:18:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhrdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\"
,\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T11:17:52Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-9fjlf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:18:31Z is after 2025-08-24T17:21:41Z" Jan 26 11:18:31 crc kubenswrapper[4867]: I0126 11:18:31.156866 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-nxkwj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"53ae46dc-30ef-4dfd-b80e-bacd7542634f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a6b647bab472836bbf6aebd01d20d186c5a3fb95f20cc9f44ec837d93c7df617\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev
@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b5h2x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T11:17:54Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-nxkwj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:18:31Z is after 2025-08-24T17:21:41Z" Jan 26 11:18:31 crc kubenswrapper[4867]: I0126 11:18:31.191165 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-p8ngn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4a3be637-cf04-4c55-bf72-67fdad83cc44\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:52Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:52Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c0a7bea27cb1d32235548b7f7e457dbecaf0ac88bdd9866302c758680990d4af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cn6f7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9025121ad416b7b6ff2f2eb7ecb3961eccbbae7b5ce4b394161ad99c5901199\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cn6f7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://adb68b135bd42f78556598af17aa21dc7daff1246e821ca51cb683e46974a85a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cn6f7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://30fe31f1d314dace6e6ee0bc31b2127ce6e6db198975c78505b0f6f07aa6f30c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:55Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cn6f7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://99df8b0b5fa7178489716f40f806caff969c5045ab573ae343b222ed80bf0799\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cn6f7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://192f11212d63cdabfda1649589946ca65e5c4a38805c85b94e4277f2e8c7bf57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cn6f7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://22b7f3763005c5c5c24358d0a42b53a287c54548e8ba4785affb4c90b9c000a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cd8dd6742489d18b8ecac8b1a2bebed1c96dcc42086a9aeb709fc228aa34d23b\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T11:18:17Z\\\",\\\"message\\\":\\\"ty.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:18:15Z is after 2025-08-24T17:21:41Z]\\\\nI0126 11:18:15.987007 6391 lb_config.go:1031] Cluster endpoints for 
openshift-controller-manager/controller-manager for network=default are: map[]\\\\nI0126 11:18:15.986973 6391 services_controller.go:434] Service openshift-operator-lifecycle-manager/package-server-manager-metrics retrieved from lister for network=default: \\\\u0026Service{ObjectMeta:{package-server-manager-metrics openshift-operator-lifecycle-manager 147f8961-5d3f-4ca0-a8b8-5098188f7802 4671 0 2025-02-23 05:12:35 +0000 UTC \\\\u003cnil\\\\u003e \\\\u003cnil\\\\u003e map[] map[capability.openshift.io/name:OperatorLifecycleManager include.release.openshift.io/ibm-cloud-managed:true include.release.openshift.io/self-managed-high-availability:true service.alpha.openshift.io/serving-cert-secret-name:package-server-manager-serving-cert service.alpha.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1740288168 service.beta.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1740288168] [{config.openshift.io/v1 ClusterVersion version 9101b518-476b-4eea-8fa6-69b0534e5caa 0xc00796458b \\\\u003cnil\\\\u003e}] [] 
[]},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{N\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T11:18:15Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:18:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mo
untPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cn6f7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b58f62893fb0fc0bd355ef42fb5f15830912cfd3a03b5266cb3c3171dbb8bcb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cn6f7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://388ea9d9185ed0fed9cbb0da08ab9aa6fcf1d81192fe2646cd51b54245c4e34b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\
\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://388ea9d9185ed0fed9cbb0da08ab9aa6fcf1d81192fe2646cd51b54245c4e34b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T11:17:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T11:17:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cn6f7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T11:17:52Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-p8ngn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:18:31Z is after 2025-08-24T17:21:41Z" Jan 26 11:18:31 crc kubenswrapper[4867]: I0126 11:18:31.219322 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:48Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:18:31Z is after 2025-08-24T17:21:41Z" Jan 26 11:18:31 crc kubenswrapper[4867]: I0126 11:18:31.227681 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:18:31 crc kubenswrapper[4867]: I0126 11:18:31.227719 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 26 11:18:31 crc kubenswrapper[4867]: I0126 11:18:31.227729 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:18:31 crc kubenswrapper[4867]: I0126 11:18:31.227744 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:18:31 crc kubenswrapper[4867]: I0126 11:18:31.227754 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:18:31Z","lastTransitionTime":"2026-01-26T11:18:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 11:18:31 crc kubenswrapper[4867]: I0126 11:18:31.246079 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:48Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:18:31Z is after 2025-08-24T17:21:41Z" Jan 26 11:18:31 crc kubenswrapper[4867]: I0126 11:18:31.265528 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-wmdmh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6ad862fa-4af9-49f7-a629-ebf54a83ca45\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://726513fd6aefd76b58a4c8cf08fa4588aadcdb02e8dbe3bb4f23004f11eb29dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6hvh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T11:17:51Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-wmdmh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:18:31Z is after 2025-08-24T17:21:41Z" Jan 26 11:18:31 crc kubenswrapper[4867]: I0126 11:18:31.278456 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0d7b1047-65d1-4695-b5a5-95ea6a8c3b54\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://237a29b411629c3650250f62fdec7d2c5412763d6cabb2ee0fe8e5e19e320e15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluste
r-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ef0db1149c70e41b7f44d00d43f883d14fcf635d6f34462dd37c8a550a15a4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://45d3dc673a8d20531ae5077a1196a368cac64f6afd15c4eb8f3910ee9bf51317\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kuber
netes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc2ddda5781094e59bb54ddf724fa4f948a047b17b01f0185ef651d90a66f36f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T11:17:30Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:18:31Z is after 2025-08-24T17:21:41Z" Jan 26 11:18:31 crc kubenswrapper[4867]: I0126 11:18:31.290988 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c572224f36b5a0fc8a86b3e61aa97b96f5bd189d245b31804a11aaa75d04774\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-26T11:18:31Z is after 2025-08-24T17:21:41Z" Jan 26 11:18:31 crc kubenswrapper[4867]: I0126 11:18:31.305025 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://613d61cfb62d18a99623d3e4f89763c04f15484b764f8bb0c7a5e4457a3505d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://fa8d05ed772e4273e8962df2230bc8d45db5cfed967165716438d58c54b9e24b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:18:31Z is after 2025-08-24T17:21:41Z" Jan 26 11:18:31 crc kubenswrapper[4867]: I0126 11:18:31.327119 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-hn8xr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dc37e5d1-ba44-4a54-ac36-ab7cdef17212\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://519d5416896aca5923540078a8bd13f39a190ad4acda3d0bfb3375a5dbfe6b80\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xrjnj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T11:17:52Z\\\"}}\" for pod \"openshift-multus\"/\"multus-hn8xr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:18:31Z is after 2025-08-24T17:21:41Z" Jan 26 11:18:31 crc kubenswrapper[4867]: I0126 11:18:31.330052 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:18:31 crc 
kubenswrapper[4867]: I0126 11:18:31.330106 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:18:31 crc kubenswrapper[4867]: I0126 11:18:31.330121 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:18:31 crc kubenswrapper[4867]: I0126 11:18:31.330140 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:18:31 crc kubenswrapper[4867]: I0126 11:18:31.330151 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:18:31Z","lastTransitionTime":"2026-01-26T11:18:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 11:18:31 crc kubenswrapper[4867]: I0126 11:18:31.341785 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-nbvlt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf285485-1027-4bdc-bdfa-934ef32e7f5e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:18:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:18:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:18:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:18:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://764c348147bb67a611bc5252c49dfe8f586e6a1a6d6a9e9c6674aabcc3028804\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:18:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kzhnr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e1bb2e7344b3822e63a68d366f4821de6e13
1a4cca163ad67cf44e2b83f9ccd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:18:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kzhnr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T11:18:05Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-nbvlt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:18:31Z is after 2025-08-24T17:21:41Z" Jan 26 11:18:31 crc kubenswrapper[4867]: I0126 11:18:31.361878 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a4b0f2b9-fac8-442e-89a1-43ebff8d4268\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:18:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:18:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2140f6b328ef3b937ef0009c1ce35265a18b51c6efb7e8785870affecd68dace\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://21a6c8852b9648bd5bee43aee9c3fae16363aeaf1ad05dcaa41a04775784b108\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ef4b66f160065f550844a298bf29dcd3f12879c1312554968eba3c1b2268303\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0afa38d4a8ffa664649d154240a8d74ee09bc074127e5edea85ec1de553723fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://0afa38d4a8ffa664649d154240a8d74ee09bc074127e5edea85ec1de553723fe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T11:17:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T11:17:31Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T11:17:30Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:18:31Z is after 2025-08-24T17:21:41Z" Jan 26 11:18:31 crc kubenswrapper[4867]: I0126 11:18:31.378681 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e36e94ce-bdbb-4b65-b38a-d591d99ec132\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:18:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:18:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://60d611d38d0a8a4fafcb8c6a4598878531015a3c9a31bee179693c997be0f5ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e7960050fb97fd034b58d207bdcc7aa794be098102ebc9b50f3b011c2079a292\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8e9df6d5bbbaa36f6ffe9e36239da17b2a7d5d85edebc9f108ede9bf7b7c0a2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://08f8529b91df2d7dcbf1fac6018657124b3922dfd4ad68bd7cf315594d6e7f81\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4d1617eeb2f19df288b984f761116ae902263d7ccf8406998d0e323fb7d52390\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-26T11:17:50Z\\\"
,\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0126 11:17:44.107498 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0126 11:17:44.109064 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1825304579/tls.crt::/tmp/serving-cert-1825304579/tls.key\\\\\\\"\\\\nI0126 11:17:50.303460 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0126 11:17:50.308764 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0126 11:17:50.308805 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0126 11:17:50.308906 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0126 11:17:50.308924 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0126 11:17:50.322907 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0126 11:17:50.322946 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 11:17:50.322953 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 11:17:50.322957 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0126 11:17:50.322960 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0126 11:17:50.322963 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0126 11:17:50.322966 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0126 11:17:50.323261 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all 
endpoints registered and discovery information is complete\\\\nF0126 11:17:50.330430 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T11:17:33Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d41fab03911c1d3b84f4c34da0d1c8e82c8f2691450b5f9663e31ce329bf04b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:33Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b69d7e45a6059b1e01f80cc4efb536069c91cae05c8912d18fd48f25c6ebf51e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b69d7e45a6059b1e01f80cc4efb536069
c91cae05c8912d18fd48f25c6ebf51e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T11:17:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T11:17:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T11:17:30Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:18:31Z is after 2025-08-24T17:21:41Z" Jan 26 11:18:31 crc kubenswrapper[4867]: I0126 11:18:31.390418 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-nxkwj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"53ae46dc-30ef-4dfd-b80e-bacd7542634f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a6b647bab472836bbf6aebd01d20d186c5a3fb95f20cc9f44ec837d93c7df617\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b5h2x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T11:17:54Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-nxkwj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:18:31Z is after 2025-08-24T17:21:41Z" Jan 26 11:18:31 crc kubenswrapper[4867]: I0126 11:18:31.404312 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-nmdmx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed024510-edc6-4306-b54b-63facba64419\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:18:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:18:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:18:06Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:18:06Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lvcgj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lvcgj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T11:18:06Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-nmdmx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:18:31Z is after 2025-08-24T17:21:41Z" Jan 26 11:18:31 crc 
kubenswrapper[4867]: I0126 11:18:31.420272 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:48Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:18:31Z is after 2025-08-24T17:21:41Z" Jan 26 11:18:31 crc kubenswrapper[4867]: I0126 11:18:31.432694 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:18:31 crc kubenswrapper[4867]: I0126 11:18:31.432739 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:18:31 crc kubenswrapper[4867]: I0126 11:18:31.432748 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:18:31 crc kubenswrapper[4867]: I0126 11:18:31.432762 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:18:31 crc kubenswrapper[4867]: I0126 11:18:31.432773 4867 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:18:31Z","lastTransitionTime":"2026-01-26T11:18:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 11:18:31 crc kubenswrapper[4867]: I0126 11:18:31.434535 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:54Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:54Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8b0139861b48347f9916ca780499486eda856ef7c08cfe6c57201daf085bf61\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\
"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:18:31Z is after 2025-08-24T17:21:41Z" Jan 26 11:18:31 crc kubenswrapper[4867]: I0126 11:18:31.450934 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-9fjlf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d0cb57c7-fd32-41c2-b873-a3f017b9f1b1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:18:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:18:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:18:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c7012fb0651d46334c26887a02a5c44a8fc67c2ad3539e5321e16b57071b9a6\\\",\\\"image\\\":\\\"quay.io/openshift-rele
ase-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:18:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhrdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://49541a51fced16e579b4cd10875fe7e660d7674508aba3ea44011642241e6556\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://49541a51fced16e579b4cd10875fe7e660d7674508aba3ea44011642241e6556\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T11:17:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T11:17:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\
\":\\\"kube-api-access-rhrdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://de0898b43ff7857dd40569f0552c8802c9bb5ecdf3cbd23f552dfb402a02044c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://de0898b43ff7857dd40569f0552c8802c9bb5ecdf3cbd23f552dfb402a02044c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T11:17:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T11:17:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhrdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f299de50c4b1a26c4ed82f6f7b17c063e00b8a9de478fa0adafb4abb50195c5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\
\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f299de50c4b1a26c4ed82f6f7b17c063e00b8a9de478fa0adafb4abb50195c5f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T11:17:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T11:17:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhrdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5bf54cdee66861e0419af5e4cabd6e8410b771df717f2e958f5eda8d14c47862\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5bf54cdee66861e0419af5e4cabd6e8410b771df717f2e958f5eda8d14c47862\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T11:17:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T11:17:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run
/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhrdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d802b7e0dce47eace09ec80bf6299fbe57b588891ebab7f34de835f6a7314ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3d802b7e0dce47eace09ec80bf6299fbe57b588891ebab7f34de835f6a7314ac\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T11:17:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T11:17:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhrdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e77fa1081a2d8d26fe07f92196b85dd5b8eb28583ce5489e2f5cbcfc6fff8780\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{
\\\"containerID\\\":\\\"cri-o://e77fa1081a2d8d26fe07f92196b85dd5b8eb28583ce5489e2f5cbcfc6fff8780\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T11:18:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T11:18:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhrdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T11:17:52Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-9fjlf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:18:31Z is after 2025-08-24T17:21:41Z" Jan 26 11:18:31 crc kubenswrapper[4867]: I0126 11:18:31.475066 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4fbb2033-c5c9-48d1-971f-252b8982e64c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b260f2517962ffaecebf86c2b72c91486b825ca898f907378e8f8cea16d1db92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6843bd212ce4f17d2ae4d53441d56f7be28f8f49c8994458d611159101e193a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2aba77c7c6a2f5450fa60356c3eccf816726f0074d65a1d5805c4278d31157c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e941db67a8021f5eb4eb6e395c2369d052a4cde25d67eb1885768a064185d8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc9e90f1b0e6c09e4595a7e450325550012dc52c6a6fc4a95d14b37c0f939ce7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e22739a3c06bddfda493f25aca23cf47d71c8bde27a6b4bb505592b42350752\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0e22739a3c06bddfda493f25aca23cf47d71c8bde27a6b4bb505592b42350752\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-26T11:17:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T11:17:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://80b44caec3547caebca126742ca337ad1851edc3e9474127528a9e5184b5ec23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://80b44caec3547caebca126742ca337ad1851edc3e9474127528a9e5184b5ec23\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T11:17:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T11:17:32Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://83f7cf77acd26e7af095c4a5432efde85eef6dd5e434141cb5d6abd12770968e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://83f7cf77acd26e7af095c4a5432efde85eef6dd5e434141cb5d6abd12770968e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T11:17:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T11:17:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T11:17:30Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:18:31Z is after 2025-08-24T17:21:41Z" Jan 26 11:18:31 crc kubenswrapper[4867]: I0126 11:18:31.487754 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-g6cth" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"115cad9f-057f-4e63-b408-8fa7a358a191\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ff5997c5ecb7b6a4257e45de25c7b8f3cbe39cc5efe49cf2e2baff5447a947b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c4
2745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4wqzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a97a794991491a7b7ac8c0d20d0fa2f4f36c8ba882962b5c203933431d324105\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4wqzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T11:17:52Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-g6cth\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:18:31Z is after 2025-08-24T17:21:41Z" Jan 26 11:18:31 crc kubenswrapper[4867]: I0126 11:18:31.534988 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:18:31 crc kubenswrapper[4867]: I0126 11:18:31.535025 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:18:31 crc kubenswrapper[4867]: I0126 11:18:31.535033 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:18:31 crc kubenswrapper[4867]: I0126 11:18:31.535048 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:18:31 crc kubenswrapper[4867]: I0126 11:18:31.535057 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:18:31Z","lastTransitionTime":"2026-01-26T11:18:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:18:31 crc kubenswrapper[4867]: I0126 11:18:31.554473 4867 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-23 15:58:30.323644593 +0000 UTC Jan 26 11:18:31 crc kubenswrapper[4867]: I0126 11:18:31.638574 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:18:31 crc kubenswrapper[4867]: I0126 11:18:31.638615 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:18:31 crc kubenswrapper[4867]: I0126 11:18:31.638631 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:18:31 crc kubenswrapper[4867]: I0126 11:18:31.638646 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:18:31 crc kubenswrapper[4867]: I0126 11:18:31.638654 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:18:31Z","lastTransitionTime":"2026-01-26T11:18:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:18:31 crc kubenswrapper[4867]: I0126 11:18:31.742137 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:18:31 crc kubenswrapper[4867]: I0126 11:18:31.742176 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:18:31 crc kubenswrapper[4867]: I0126 11:18:31.742184 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:18:31 crc kubenswrapper[4867]: I0126 11:18:31.742199 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:18:31 crc kubenswrapper[4867]: I0126 11:18:31.742208 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:18:31Z","lastTransitionTime":"2026-01-26T11:18:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:18:31 crc kubenswrapper[4867]: I0126 11:18:31.844854 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:18:31 crc kubenswrapper[4867]: I0126 11:18:31.844937 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:18:31 crc kubenswrapper[4867]: I0126 11:18:31.844955 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:18:31 crc kubenswrapper[4867]: I0126 11:18:31.844982 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:18:31 crc kubenswrapper[4867]: I0126 11:18:31.845005 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:18:31Z","lastTransitionTime":"2026-01-26T11:18:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:18:31 crc kubenswrapper[4867]: I0126 11:18:31.949173 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:18:31 crc kubenswrapper[4867]: I0126 11:18:31.949316 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:18:31 crc kubenswrapper[4867]: I0126 11:18:31.949341 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:18:31 crc kubenswrapper[4867]: I0126 11:18:31.949392 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:18:31 crc kubenswrapper[4867]: I0126 11:18:31.949419 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:18:31Z","lastTransitionTime":"2026-01-26T11:18:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:18:32 crc kubenswrapper[4867]: I0126 11:18:32.031647 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-p8ngn_4a3be637-cf04-4c55-bf72-67fdad83cc44/ovnkube-controller/2.log" Jan 26 11:18:32 crc kubenswrapper[4867]: I0126 11:18:32.032487 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-p8ngn_4a3be637-cf04-4c55-bf72-67fdad83cc44/ovnkube-controller/1.log" Jan 26 11:18:32 crc kubenswrapper[4867]: I0126 11:18:32.035452 4867 generic.go:334] "Generic (PLEG): container finished" podID="4a3be637-cf04-4c55-bf72-67fdad83cc44" containerID="22b7f3763005c5c5c24358d0a42b53a287c54548e8ba4785affb4c90b9c000a2" exitCode=1 Jan 26 11:18:32 crc kubenswrapper[4867]: I0126 11:18:32.035530 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-p8ngn" event={"ID":"4a3be637-cf04-4c55-bf72-67fdad83cc44","Type":"ContainerDied","Data":"22b7f3763005c5c5c24358d0a42b53a287c54548e8ba4785affb4c90b9c000a2"} Jan 26 11:18:32 crc kubenswrapper[4867]: I0126 11:18:32.035591 4867 scope.go:117] "RemoveContainer" containerID="cd8dd6742489d18b8ecac8b1a2bebed1c96dcc42086a9aeb709fc228aa34d23b" Jan 26 11:18:32 crc kubenswrapper[4867]: I0126 11:18:32.036888 4867 scope.go:117] "RemoveContainer" containerID="22b7f3763005c5c5c24358d0a42b53a287c54548e8ba4785affb4c90b9c000a2" Jan 26 11:18:32 crc kubenswrapper[4867]: E0126 11:18:32.037273 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-p8ngn_openshift-ovn-kubernetes(4a3be637-cf04-4c55-bf72-67fdad83cc44)\"" pod="openshift-ovn-kubernetes/ovnkube-node-p8ngn" podUID="4a3be637-cf04-4c55-bf72-67fdad83cc44" Jan 26 11:18:32 crc kubenswrapper[4867]: I0126 11:18:32.051931 4867 kubelet_node_status.go:724] "Recording event 
message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:18:32 crc kubenswrapper[4867]: I0126 11:18:32.052014 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:18:32 crc kubenswrapper[4867]: I0126 11:18:32.052034 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:18:32 crc kubenswrapper[4867]: I0126 11:18:32.052061 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:18:32 crc kubenswrapper[4867]: I0126 11:18:32.052076 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:18:32Z","lastTransitionTime":"2026-01-26T11:18:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:18:32 crc kubenswrapper[4867]: I0126 11:18:32.076147 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4fbb2033-c5c9-48d1-971f-252b8982e64c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b260f2517962ffaecebf86c2b72c91486b825ca898f907378e8f8cea16d1db92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\
\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6843bd212ce4f17d2ae4d53441d56f7be28f8f49c8994458d611159101e193a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2aba77c7c6a2f5450fa60356c3eccf816726f0074d65a1d5805c4278d31157c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e941db67a8021f5eb4eb6e395c2369d052a4cde25d67eb1885768a064185d8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-
v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc9e90f1b0e6c09e4595a7e450325550012dc52c6a6fc4a95d14b37c0f939ce7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e22739a3c06bddfda493f25aca23cf47d71c8bde27a6b4bb505592b42350752\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":
true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0e22739a3c06bddfda493f25aca23cf47d71c8bde27a6b4bb505592b42350752\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T11:17:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T11:17:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://80b44caec3547caebca126742ca337ad1851edc3e9474127528a9e5184b5ec23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://80b44caec3547caebca126742ca337ad1851edc3e9474127528a9e5184b5ec23\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T11:17:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T11:17:32Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://83f7cf77acd26e7af095c4a5432efde85eef6dd5e434141cb5d6abd12770968e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://83f7cf77acd26e7af095c4a5432efde85eef6dd5e434141cb5d6abd12770968e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T11:17:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2
026-01-26T11:17:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T11:17:30Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:18:32Z is after 2025-08-24T17:21:41Z" Jan 26 11:18:32 crc kubenswrapper[4867]: I0126 11:18:32.096114 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-g6cth" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"115cad9f-057f-4e63-b408-8fa7a358a191\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ff5997c5ecb7b6a4257e45de25c7b8f3cbe39cc5efe49cf2e2baff5447a947b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4wqzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a97a794991491a7b7ac8c0d20d0fa2f4f36c8ba8
82962b5c203933431d324105\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4wqzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T11:17:52Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-g6cth\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:18:32Z is after 2025-08-24T17:21:41Z" Jan 26 11:18:32 crc kubenswrapper[4867]: I0126 11:18:32.117835 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:48Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:18:32Z is after 2025-08-24T17:21:41Z" Jan 26 11:18:32 crc kubenswrapper[4867]: I0126 11:18:32.138384 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:48Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:18:32Z is after 2025-08-24T17:21:41Z" Jan 26 11:18:32 crc kubenswrapper[4867]: I0126 11:18:32.154867 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:18:32 crc kubenswrapper[4867]: I0126 11:18:32.154917 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:18:32 crc kubenswrapper[4867]: I0126 11:18:32.154935 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:18:32 crc kubenswrapper[4867]: I0126 
11:18:32.154954 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:18:32 crc kubenswrapper[4867]: I0126 11:18:32.154967 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:18:32Z","lastTransitionTime":"2026-01-26T11:18:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 11:18:32 crc kubenswrapper[4867]: I0126 11:18:32.156193 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-wmdmh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6ad862fa-4af9-49f7-a629-ebf54a83ca45\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://726513fd6aefd76b58a4c8cf08fa4588aadcdb02e8dbe3bb4f23004f11eb29dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888c
f2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6hvh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T11:17:51Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-wmdmh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:18:32Z is after 2025-08-24T17:21:41Z" Jan 26 11:18:32 crc kubenswrapper[4867]: I0126 11:18:32.187132 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-p8ngn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4a3be637-cf04-4c55-bf72-67fdad83cc44\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:52Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:52Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c0a7bea27cb1d32235548b7f7e457dbecaf0ac88bdd9866302c758680990d4af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cn6f7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9025121ad416b7b6ff2f2eb7ecb3961eccbbae7b5ce4b394161ad99c5901199\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cn6f7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://adb68b135bd42f78556598af17aa21dc7daff1246e821ca51cb683e46974a85a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cn6f7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://30fe31f1d314dace6e6ee0bc31b2127ce6e6db198975c78505b0f6f07aa6f30c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:55Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cn6f7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://99df8b0b5fa7178489716f40f806caff969c5045ab573ae343b222ed80bf0799\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cn6f7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://192f11212d63cdabfda1649589946ca65e5c4a38805c85b94e4277f2e8c7bf57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cn6f7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://22b7f3763005c5c5c24358d0a42b53a287c54548e8ba4785affb4c90b9c000a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cd8dd6742489d18b8ecac8b1a2bebed1c96dcc42086a9aeb709fc228aa34d23b\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T11:18:17Z\\\",\\\"message\\\":\\\"ty.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:18:15Z is after 2025-08-24T17:21:41Z]\\\\nI0126 11:18:15.987007 6391 lb_config.go:1031] Cluster endpoints for 
openshift-controller-manager/controller-manager for network=default are: map[]\\\\nI0126 11:18:15.986973 6391 services_controller.go:434] Service openshift-operator-lifecycle-manager/package-server-manager-metrics retrieved from lister for network=default: \\\\u0026Service{ObjectMeta:{package-server-manager-metrics openshift-operator-lifecycle-manager 147f8961-5d3f-4ca0-a8b8-5098188f7802 4671 0 2025-02-23 05:12:35 +0000 UTC \\\\u003cnil\\\\u003e \\\\u003cnil\\\\u003e map[] map[capability.openshift.io/name:OperatorLifecycleManager include.release.openshift.io/ibm-cloud-managed:true include.release.openshift.io/self-managed-high-availability:true service.alpha.openshift.io/serving-cert-secret-name:package-server-manager-serving-cert service.alpha.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1740288168 service.beta.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1740288168] [{config.openshift.io/v1 ClusterVersion version 9101b518-476b-4eea-8fa6-69b0534e5caa 0xc00796458b \\\\u003cnil\\\\u003e}] [] []},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{N\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T11:18:15Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://22b7f3763005c5c5c24358d0a42b53a287c54548e8ba4785affb4c90b9c000a2\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T11:18:31Z\\\",\\\"message\\\":\\\"ersions/factory.go:117\\\\nI0126 11:18:31.446431 6551 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0126 11:18:31.446674 6551 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0126 11:18:31.446691 6551 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0126 11:18:31.446703 6551 handler.go:190] 
Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0126 11:18:31.446732 6551 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0126 11:18:31.446742 6551 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0126 11:18:31.446764 6551 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0126 11:18:31.446764 6551 handler.go:208] Removed *v1.Node event handler 7\\\\nI0126 11:18:31.446772 6551 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0126 11:18:31.446779 6551 handler.go:208] Removed *v1.Node event handler 2\\\\nI0126 11:18:31.446791 6551 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0126 11:18:31.447312 6551 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0126 11:18:31.447355 6551 factory.go:656] Stopping watch factory\\\\nI0126 11:18:31.447369 6551 ovnkube.go:599] Stopped ovnkube\\\\nI0126 1\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T11:18:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/et
c/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cn6f7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b58f62893fb0fc0bd355ef42fb5f15830912cfd3a03b5266cb3c3171dbb8bcb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets
/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cn6f7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://388ea9d9185ed0fed9cbb0da08ab9aa6fcf1d81192fe2646cd51b54245c4e34b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://388ea9d9185ed0fed9cbb0da08ab9aa6fcf1d81192fe2646cd51b54245c4e34b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T11:17:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T11:17:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cn6f7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T11:17:52Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-p8ngn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:18:32Z is after 2025-08-24T17:21:41Z" Jan 26 11:18:32 crc kubenswrapper[4867]: I0126 11:18:32.201416 4867 status_manager.go:875] "Failed to update 
status for pod" pod="openshift-multus/multus-hn8xr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dc37e5d1-ba44-4a54-ac36-ab7cdef17212\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://519d5416896aca5923540078a8bd13f39a190ad4acda3d0bfb3375a5dbfe6b80\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\
\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xrjnj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T11:17:52Z\\\"}}\" for pod \"openshift-multus\"/\"multus-hn8xr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:18:32Z is after 2025-08-24T17:21:41Z" Jan 26 11:18:32 crc kubenswrapper[4867]: I0126 11:18:32.213625 4867 status_manager.go:875] "Failed to update status 
for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-nbvlt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf285485-1027-4bdc-bdfa-934ef32e7f5e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:18:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:18:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:18:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:18:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://764c348147bb67a611bc5252c49dfe8f586e6a1a6d6a9e9c6674aabcc3028804\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:18:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kzhnr\\\",\\\"readOnly\\\":true,\\\"recur
siveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e1bb2e7344b3822e63a68d366f4821de6e131a4cca163ad67cf44e2b83f9ccd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:18:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kzhnr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T11:18:05Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-nbvlt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:18:32Z is after 2025-08-24T17:21:41Z" Jan 26 11:18:32 crc kubenswrapper[4867]: I0126 11:18:32.223474 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a4b0f2b9-fac8-442e-89a1-43ebff8d4268\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:18:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:18:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2140f6b328ef3b937ef0009c1ce35265a18b51c6efb7e8785870affecd68dace\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://21a6c8852b9648bd5bee43aee9c3fae16363aeaf1ad05dcaa41a04775784b108\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ef4b66f160065f550844a298bf29dcd3f12879c1312554968eba3c1b2268303\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0afa38d4a8ffa664649d154240a8d74ee09bc074127e5edea85ec1de553723fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://0afa38d4a8ffa664649d154240a8d74ee09bc074127e5edea85ec1de553723fe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T11:17:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T11:17:31Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T11:17:30Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:18:32Z is after 2025-08-24T17:21:41Z" Jan 26 11:18:32 crc kubenswrapper[4867]: I0126 11:18:32.240396 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e36e94ce-bdbb-4b65-b38a-d591d99ec132\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:18:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:18:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://60d611d38d0a8a4fafcb8c6a4598878531015a3c9a31bee179693c997be0f5ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e7960050fb97fd034b58d207bdcc7aa794be098102ebc9b50f3b011c2079a292\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8e9df6d5bbbaa36f6ffe9e36239da17b2a7d5d85edebc9f108ede9bf7b7c0a2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://08f8529b91df2d7dcbf1fac6018657124b3922dfd4ad68bd7cf315594d6e7f81\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4d1617eeb2f19df288b984f761116ae902263d7ccf8406998d0e323fb7d52390\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-26T11:17:50Z\\\"
,\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0126 11:17:44.107498 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0126 11:17:44.109064 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1825304579/tls.crt::/tmp/serving-cert-1825304579/tls.key\\\\\\\"\\\\nI0126 11:17:50.303460 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0126 11:17:50.308764 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0126 11:17:50.308805 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0126 11:17:50.308906 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0126 11:17:50.308924 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0126 11:17:50.322907 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0126 11:17:50.322946 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 11:17:50.322953 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 11:17:50.322957 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0126 11:17:50.322960 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0126 11:17:50.322963 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0126 11:17:50.322966 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0126 11:17:50.323261 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all 
endpoints registered and discovery information is complete\\\\nF0126 11:17:50.330430 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T11:17:33Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d41fab03911c1d3b84f4c34da0d1c8e82c8f2691450b5f9663e31ce329bf04b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:33Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b69d7e45a6059b1e01f80cc4efb536069c91cae05c8912d18fd48f25c6ebf51e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b69d7e45a6059b1e01f80cc4efb536069
c91cae05c8912d18fd48f25c6ebf51e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T11:17:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T11:17:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T11:17:30Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:18:32Z is after 2025-08-24T17:21:41Z" Jan 26 11:18:32 crc kubenswrapper[4867]: I0126 11:18:32.258639 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0d7b1047-65d1-4695-b5a5-95ea6a8c3b54\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://237a29b411629c3650250f62fdec7d2c5412763d6cabb2ee0fe8e5e19e320e15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ef0db1149c70e41b7f44d00d43f883d14fcf635d6f34462dd37c8a550a15a4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://45d3dc673a8d20531ae5077a1196a368cac64f6afd15c4eb8f3910ee9bf51317\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc2ddda5781094e59bb54ddf724fa4f948a047b17b01f0185ef651d90a66f36f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-26T11:17:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T11:17:30Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:18:32Z is after 2025-08-24T17:21:41Z" Jan 26 11:18:32 crc kubenswrapper[4867]: I0126 11:18:32.260301 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:18:32 crc kubenswrapper[4867]: I0126 11:18:32.260338 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:18:32 crc kubenswrapper[4867]: I0126 11:18:32.260349 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:18:32 crc kubenswrapper[4867]: I0126 11:18:32.260368 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:18:32 crc kubenswrapper[4867]: I0126 11:18:32.260417 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:18:32Z","lastTransitionTime":"2026-01-26T11:18:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns 
error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 11:18:32 crc kubenswrapper[4867]: I0126 11:18:32.275587 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c572224f36b5a0fc8a86b3e61aa97b96f5bd189d245b31804a11aaa75d04774\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursive
ReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:18:32Z is after 2025-08-24T17:21:41Z" Jan 26 11:18:32 crc kubenswrapper[4867]: I0126 11:18:32.291671 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://613d61cfb62d18a99623d3e4f89763c04f15484b764f8bb0c7a5e4457a3505d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\
\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa8d05ed772e4273e8962df2230bc8d45db5cfed967165716438d58c54b9e24b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:18:32Z is after 2025-08-24T17:21:41Z" Jan 26 11:18:32 crc kubenswrapper[4867]: I0126 11:18:32.306562 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:48Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:18:32Z is after 2025-08-24T17:21:41Z" Jan 26 11:18:32 crc kubenswrapper[4867]: I0126 11:18:32.320742 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:54Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:54Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8b0139861b48347f9916ca780499486eda856ef7c08cfe6c57201daf085bf61\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-26T11:18:32Z is after 2025-08-24T17:21:41Z" Jan 26 11:18:32 crc kubenswrapper[4867]: I0126 11:18:32.336551 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-9fjlf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d0cb57c7-fd32-41c2-b873-a3f017b9f1b1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:18:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:18:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:18:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c7012fb0651d46334c26887a02a5c44a8fc67c2ad3539e5321e16b57071b9a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:18:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-a
ccess-rhrdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://49541a51fced16e579b4cd10875fe7e660d7674508aba3ea44011642241e6556\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://49541a51fced16e579b4cd10875fe7e660d7674508aba3ea44011642241e6556\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T11:17:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T11:17:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhrdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://de0898b43ff7857dd40569f0552c8802c9bb5ecdf3cbd23f552dfb402a02044c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"
started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://de0898b43ff7857dd40569f0552c8802c9bb5ecdf3cbd23f552dfb402a02044c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T11:17:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T11:17:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhrdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f299de50c4b1a26c4ed82f6f7b17c063e00b8a9de478fa0adafb4abb50195c5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f299de50c4b1a26c4ed82f6f7b17c063e00b8a9de478fa0adafb4abb50195c5f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T11:17:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T11:17:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\
\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhrdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5bf54cdee66861e0419af5e4cabd6e8410b771df717f2e958f5eda8d14c47862\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5bf54cdee66861e0419af5e4cabd6e8410b771df717f2e958f5eda8d14c47862\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T11:17:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T11:17:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhrdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d802b7e0dce47eace09ec80bf6299fbe57b588891ebab7f34de835f6a7314ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabout
s-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3d802b7e0dce47eace09ec80bf6299fbe57b588891ebab7f34de835f6a7314ac\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T11:17:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T11:17:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhrdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e77fa1081a2d8d26fe07f92196b85dd5b8eb28583ce5489e2f5cbcfc6fff8780\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e77fa1081a2d8d26fe07f92196b85dd5b8eb28583ce5489e2f5cbcfc6fff8780\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T11:18:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T11:18:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhrdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T11:17:52Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-9fjlf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:18:32Z is after 2025-08-24T17:21:41Z" Jan 26 11:18:32 crc kubenswrapper[4867]: I0126 11:18:32.349201 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-nxkwj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"53ae46dc-30ef-4dfd-b80e-bacd7542634f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a6b647bab472836bbf6aebd01d20d186c5a3fb95f20cc9f44ec837d93c7df617\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"image
ID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b5h2x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T11:17:54Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-nxkwj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:18:32Z is after 2025-08-24T17:21:41Z" Jan 26 11:18:32 crc kubenswrapper[4867]: I0126 11:18:32.361190 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-nmdmx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed024510-edc6-4306-b54b-63facba64419\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:18:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:18:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:18:06Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:18:06Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lvcgj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lvcgj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T11:18:06Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-nmdmx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:18:32Z is after 2025-08-24T17:21:41Z" Jan 26 11:18:32 crc 
kubenswrapper[4867]: I0126 11:18:32.362664 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:18:32 crc kubenswrapper[4867]: I0126 11:18:32.362713 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:18:32 crc kubenswrapper[4867]: I0126 11:18:32.362731 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:18:32 crc kubenswrapper[4867]: I0126 11:18:32.362819 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:18:32 crc kubenswrapper[4867]: I0126 11:18:32.362840 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:18:32Z","lastTransitionTime":"2026-01-26T11:18:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:18:32 crc kubenswrapper[4867]: I0126 11:18:32.466159 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:18:32 crc kubenswrapper[4867]: I0126 11:18:32.466303 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:18:32 crc kubenswrapper[4867]: I0126 11:18:32.466331 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:18:32 crc kubenswrapper[4867]: I0126 11:18:32.466389 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:18:32 crc kubenswrapper[4867]: I0126 11:18:32.466408 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:18:32Z","lastTransitionTime":"2026-01-26T11:18:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 11:18:32 crc kubenswrapper[4867]: I0126 11:18:32.554650 4867 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-08 09:16:21.000108316 +0000 UTC Jan 26 11:18:32 crc kubenswrapper[4867]: I0126 11:18:32.563409 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 11:18:32 crc kubenswrapper[4867]: I0126 11:18:32.563471 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-nmdmx" Jan 26 11:18:32 crc kubenswrapper[4867]: I0126 11:18:32.563566 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 11:18:32 crc kubenswrapper[4867]: I0126 11:18:32.563409 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 11:18:32 crc kubenswrapper[4867]: E0126 11:18:32.563617 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 11:18:32 crc kubenswrapper[4867]: E0126 11:18:32.564346 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 11:18:32 crc kubenswrapper[4867]: E0126 11:18:32.571373 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 11:18:32 crc kubenswrapper[4867]: E0126 11:18:32.571560 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-nmdmx" podUID="ed024510-edc6-4306-b54b-63facba64419" Jan 26 11:18:32 crc kubenswrapper[4867]: I0126 11:18:32.577292 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:18:32 crc kubenswrapper[4867]: I0126 11:18:32.577376 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:18:32 crc kubenswrapper[4867]: I0126 11:18:32.577404 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:18:32 crc kubenswrapper[4867]: I0126 11:18:32.577443 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:18:32 crc kubenswrapper[4867]: I0126 11:18:32.577467 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:18:32Z","lastTransitionTime":"2026-01-26T11:18:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:18:32 crc kubenswrapper[4867]: I0126 11:18:32.681281 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:18:32 crc kubenswrapper[4867]: I0126 11:18:32.681355 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:18:32 crc kubenswrapper[4867]: I0126 11:18:32.681371 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:18:32 crc kubenswrapper[4867]: I0126 11:18:32.681398 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:18:32 crc kubenswrapper[4867]: I0126 11:18:32.681414 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:18:32Z","lastTransitionTime":"2026-01-26T11:18:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:18:32 crc kubenswrapper[4867]: I0126 11:18:32.784808 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:18:32 crc kubenswrapper[4867]: I0126 11:18:32.784879 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:18:32 crc kubenswrapper[4867]: I0126 11:18:32.784903 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:18:32 crc kubenswrapper[4867]: I0126 11:18:32.784933 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:18:32 crc kubenswrapper[4867]: I0126 11:18:32.785027 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:18:32Z","lastTransitionTime":"2026-01-26T11:18:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:18:32 crc kubenswrapper[4867]: I0126 11:18:32.888072 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:18:32 crc kubenswrapper[4867]: I0126 11:18:32.888141 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:18:32 crc kubenswrapper[4867]: I0126 11:18:32.888158 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:18:32 crc kubenswrapper[4867]: I0126 11:18:32.888182 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:18:32 crc kubenswrapper[4867]: I0126 11:18:32.888200 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:18:32Z","lastTransitionTime":"2026-01-26T11:18:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:18:32 crc kubenswrapper[4867]: I0126 11:18:32.990898 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:18:32 crc kubenswrapper[4867]: I0126 11:18:32.990976 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:18:32 crc kubenswrapper[4867]: I0126 11:18:32.991012 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:18:32 crc kubenswrapper[4867]: I0126 11:18:32.991036 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:18:32 crc kubenswrapper[4867]: I0126 11:18:32.991053 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:18:32Z","lastTransitionTime":"2026-01-26T11:18:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:18:33 crc kubenswrapper[4867]: I0126 11:18:33.040385 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-p8ngn_4a3be637-cf04-4c55-bf72-67fdad83cc44/ovnkube-controller/2.log" Jan 26 11:18:33 crc kubenswrapper[4867]: I0126 11:18:33.043384 4867 scope.go:117] "RemoveContainer" containerID="22b7f3763005c5c5c24358d0a42b53a287c54548e8ba4785affb4c90b9c000a2" Jan 26 11:18:33 crc kubenswrapper[4867]: E0126 11:18:33.043579 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-p8ngn_openshift-ovn-kubernetes(4a3be637-cf04-4c55-bf72-67fdad83cc44)\"" pod="openshift-ovn-kubernetes/ovnkube-node-p8ngn" podUID="4a3be637-cf04-4c55-bf72-67fdad83cc44" Jan 26 11:18:33 crc kubenswrapper[4867]: I0126 11:18:33.072181 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-p8ngn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4a3be637-cf04-4c55-bf72-67fdad83cc44\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:52Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:52Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c0a7bea27cb1d32235548b7f7e457dbecaf0ac88bdd9866302c758680990d4af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cn6f7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9025121ad416b7b6ff2f2eb7ecb3961eccbbae7b5ce4b394161ad99c5901199\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cn6f7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://adb68b135bd42f78556598af17aa21dc7daff1246e821ca51cb683e46974a85a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cn6f7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://30fe31f1d314dace6e6ee0bc31b2127ce6e6db198975c78505b0f6f07aa6f30c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cn6f7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://99df8b0b5fa7178489716f40f806caff969c5045ab573ae343b222ed80bf0799\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cn6f7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://192f11212d63cdabfda1649589946ca65e5c4a38805c85b94e4277f2e8c7bf57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cn6f7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://22b7f3763005c5c5c24358d0a42b53a287c54548e8ba4785affb4c90b9c000a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://22b7f3763005c5c5c24358d0a42b53a287c54548e8ba4785affb4c90b9c000a2\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T11:18:31Z\\\",\\\"message\\\":\\\"ersions/factory.go:117\\\\nI0126 11:18:31.446431 6551 reflector.go:311] Stopping 
reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0126 11:18:31.446674 6551 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0126 11:18:31.446691 6551 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0126 11:18:31.446703 6551 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0126 11:18:31.446732 6551 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0126 11:18:31.446742 6551 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0126 11:18:31.446764 6551 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0126 11:18:31.446764 6551 handler.go:208] Removed *v1.Node event handler 7\\\\nI0126 11:18:31.446772 6551 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0126 11:18:31.446779 6551 handler.go:208] Removed *v1.Node event handler 2\\\\nI0126 11:18:31.446791 6551 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0126 11:18:31.447312 6551 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0126 11:18:31.447355 6551 factory.go:656] Stopping watch factory\\\\nI0126 11:18:31.447369 6551 ovnkube.go:599] Stopped ovnkube\\\\nI0126 1\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T11:18:30Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-p8ngn_openshift-ovn-kubernetes(4a3be637-cf04-4c55-bf72-67fdad83cc44)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cn6f7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b58f62893fb0fc0bd355ef42fb5f15830912cfd3a03b5266cb3c3171dbb8bcb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cn6f7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://388ea9d9185ed0fed9cbb0da08ab9aa6fcf1d81192fe2646cd51b54245c4e34b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://388ea9d9185ed0fed9
cbb0da08ab9aa6fcf1d81192fe2646cd51b54245c4e34b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T11:17:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T11:17:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cn6f7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T11:17:52Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-p8ngn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:18:33Z is after 2025-08-24T17:21:41Z" Jan 26 11:18:33 crc kubenswrapper[4867]: I0126 11:18:33.088383 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:48Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:18:33Z is after 2025-08-24T17:21:41Z" Jan 26 11:18:33 crc kubenswrapper[4867]: I0126 11:18:33.093372 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:18:33 crc kubenswrapper[4867]: I0126 11:18:33.093419 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:18:33 crc kubenswrapper[4867]: I0126 11:18:33.093435 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:18:33 crc 
kubenswrapper[4867]: I0126 11:18:33.093456 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:18:33 crc kubenswrapper[4867]: I0126 11:18:33.093473 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:18:33Z","lastTransitionTime":"2026-01-26T11:18:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 11:18:33 crc kubenswrapper[4867]: I0126 11:18:33.107909 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:48Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:18:33Z is after 2025-08-24T17:21:41Z" Jan 26 11:18:33 crc kubenswrapper[4867]: I0126 11:18:33.124054 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-wmdmh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6ad862fa-4af9-49f7-a629-ebf54a83ca45\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://726513fd6aefd76b58a4c8cf08fa4588aadcdb02e8dbe3bb4f23004f11eb29dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6hvh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T11:17:51Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-wmdmh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:18:33Z is after 2025-08-24T17:21:41Z" Jan 26 11:18:33 crc kubenswrapper[4867]: I0126 11:18:33.145111 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0d7b1047-65d1-4695-b5a5-95ea6a8c3b54\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://237a29b411629c3650250f62fdec7d2c5412763d6cabb2ee0fe8e5e19e320e15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluste
r-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ef0db1149c70e41b7f44d00d43f883d14fcf635d6f34462dd37c8a550a15a4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://45d3dc673a8d20531ae5077a1196a368cac64f6afd15c4eb8f3910ee9bf51317\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kuber
netes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc2ddda5781094e59bb54ddf724fa4f948a047b17b01f0185ef651d90a66f36f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T11:17:30Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:18:33Z is after 2025-08-24T17:21:41Z" Jan 26 11:18:33 crc kubenswrapper[4867]: I0126 11:18:33.167991 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c572224f36b5a0fc8a86b3e61aa97b96f5bd189d245b31804a11aaa75d04774\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-26T11:18:33Z is after 2025-08-24T17:21:41Z" Jan 26 11:18:33 crc kubenswrapper[4867]: I0126 11:18:33.184764 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://613d61cfb62d18a99623d3e4f89763c04f15484b764f8bb0c7a5e4457a3505d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://fa8d05ed772e4273e8962df2230bc8d45db5cfed967165716438d58c54b9e24b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:18:33Z is after 2025-08-24T17:21:41Z" Jan 26 11:18:33 crc kubenswrapper[4867]: I0126 11:18:33.196675 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:18:33 crc kubenswrapper[4867]: I0126 11:18:33.196707 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:18:33 crc kubenswrapper[4867]: I0126 11:18:33.196716 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:18:33 crc kubenswrapper[4867]: I0126 11:18:33.196731 4867 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:18:33 crc kubenswrapper[4867]: I0126 11:18:33.196741 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:18:33Z","lastTransitionTime":"2026-01-26T11:18:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 11:18:33 crc kubenswrapper[4867]: I0126 11:18:33.202235 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-hn8xr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dc37e5d1-ba44-4a54-ac36-ab7cdef17212\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://519d5416896aca5923540078a8bd13f39a190ad4acda3d0bfb3375a5dbfe6b80\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"
imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xrjnj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\
\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T11:17:52Z\\\"}}\" for pod \"openshift-multus\"/\"multus-hn8xr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:18:33Z is after 2025-08-24T17:21:41Z" Jan 26 11:18:33 crc kubenswrapper[4867]: I0126 11:18:33.216909 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-nbvlt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf285485-1027-4bdc-bdfa-934ef32e7f5e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:18:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:18:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:18:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:18:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://764c348147bb67a611bc5252c49dfe8f586e6a1a6d6a9e9c6674aabcc3028804\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42
ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:18:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kzhnr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e1bb2e7344b3822e63a68d366f4821de6e131a4cca163ad67cf44e2b83f9ccd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:18:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kzhnr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T11:18:05Z\\\"}}\" for pod 
\"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-nbvlt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:18:33Z is after 2025-08-24T17:21:41Z" Jan 26 11:18:33 crc kubenswrapper[4867]: I0126 11:18:33.233832 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a4b0f2b9-fac8-442e-89a1-43ebff8d4268\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:18:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:18:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2140f6b328ef3b937ef0009c1ce35265a18b51c6efb7e8785870affecd68dace\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"st
ate\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://21a6c8852b9648bd5bee43aee9c3fae16363aeaf1ad05dcaa41a04775784b108\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ef4b66f160065f550844a298bf29dcd3f12879c1312554968eba3c1b2268303\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.1
68.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0afa38d4a8ffa664649d154240a8d74ee09bc074127e5edea85ec1de553723fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0afa38d4a8ffa664649d154240a8d74ee09bc074127e5edea85ec1de553723fe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T11:17:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T11:17:31Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T11:17:30Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:18:33Z is after 2025-08-24T17:21:41Z" Jan 26 11:18:33 crc kubenswrapper[4867]: I0126 11:18:33.253675 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e36e94ce-bdbb-4b65-b38a-d591d99ec132\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:18:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:18:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://60d611d38d0a8a4fafcb8c6a4598878531015a3c9a31bee179693c997be0f5ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e7960050fb97fd034b58d207bdcc7aa794be098102ebc9b50f3b011c2079a292\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8e9df6d5bbbaa36f6ffe9e36239da17b2a7d5d85edebc9f108ede9bf7b7c0a2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://08f8529b91df2d7dcbf1fac6018657124b3922dfd4ad68bd7cf315594d6e7f81\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4d1617eeb2f19df288b984f761116ae902263d7ccf8406998d0e323fb7d52390\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-26T11:17:50Z\\\"
,\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0126 11:17:44.107498 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0126 11:17:44.109064 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1825304579/tls.crt::/tmp/serving-cert-1825304579/tls.key\\\\\\\"\\\\nI0126 11:17:50.303460 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0126 11:17:50.308764 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0126 11:17:50.308805 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0126 11:17:50.308906 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0126 11:17:50.308924 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0126 11:17:50.322907 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0126 11:17:50.322946 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 11:17:50.322953 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 11:17:50.322957 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0126 11:17:50.322960 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0126 11:17:50.322963 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0126 11:17:50.322966 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0126 11:17:50.323261 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all 
endpoints registered and discovery information is complete\\\\nF0126 11:17:50.330430 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T11:17:33Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d41fab03911c1d3b84f4c34da0d1c8e82c8f2691450b5f9663e31ce329bf04b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:33Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b69d7e45a6059b1e01f80cc4efb536069c91cae05c8912d18fd48f25c6ebf51e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b69d7e45a6059b1e01f80cc4efb536069
c91cae05c8912d18fd48f25c6ebf51e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T11:17:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T11:17:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T11:17:30Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:18:33Z is after 2025-08-24T17:21:41Z" Jan 26 11:18:33 crc kubenswrapper[4867]: I0126 11:18:33.268967 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-nxkwj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"53ae46dc-30ef-4dfd-b80e-bacd7542634f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a6b647bab472836bbf6aebd01d20d186c5a3fb95f20cc9f44ec837d93c7df617\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b5h2x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T11:17:54Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-nxkwj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:18:33Z is after 2025-08-24T17:21:41Z" Jan 26 11:18:33 crc kubenswrapper[4867]: I0126 11:18:33.281323 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-nmdmx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed024510-edc6-4306-b54b-63facba64419\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:18:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:18:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:18:06Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:18:06Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lvcgj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lvcgj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T11:18:06Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-nmdmx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:18:33Z is after 2025-08-24T17:21:41Z" Jan 26 11:18:33 crc 
kubenswrapper[4867]: I0126 11:18:33.297937 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:48Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:18:33Z is after 2025-08-24T17:21:41Z" Jan 26 11:18:33 crc kubenswrapper[4867]: I0126 11:18:33.299029 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:18:33 crc kubenswrapper[4867]: I0126 11:18:33.299087 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:18:33 crc kubenswrapper[4867]: I0126 11:18:33.299100 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:18:33 crc kubenswrapper[4867]: I0126 11:18:33.299117 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:18:33 crc kubenswrapper[4867]: I0126 11:18:33.299129 4867 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:18:33Z","lastTransitionTime":"2026-01-26T11:18:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 11:18:33 crc kubenswrapper[4867]: I0126 11:18:33.313490 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:54Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:54Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8b0139861b48347f9916ca780499486eda856ef7c08cfe6c57201daf085bf61\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\
"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:18:33Z is after 2025-08-24T17:21:41Z" Jan 26 11:18:33 crc kubenswrapper[4867]: I0126 11:18:33.331881 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-9fjlf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d0cb57c7-fd32-41c2-b873-a3f017b9f1b1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:18:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:18:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:18:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c7012fb0651d46334c26887a02a5c44a8fc67c2ad3539e5321e16b57071b9a6\\\",\\\"image\\\":\\\"quay.io/openshift-rele
ase-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:18:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhrdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://49541a51fced16e579b4cd10875fe7e660d7674508aba3ea44011642241e6556\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://49541a51fced16e579b4cd10875fe7e660d7674508aba3ea44011642241e6556\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T11:17:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T11:17:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\
\":\\\"kube-api-access-rhrdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://de0898b43ff7857dd40569f0552c8802c9bb5ecdf3cbd23f552dfb402a02044c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://de0898b43ff7857dd40569f0552c8802c9bb5ecdf3cbd23f552dfb402a02044c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T11:17:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T11:17:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhrdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f299de50c4b1a26c4ed82f6f7b17c063e00b8a9de478fa0adafb4abb50195c5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\
\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f299de50c4b1a26c4ed82f6f7b17c063e00b8a9de478fa0adafb4abb50195c5f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T11:17:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T11:17:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhrdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5bf54cdee66861e0419af5e4cabd6e8410b771df717f2e958f5eda8d14c47862\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5bf54cdee66861e0419af5e4cabd6e8410b771df717f2e958f5eda8d14c47862\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T11:17:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T11:17:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run
/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhrdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d802b7e0dce47eace09ec80bf6299fbe57b588891ebab7f34de835f6a7314ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3d802b7e0dce47eace09ec80bf6299fbe57b588891ebab7f34de835f6a7314ac\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T11:17:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T11:17:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhrdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e77fa1081a2d8d26fe07f92196b85dd5b8eb28583ce5489e2f5cbcfc6fff8780\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{
\\\"containerID\\\":\\\"cri-o://e77fa1081a2d8d26fe07f92196b85dd5b8eb28583ce5489e2f5cbcfc6fff8780\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T11:18:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T11:18:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhrdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T11:17:52Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-9fjlf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:18:33Z is after 2025-08-24T17:21:41Z" Jan 26 11:18:33 crc kubenswrapper[4867]: I0126 11:18:33.357545 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4fbb2033-c5c9-48d1-971f-252b8982e64c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b260f2517962ffaecebf86c2b72c91486b825ca898f907378e8f8cea16d1db92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6843bd212ce4f17d2ae4d53441d56f7be28f8f49c8994458d611159101e193a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2aba77c7c6a2f5450fa60356c3eccf816726f0074d65a1d5805c4278d31157c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e941db67a8021f5eb4eb6e395c2369d052a4cde25d67eb1885768a064185d8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc9e90f1b0e6c09e4595a7e450325550012dc52c6a6fc4a95d14b37c0f939ce7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e22739a3c06bddfda493f25aca23cf47d71c8bde27a6b4bb505592b42350752\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0e22739a3c06bddfda493f25aca23cf47d71c8bde27a6b4bb505592b42350752\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-26T11:17:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T11:17:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://80b44caec3547caebca126742ca337ad1851edc3e9474127528a9e5184b5ec23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://80b44caec3547caebca126742ca337ad1851edc3e9474127528a9e5184b5ec23\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T11:17:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T11:17:32Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://83f7cf77acd26e7af095c4a5432efde85eef6dd5e434141cb5d6abd12770968e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://83f7cf77acd26e7af095c4a5432efde85eef6dd5e434141cb5d6abd12770968e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T11:17:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T11:17:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T11:17:30Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:18:33Z is after 2025-08-24T17:21:41Z" Jan 26 11:18:33 crc kubenswrapper[4867]: I0126 11:18:33.374891 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-g6cth" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"115cad9f-057f-4e63-b408-8fa7a358a191\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ff5997c5ecb7b6a4257e45de25c7b8f3cbe39cc5efe49cf2e2baff5447a947b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c4
2745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4wqzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a97a794991491a7b7ac8c0d20d0fa2f4f36c8ba882962b5c203933431d324105\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4wqzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T11:17:52Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-g6cth\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:18:33Z is after 2025-08-24T17:21:41Z" Jan 26 11:18:33 crc kubenswrapper[4867]: I0126 11:18:33.401675 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:18:33 crc kubenswrapper[4867]: I0126 11:18:33.401710 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:18:33 crc kubenswrapper[4867]: I0126 11:18:33.401720 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:18:33 crc kubenswrapper[4867]: I0126 11:18:33.401734 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:18:33 crc kubenswrapper[4867]: I0126 11:18:33.401745 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:18:33Z","lastTransitionTime":"2026-01-26T11:18:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:18:33 crc kubenswrapper[4867]: I0126 11:18:33.504086 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:18:33 crc kubenswrapper[4867]: I0126 11:18:33.504152 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:18:33 crc kubenswrapper[4867]: I0126 11:18:33.504168 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:18:33 crc kubenswrapper[4867]: I0126 11:18:33.504188 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:18:33 crc kubenswrapper[4867]: I0126 11:18:33.504204 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:18:33Z","lastTransitionTime":"2026-01-26T11:18:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:18:33 crc kubenswrapper[4867]: I0126 11:18:33.555527 4867 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-19 11:40:27.401386242 +0000 UTC Jan 26 11:18:33 crc kubenswrapper[4867]: I0126 11:18:33.607479 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:18:33 crc kubenswrapper[4867]: I0126 11:18:33.607520 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:18:33 crc kubenswrapper[4867]: I0126 11:18:33.607530 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:18:33 crc kubenswrapper[4867]: I0126 11:18:33.607547 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:18:33 crc kubenswrapper[4867]: I0126 11:18:33.607560 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:18:33Z","lastTransitionTime":"2026-01-26T11:18:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:18:33 crc kubenswrapper[4867]: I0126 11:18:33.710667 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:18:33 crc kubenswrapper[4867]: I0126 11:18:33.710731 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:18:33 crc kubenswrapper[4867]: I0126 11:18:33.710743 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:18:33 crc kubenswrapper[4867]: I0126 11:18:33.710762 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:18:33 crc kubenswrapper[4867]: I0126 11:18:33.710777 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:18:33Z","lastTransitionTime":"2026-01-26T11:18:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:18:33 crc kubenswrapper[4867]: I0126 11:18:33.815532 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:18:33 crc kubenswrapper[4867]: I0126 11:18:33.815594 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:18:33 crc kubenswrapper[4867]: I0126 11:18:33.815611 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:18:33 crc kubenswrapper[4867]: I0126 11:18:33.815635 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:18:33 crc kubenswrapper[4867]: I0126 11:18:33.815660 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:18:33Z","lastTransitionTime":"2026-01-26T11:18:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:18:33 crc kubenswrapper[4867]: I0126 11:18:33.921209 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:18:33 crc kubenswrapper[4867]: I0126 11:18:33.921602 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:18:33 crc kubenswrapper[4867]: I0126 11:18:33.921681 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:18:33 crc kubenswrapper[4867]: I0126 11:18:33.921856 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:18:33 crc kubenswrapper[4867]: I0126 11:18:33.921970 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:18:33Z","lastTransitionTime":"2026-01-26T11:18:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:18:34 crc kubenswrapper[4867]: I0126 11:18:34.025017 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:18:34 crc kubenswrapper[4867]: I0126 11:18:34.025063 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:18:34 crc kubenswrapper[4867]: I0126 11:18:34.025081 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:18:34 crc kubenswrapper[4867]: I0126 11:18:34.025110 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:18:34 crc kubenswrapper[4867]: I0126 11:18:34.025131 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:18:34Z","lastTransitionTime":"2026-01-26T11:18:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:18:34 crc kubenswrapper[4867]: I0126 11:18:34.128499 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:18:34 crc kubenswrapper[4867]: I0126 11:18:34.128575 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:18:34 crc kubenswrapper[4867]: I0126 11:18:34.128587 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:18:34 crc kubenswrapper[4867]: I0126 11:18:34.128609 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:18:34 crc kubenswrapper[4867]: I0126 11:18:34.128635 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:18:34Z","lastTransitionTime":"2026-01-26T11:18:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:18:34 crc kubenswrapper[4867]: I0126 11:18:34.231966 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:18:34 crc kubenswrapper[4867]: I0126 11:18:34.232028 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:18:34 crc kubenswrapper[4867]: I0126 11:18:34.232039 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:18:34 crc kubenswrapper[4867]: I0126 11:18:34.232057 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:18:34 crc kubenswrapper[4867]: I0126 11:18:34.232068 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:18:34Z","lastTransitionTime":"2026-01-26T11:18:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:18:34 crc kubenswrapper[4867]: I0126 11:18:34.334527 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:18:34 crc kubenswrapper[4867]: I0126 11:18:34.334592 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:18:34 crc kubenswrapper[4867]: I0126 11:18:34.334606 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:18:34 crc kubenswrapper[4867]: I0126 11:18:34.334630 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:18:34 crc kubenswrapper[4867]: I0126 11:18:34.334646 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:18:34Z","lastTransitionTime":"2026-01-26T11:18:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:18:34 crc kubenswrapper[4867]: I0126 11:18:34.438350 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:18:34 crc kubenswrapper[4867]: I0126 11:18:34.438423 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:18:34 crc kubenswrapper[4867]: I0126 11:18:34.438435 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:18:34 crc kubenswrapper[4867]: I0126 11:18:34.438452 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:18:34 crc kubenswrapper[4867]: I0126 11:18:34.438462 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:18:34Z","lastTransitionTime":"2026-01-26T11:18:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:18:34 crc kubenswrapper[4867]: I0126 11:18:34.541346 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:18:34 crc kubenswrapper[4867]: I0126 11:18:34.541427 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:18:34 crc kubenswrapper[4867]: I0126 11:18:34.541447 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:18:34 crc kubenswrapper[4867]: I0126 11:18:34.541475 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:18:34 crc kubenswrapper[4867]: I0126 11:18:34.541496 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:18:34Z","lastTransitionTime":"2026-01-26T11:18:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 11:18:34 crc kubenswrapper[4867]: I0126 11:18:34.556618 4867 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-13 21:33:20.877138792 +0000 UTC Jan 26 11:18:34 crc kubenswrapper[4867]: I0126 11:18:34.563346 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 11:18:34 crc kubenswrapper[4867]: I0126 11:18:34.563477 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 11:18:34 crc kubenswrapper[4867]: I0126 11:18:34.563484 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 11:18:34 crc kubenswrapper[4867]: I0126 11:18:34.563690 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-nmdmx" Jan 26 11:18:34 crc kubenswrapper[4867]: E0126 11:18:34.563757 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 11:18:34 crc kubenswrapper[4867]: E0126 11:18:34.563630 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 11:18:34 crc kubenswrapper[4867]: E0126 11:18:34.563864 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-nmdmx" podUID="ed024510-edc6-4306-b54b-63facba64419" Jan 26 11:18:34 crc kubenswrapper[4867]: E0126 11:18:34.563950 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 11:18:34 crc kubenswrapper[4867]: I0126 11:18:34.594722 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:18:34 crc kubenswrapper[4867]: I0126 11:18:34.594766 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:18:34 crc kubenswrapper[4867]: I0126 11:18:34.594783 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:18:34 crc kubenswrapper[4867]: I0126 11:18:34.594804 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:18:34 crc kubenswrapper[4867]: I0126 11:18:34.594822 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:18:34Z","lastTransitionTime":"2026-01-26T11:18:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:18:34 crc kubenswrapper[4867]: E0126 11:18:34.611316 4867 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404548Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865348Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T11:18:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T11:18:34Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T11:18:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T11:18:34Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T11:18:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T11:18:34Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T11:18:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T11:18:34Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b5a0e2ac-5e06-462a-99e0-d57b8e5cb754\\\",\\\"systemUUID\\\":\\\"db81a289-a49c-46ba-99b1-fd2eecfd5410\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:18:34Z is after 2025-08-24T17:21:41Z" Jan 26 11:18:34 crc kubenswrapper[4867]: I0126 11:18:34.614797 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:18:34 crc kubenswrapper[4867]: I0126 11:18:34.614839 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:18:34 crc kubenswrapper[4867]: I0126 11:18:34.614852 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:18:34 crc kubenswrapper[4867]: I0126 11:18:34.614871 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:18:34 crc kubenswrapper[4867]: I0126 11:18:34.614884 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:18:34Z","lastTransitionTime":"2026-01-26T11:18:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:18:34 crc kubenswrapper[4867]: E0126 11:18:34.630898 4867 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404548Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865348Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T11:18:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T11:18:34Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T11:18:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T11:18:34Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T11:18:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T11:18:34Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T11:18:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T11:18:34Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b5a0e2ac-5e06-462a-99e0-d57b8e5cb754\\\",\\\"systemUUID\\\":\\\"db81a289-a49c-46ba-99b1-fd2eecfd5410\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:18:34Z is after 2025-08-24T17:21:41Z" Jan 26 11:18:34 crc kubenswrapper[4867]: I0126 11:18:34.635540 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:18:34 crc kubenswrapper[4867]: I0126 11:18:34.635578 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:18:34 crc kubenswrapper[4867]: I0126 11:18:34.635592 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:18:34 crc kubenswrapper[4867]: I0126 11:18:34.635614 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:18:34 crc kubenswrapper[4867]: I0126 11:18:34.635626 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:18:34Z","lastTransitionTime":"2026-01-26T11:18:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:18:34 crc kubenswrapper[4867]: E0126 11:18:34.653965 4867 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404548Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865348Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T11:18:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T11:18:34Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T11:18:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T11:18:34Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T11:18:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T11:18:34Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T11:18:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T11:18:34Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b5a0e2ac-5e06-462a-99e0-d57b8e5cb754\\\",\\\"systemUUID\\\":\\\"db81a289-a49c-46ba-99b1-fd2eecfd5410\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:18:34Z is after 2025-08-24T17:21:41Z" Jan 26 11:18:34 crc kubenswrapper[4867]: I0126 11:18:34.659332 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:18:34 crc kubenswrapper[4867]: I0126 11:18:34.659364 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:18:34 crc kubenswrapper[4867]: I0126 11:18:34.659374 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:18:34 crc kubenswrapper[4867]: I0126 11:18:34.659422 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:18:34 crc kubenswrapper[4867]: I0126 11:18:34.659435 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:18:34Z","lastTransitionTime":"2026-01-26T11:18:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:18:34 crc kubenswrapper[4867]: E0126 11:18:34.675776 4867 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404548Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865348Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T11:18:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T11:18:34Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T11:18:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T11:18:34Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T11:18:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T11:18:34Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T11:18:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T11:18:34Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b5a0e2ac-5e06-462a-99e0-d57b8e5cb754\\\",\\\"systemUUID\\\":\\\"db81a289-a49c-46ba-99b1-fd2eecfd5410\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:18:34Z is after 2025-08-24T17:21:41Z" Jan 26 11:18:34 crc kubenswrapper[4867]: I0126 11:18:34.679766 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:18:34 crc kubenswrapper[4867]: I0126 11:18:34.679795 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:18:34 crc kubenswrapper[4867]: I0126 11:18:34.679806 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:18:34 crc kubenswrapper[4867]: I0126 11:18:34.679819 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:18:34 crc kubenswrapper[4867]: I0126 11:18:34.679830 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:18:34Z","lastTransitionTime":"2026-01-26T11:18:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:18:34 crc kubenswrapper[4867]: E0126 11:18:34.692877 4867 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404548Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865348Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T11:18:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T11:18:34Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T11:18:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T11:18:34Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T11:18:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T11:18:34Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T11:18:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T11:18:34Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b5a0e2ac-5e06-462a-99e0-d57b8e5cb754\\\",\\\"systemUUID\\\":\\\"db81a289-a49c-46ba-99b1-fd2eecfd5410\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:18:34Z is after 2025-08-24T17:21:41Z" Jan 26 11:18:34 crc kubenswrapper[4867]: E0126 11:18:34.692998 4867 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 26 11:18:34 crc kubenswrapper[4867]: I0126 11:18:34.695468 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:18:34 crc kubenswrapper[4867]: I0126 11:18:34.695561 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:18:34 crc kubenswrapper[4867]: I0126 11:18:34.695587 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:18:34 crc kubenswrapper[4867]: I0126 11:18:34.695619 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:18:34 crc kubenswrapper[4867]: I0126 11:18:34.695640 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:18:34Z","lastTransitionTime":"2026-01-26T11:18:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:18:34 crc kubenswrapper[4867]: I0126 11:18:34.798641 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:18:34 crc kubenswrapper[4867]: I0126 11:18:34.798700 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:18:34 crc kubenswrapper[4867]: I0126 11:18:34.798716 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:18:34 crc kubenswrapper[4867]: I0126 11:18:34.798740 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:18:34 crc kubenswrapper[4867]: I0126 11:18:34.798758 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:18:34Z","lastTransitionTime":"2026-01-26T11:18:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:18:34 crc kubenswrapper[4867]: I0126 11:18:34.902069 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:18:34 crc kubenswrapper[4867]: I0126 11:18:34.902114 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:18:34 crc kubenswrapper[4867]: I0126 11:18:34.902126 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:18:34 crc kubenswrapper[4867]: I0126 11:18:34.902142 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:18:34 crc kubenswrapper[4867]: I0126 11:18:34.902155 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:18:34Z","lastTransitionTime":"2026-01-26T11:18:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:18:35 crc kubenswrapper[4867]: I0126 11:18:35.005692 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:18:35 crc kubenswrapper[4867]: I0126 11:18:35.005762 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:18:35 crc kubenswrapper[4867]: I0126 11:18:35.005780 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:18:35 crc kubenswrapper[4867]: I0126 11:18:35.005811 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:18:35 crc kubenswrapper[4867]: I0126 11:18:35.005829 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:18:35Z","lastTransitionTime":"2026-01-26T11:18:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:18:35 crc kubenswrapper[4867]: I0126 11:18:35.109257 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:18:35 crc kubenswrapper[4867]: I0126 11:18:35.109299 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:18:35 crc kubenswrapper[4867]: I0126 11:18:35.109309 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:18:35 crc kubenswrapper[4867]: I0126 11:18:35.109326 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:18:35 crc kubenswrapper[4867]: I0126 11:18:35.109339 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:18:35Z","lastTransitionTime":"2026-01-26T11:18:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:18:35 crc kubenswrapper[4867]: I0126 11:18:35.213300 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:18:35 crc kubenswrapper[4867]: I0126 11:18:35.213398 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:18:35 crc kubenswrapper[4867]: I0126 11:18:35.213410 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:18:35 crc kubenswrapper[4867]: I0126 11:18:35.213434 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:18:35 crc kubenswrapper[4867]: I0126 11:18:35.213448 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:18:35Z","lastTransitionTime":"2026-01-26T11:18:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:18:35 crc kubenswrapper[4867]: I0126 11:18:35.316417 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:18:35 crc kubenswrapper[4867]: I0126 11:18:35.316465 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:18:35 crc kubenswrapper[4867]: I0126 11:18:35.316476 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:18:35 crc kubenswrapper[4867]: I0126 11:18:35.316497 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:18:35 crc kubenswrapper[4867]: I0126 11:18:35.316510 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:18:35Z","lastTransitionTime":"2026-01-26T11:18:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:18:35 crc kubenswrapper[4867]: I0126 11:18:35.419961 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:18:35 crc kubenswrapper[4867]: I0126 11:18:35.420013 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:18:35 crc kubenswrapper[4867]: I0126 11:18:35.420025 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:18:35 crc kubenswrapper[4867]: I0126 11:18:35.420045 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:18:35 crc kubenswrapper[4867]: I0126 11:18:35.420058 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:18:35Z","lastTransitionTime":"2026-01-26T11:18:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:18:35 crc kubenswrapper[4867]: I0126 11:18:35.523431 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:18:35 crc kubenswrapper[4867]: I0126 11:18:35.523481 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:18:35 crc kubenswrapper[4867]: I0126 11:18:35.523491 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:18:35 crc kubenswrapper[4867]: I0126 11:18:35.523509 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:18:35 crc kubenswrapper[4867]: I0126 11:18:35.523519 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:18:35Z","lastTransitionTime":"2026-01-26T11:18:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:18:35 crc kubenswrapper[4867]: I0126 11:18:35.557831 4867 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-17 08:54:59.034838293 +0000 UTC Jan 26 11:18:35 crc kubenswrapper[4867]: I0126 11:18:35.626427 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:18:35 crc kubenswrapper[4867]: I0126 11:18:35.626497 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:18:35 crc kubenswrapper[4867]: I0126 11:18:35.626509 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:18:35 crc kubenswrapper[4867]: I0126 11:18:35.626529 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:18:35 crc kubenswrapper[4867]: I0126 11:18:35.626541 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:18:35Z","lastTransitionTime":"2026-01-26T11:18:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:18:35 crc kubenswrapper[4867]: I0126 11:18:35.729641 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:18:35 crc kubenswrapper[4867]: I0126 11:18:35.729720 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:18:35 crc kubenswrapper[4867]: I0126 11:18:35.729738 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:18:35 crc kubenswrapper[4867]: I0126 11:18:35.729789 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:18:35 crc kubenswrapper[4867]: I0126 11:18:35.729810 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:18:35Z","lastTransitionTime":"2026-01-26T11:18:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:18:35 crc kubenswrapper[4867]: I0126 11:18:35.832203 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:18:35 crc kubenswrapper[4867]: I0126 11:18:35.832259 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:18:35 crc kubenswrapper[4867]: I0126 11:18:35.832270 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:18:35 crc kubenswrapper[4867]: I0126 11:18:35.832287 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:18:35 crc kubenswrapper[4867]: I0126 11:18:35.832298 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:18:35Z","lastTransitionTime":"2026-01-26T11:18:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:18:35 crc kubenswrapper[4867]: I0126 11:18:35.934295 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:18:35 crc kubenswrapper[4867]: I0126 11:18:35.934360 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:18:35 crc kubenswrapper[4867]: I0126 11:18:35.934371 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:18:35 crc kubenswrapper[4867]: I0126 11:18:35.934387 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:18:35 crc kubenswrapper[4867]: I0126 11:18:35.934401 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:18:35Z","lastTransitionTime":"2026-01-26T11:18:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:18:36 crc kubenswrapper[4867]: I0126 11:18:36.037365 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:18:36 crc kubenswrapper[4867]: I0126 11:18:36.037413 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:18:36 crc kubenswrapper[4867]: I0126 11:18:36.037426 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:18:36 crc kubenswrapper[4867]: I0126 11:18:36.037447 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:18:36 crc kubenswrapper[4867]: I0126 11:18:36.037462 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:18:36Z","lastTransitionTime":"2026-01-26T11:18:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:18:36 crc kubenswrapper[4867]: I0126 11:18:36.139770 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:18:36 crc kubenswrapper[4867]: I0126 11:18:36.139850 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:18:36 crc kubenswrapper[4867]: I0126 11:18:36.139861 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:18:36 crc kubenswrapper[4867]: I0126 11:18:36.139883 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:18:36 crc kubenswrapper[4867]: I0126 11:18:36.139896 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:18:36Z","lastTransitionTime":"2026-01-26T11:18:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:18:36 crc kubenswrapper[4867]: I0126 11:18:36.242685 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:18:36 crc kubenswrapper[4867]: I0126 11:18:36.242745 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:18:36 crc kubenswrapper[4867]: I0126 11:18:36.242770 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:18:36 crc kubenswrapper[4867]: I0126 11:18:36.242791 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:18:36 crc kubenswrapper[4867]: I0126 11:18:36.242802 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:18:36Z","lastTransitionTime":"2026-01-26T11:18:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:18:36 crc kubenswrapper[4867]: I0126 11:18:36.345890 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:18:36 crc kubenswrapper[4867]: I0126 11:18:36.345932 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:18:36 crc kubenswrapper[4867]: I0126 11:18:36.345941 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:18:36 crc kubenswrapper[4867]: I0126 11:18:36.345958 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:18:36 crc kubenswrapper[4867]: I0126 11:18:36.345969 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:18:36Z","lastTransitionTime":"2026-01-26T11:18:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:18:36 crc kubenswrapper[4867]: I0126 11:18:36.449505 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:18:36 crc kubenswrapper[4867]: I0126 11:18:36.449555 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:18:36 crc kubenswrapper[4867]: I0126 11:18:36.449566 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:18:36 crc kubenswrapper[4867]: I0126 11:18:36.449587 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:18:36 crc kubenswrapper[4867]: I0126 11:18:36.449598 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:18:36Z","lastTransitionTime":"2026-01-26T11:18:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:18:36 crc kubenswrapper[4867]: I0126 11:18:36.552888 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:18:36 crc kubenswrapper[4867]: I0126 11:18:36.552954 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:18:36 crc kubenswrapper[4867]: I0126 11:18:36.552968 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:18:36 crc kubenswrapper[4867]: I0126 11:18:36.552990 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:18:36 crc kubenswrapper[4867]: I0126 11:18:36.553002 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:18:36Z","lastTransitionTime":"2026-01-26T11:18:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 11:18:36 crc kubenswrapper[4867]: I0126 11:18:36.558309 4867 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-26 21:48:00.15643476 +0000 UTC Jan 26 11:18:36 crc kubenswrapper[4867]: I0126 11:18:36.562789 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-nmdmx" Jan 26 11:18:36 crc kubenswrapper[4867]: I0126 11:18:36.562833 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 11:18:36 crc kubenswrapper[4867]: I0126 11:18:36.562789 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 11:18:36 crc kubenswrapper[4867]: E0126 11:18:36.562936 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-nmdmx" podUID="ed024510-edc6-4306-b54b-63facba64419" Jan 26 11:18:36 crc kubenswrapper[4867]: E0126 11:18:36.563002 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 11:18:36 crc kubenswrapper[4867]: E0126 11:18:36.563055 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 11:18:36 crc kubenswrapper[4867]: I0126 11:18:36.563239 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 11:18:36 crc kubenswrapper[4867]: E0126 11:18:36.563308 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 11:18:36 crc kubenswrapper[4867]: I0126 11:18:36.655801 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:18:36 crc kubenswrapper[4867]: I0126 11:18:36.655871 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:18:36 crc kubenswrapper[4867]: I0126 11:18:36.655888 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:18:36 crc kubenswrapper[4867]: I0126 11:18:36.655912 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:18:36 crc kubenswrapper[4867]: I0126 11:18:36.655928 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:18:36Z","lastTransitionTime":"2026-01-26T11:18:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:18:36 crc kubenswrapper[4867]: I0126 11:18:36.760157 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:18:36 crc kubenswrapper[4867]: I0126 11:18:36.760202 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:18:36 crc kubenswrapper[4867]: I0126 11:18:36.760236 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:18:36 crc kubenswrapper[4867]: I0126 11:18:36.760258 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:18:36 crc kubenswrapper[4867]: I0126 11:18:36.760271 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:18:36Z","lastTransitionTime":"2026-01-26T11:18:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:18:36 crc kubenswrapper[4867]: I0126 11:18:36.863986 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:18:36 crc kubenswrapper[4867]: I0126 11:18:36.864046 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:18:36 crc kubenswrapper[4867]: I0126 11:18:36.864061 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:18:36 crc kubenswrapper[4867]: I0126 11:18:36.864085 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:18:36 crc kubenswrapper[4867]: I0126 11:18:36.864102 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:18:36Z","lastTransitionTime":"2026-01-26T11:18:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:18:36 crc kubenswrapper[4867]: I0126 11:18:36.966946 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:18:36 crc kubenswrapper[4867]: I0126 11:18:36.966993 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:18:36 crc kubenswrapper[4867]: I0126 11:18:36.967004 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:18:36 crc kubenswrapper[4867]: I0126 11:18:36.967022 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:18:36 crc kubenswrapper[4867]: I0126 11:18:36.967035 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:18:36Z","lastTransitionTime":"2026-01-26T11:18:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:18:37 crc kubenswrapper[4867]: I0126 11:18:37.070092 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:18:37 crc kubenswrapper[4867]: I0126 11:18:37.070158 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:18:37 crc kubenswrapper[4867]: I0126 11:18:37.070178 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:18:37 crc kubenswrapper[4867]: I0126 11:18:37.070208 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:18:37 crc kubenswrapper[4867]: I0126 11:18:37.070254 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:18:37Z","lastTransitionTime":"2026-01-26T11:18:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:18:37 crc kubenswrapper[4867]: I0126 11:18:37.173321 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:18:37 crc kubenswrapper[4867]: I0126 11:18:37.173375 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:18:37 crc kubenswrapper[4867]: I0126 11:18:37.173391 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:18:37 crc kubenswrapper[4867]: I0126 11:18:37.173469 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:18:37 crc kubenswrapper[4867]: I0126 11:18:37.173486 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:18:37Z","lastTransitionTime":"2026-01-26T11:18:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:18:37 crc kubenswrapper[4867]: I0126 11:18:37.276162 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:18:37 crc kubenswrapper[4867]: I0126 11:18:37.276417 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:18:37 crc kubenswrapper[4867]: I0126 11:18:37.276425 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:18:37 crc kubenswrapper[4867]: I0126 11:18:37.276444 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:18:37 crc kubenswrapper[4867]: I0126 11:18:37.276452 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:18:37Z","lastTransitionTime":"2026-01-26T11:18:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:18:37 crc kubenswrapper[4867]: I0126 11:18:37.379095 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:18:37 crc kubenswrapper[4867]: I0126 11:18:37.379163 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:18:37 crc kubenswrapper[4867]: I0126 11:18:37.379179 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:18:37 crc kubenswrapper[4867]: I0126 11:18:37.379203 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:18:37 crc kubenswrapper[4867]: I0126 11:18:37.379233 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:18:37Z","lastTransitionTime":"2026-01-26T11:18:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:18:37 crc kubenswrapper[4867]: I0126 11:18:37.482071 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:18:37 crc kubenswrapper[4867]: I0126 11:18:37.482137 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:18:37 crc kubenswrapper[4867]: I0126 11:18:37.482148 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:18:37 crc kubenswrapper[4867]: I0126 11:18:37.482171 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:18:37 crc kubenswrapper[4867]: I0126 11:18:37.482184 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:18:37Z","lastTransitionTime":"2026-01-26T11:18:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:18:37 crc kubenswrapper[4867]: I0126 11:18:37.559295 4867 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-24 09:43:12.686404953 +0000 UTC Jan 26 11:18:37 crc kubenswrapper[4867]: I0126 11:18:37.585245 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:18:37 crc kubenswrapper[4867]: I0126 11:18:37.585288 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:18:37 crc kubenswrapper[4867]: I0126 11:18:37.585299 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:18:37 crc kubenswrapper[4867]: I0126 11:18:37.585320 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:18:37 crc kubenswrapper[4867]: I0126 11:18:37.585332 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:18:37Z","lastTransitionTime":"2026-01-26T11:18:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:18:37 crc kubenswrapper[4867]: I0126 11:18:37.687937 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:18:37 crc kubenswrapper[4867]: I0126 11:18:37.687995 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:18:37 crc kubenswrapper[4867]: I0126 11:18:37.688008 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:18:37 crc kubenswrapper[4867]: I0126 11:18:37.688028 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:18:37 crc kubenswrapper[4867]: I0126 11:18:37.688041 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:18:37Z","lastTransitionTime":"2026-01-26T11:18:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:18:37 crc kubenswrapper[4867]: I0126 11:18:37.791449 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:18:37 crc kubenswrapper[4867]: I0126 11:18:37.791525 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:18:37 crc kubenswrapper[4867]: I0126 11:18:37.791543 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:18:37 crc kubenswrapper[4867]: I0126 11:18:37.791571 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:18:37 crc kubenswrapper[4867]: I0126 11:18:37.791609 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:18:37Z","lastTransitionTime":"2026-01-26T11:18:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:18:37 crc kubenswrapper[4867]: I0126 11:18:37.894294 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:18:37 crc kubenswrapper[4867]: I0126 11:18:37.894345 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:18:37 crc kubenswrapper[4867]: I0126 11:18:37.894356 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:18:37 crc kubenswrapper[4867]: I0126 11:18:37.894377 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:18:37 crc kubenswrapper[4867]: I0126 11:18:37.894393 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:18:37Z","lastTransitionTime":"2026-01-26T11:18:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:18:37 crc kubenswrapper[4867]: I0126 11:18:37.997120 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:18:37 crc kubenswrapper[4867]: I0126 11:18:37.997172 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:18:37 crc kubenswrapper[4867]: I0126 11:18:37.997186 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:18:37 crc kubenswrapper[4867]: I0126 11:18:37.997208 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:18:37 crc kubenswrapper[4867]: I0126 11:18:37.997240 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:18:37Z","lastTransitionTime":"2026-01-26T11:18:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:18:38 crc kubenswrapper[4867]: I0126 11:18:38.100920 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:18:38 crc kubenswrapper[4867]: I0126 11:18:38.100976 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:18:38 crc kubenswrapper[4867]: I0126 11:18:38.100992 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:18:38 crc kubenswrapper[4867]: I0126 11:18:38.101013 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:18:38 crc kubenswrapper[4867]: I0126 11:18:38.101027 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:18:38Z","lastTransitionTime":"2026-01-26T11:18:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:18:38 crc kubenswrapper[4867]: I0126 11:18:38.204198 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:18:38 crc kubenswrapper[4867]: I0126 11:18:38.204268 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:18:38 crc kubenswrapper[4867]: I0126 11:18:38.204278 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:18:38 crc kubenswrapper[4867]: I0126 11:18:38.204299 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:18:38 crc kubenswrapper[4867]: I0126 11:18:38.204318 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:18:38Z","lastTransitionTime":"2026-01-26T11:18:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:18:38 crc kubenswrapper[4867]: I0126 11:18:38.307255 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:18:38 crc kubenswrapper[4867]: I0126 11:18:38.307294 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:18:38 crc kubenswrapper[4867]: I0126 11:18:38.307302 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:18:38 crc kubenswrapper[4867]: I0126 11:18:38.307318 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:18:38 crc kubenswrapper[4867]: I0126 11:18:38.307331 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:18:38Z","lastTransitionTime":"2026-01-26T11:18:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:18:38 crc kubenswrapper[4867]: I0126 11:18:38.410129 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:18:38 crc kubenswrapper[4867]: I0126 11:18:38.410175 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:18:38 crc kubenswrapper[4867]: I0126 11:18:38.410186 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:18:38 crc kubenswrapper[4867]: I0126 11:18:38.410201 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:18:38 crc kubenswrapper[4867]: I0126 11:18:38.410213 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:18:38Z","lastTransitionTime":"2026-01-26T11:18:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:18:38 crc kubenswrapper[4867]: I0126 11:18:38.513349 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:18:38 crc kubenswrapper[4867]: I0126 11:18:38.513390 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:18:38 crc kubenswrapper[4867]: I0126 11:18:38.513399 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:18:38 crc kubenswrapper[4867]: I0126 11:18:38.513413 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:18:38 crc kubenswrapper[4867]: I0126 11:18:38.513424 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:18:38Z","lastTransitionTime":"2026-01-26T11:18:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 11:18:38 crc kubenswrapper[4867]: I0126 11:18:38.560202 4867 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-16 03:30:57.213489458 +0000 UTC Jan 26 11:18:38 crc kubenswrapper[4867]: I0126 11:18:38.563629 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 11:18:38 crc kubenswrapper[4867]: I0126 11:18:38.563676 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 11:18:38 crc kubenswrapper[4867]: I0126 11:18:38.563629 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-nmdmx" Jan 26 11:18:38 crc kubenswrapper[4867]: E0126 11:18:38.563793 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 11:18:38 crc kubenswrapper[4867]: I0126 11:18:38.563823 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 11:18:38 crc kubenswrapper[4867]: E0126 11:18:38.563992 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 11:18:38 crc kubenswrapper[4867]: E0126 11:18:38.564154 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 11:18:38 crc kubenswrapper[4867]: E0126 11:18:38.564212 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-nmdmx" podUID="ed024510-edc6-4306-b54b-63facba64419" Jan 26 11:18:38 crc kubenswrapper[4867]: I0126 11:18:38.616889 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:18:38 crc kubenswrapper[4867]: I0126 11:18:38.616942 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:18:38 crc kubenswrapper[4867]: I0126 11:18:38.616951 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:18:38 crc kubenswrapper[4867]: I0126 11:18:38.616968 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:18:38 crc kubenswrapper[4867]: I0126 11:18:38.616978 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:18:38Z","lastTransitionTime":"2026-01-26T11:18:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:18:38 crc kubenswrapper[4867]: I0126 11:18:38.720460 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:18:38 crc kubenswrapper[4867]: I0126 11:18:38.720532 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:18:38 crc kubenswrapper[4867]: I0126 11:18:38.720545 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:18:38 crc kubenswrapper[4867]: I0126 11:18:38.720566 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:18:38 crc kubenswrapper[4867]: I0126 11:18:38.720579 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:18:38Z","lastTransitionTime":"2026-01-26T11:18:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:18:38 crc kubenswrapper[4867]: I0126 11:18:38.762408 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ed024510-edc6-4306-b54b-63facba64419-metrics-certs\") pod \"network-metrics-daemon-nmdmx\" (UID: \"ed024510-edc6-4306-b54b-63facba64419\") " pod="openshift-multus/network-metrics-daemon-nmdmx" Jan 26 11:18:38 crc kubenswrapper[4867]: E0126 11:18:38.762648 4867 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 26 11:18:38 crc kubenswrapper[4867]: E0126 11:18:38.762823 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ed024510-edc6-4306-b54b-63facba64419-metrics-certs podName:ed024510-edc6-4306-b54b-63facba64419 nodeName:}" failed. No retries permitted until 2026-01-26 11:19:10.762793525 +0000 UTC m=+100.461368435 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/ed024510-edc6-4306-b54b-63facba64419-metrics-certs") pod "network-metrics-daemon-nmdmx" (UID: "ed024510-edc6-4306-b54b-63facba64419") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 26 11:18:38 crc kubenswrapper[4867]: I0126 11:18:38.823477 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:18:38 crc kubenswrapper[4867]: I0126 11:18:38.823527 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:18:38 crc kubenswrapper[4867]: I0126 11:18:38.823543 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:18:38 crc kubenswrapper[4867]: I0126 11:18:38.823568 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:18:38 crc kubenswrapper[4867]: I0126 11:18:38.823581 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:18:38Z","lastTransitionTime":"2026-01-26T11:18:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:18:38 crc kubenswrapper[4867]: I0126 11:18:38.926436 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:18:38 crc kubenswrapper[4867]: I0126 11:18:38.926510 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:18:38 crc kubenswrapper[4867]: I0126 11:18:38.926519 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:18:38 crc kubenswrapper[4867]: I0126 11:18:38.926539 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:18:38 crc kubenswrapper[4867]: I0126 11:18:38.926550 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:18:38Z","lastTransitionTime":"2026-01-26T11:18:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:18:39 crc kubenswrapper[4867]: I0126 11:18:39.028494 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:18:39 crc kubenswrapper[4867]: I0126 11:18:39.028539 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:18:39 crc kubenswrapper[4867]: I0126 11:18:39.028550 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:18:39 crc kubenswrapper[4867]: I0126 11:18:39.028570 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:18:39 crc kubenswrapper[4867]: I0126 11:18:39.028581 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:18:39Z","lastTransitionTime":"2026-01-26T11:18:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:18:39 crc kubenswrapper[4867]: I0126 11:18:39.132121 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:18:39 crc kubenswrapper[4867]: I0126 11:18:39.132527 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:18:39 crc kubenswrapper[4867]: I0126 11:18:39.132724 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:18:39 crc kubenswrapper[4867]: I0126 11:18:39.132837 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:18:39 crc kubenswrapper[4867]: I0126 11:18:39.132937 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:18:39Z","lastTransitionTime":"2026-01-26T11:18:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:18:39 crc kubenswrapper[4867]: I0126 11:18:39.235804 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:18:39 crc kubenswrapper[4867]: I0126 11:18:39.235865 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:18:39 crc kubenswrapper[4867]: I0126 11:18:39.235886 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:18:39 crc kubenswrapper[4867]: I0126 11:18:39.235908 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:18:39 crc kubenswrapper[4867]: I0126 11:18:39.235923 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:18:39Z","lastTransitionTime":"2026-01-26T11:18:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:18:39 crc kubenswrapper[4867]: I0126 11:18:39.339397 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:18:39 crc kubenswrapper[4867]: I0126 11:18:39.339437 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:18:39 crc kubenswrapper[4867]: I0126 11:18:39.339449 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:18:39 crc kubenswrapper[4867]: I0126 11:18:39.339467 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:18:39 crc kubenswrapper[4867]: I0126 11:18:39.339479 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:18:39Z","lastTransitionTime":"2026-01-26T11:18:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:18:39 crc kubenswrapper[4867]: I0126 11:18:39.447844 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:18:39 crc kubenswrapper[4867]: I0126 11:18:39.447898 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:18:39 crc kubenswrapper[4867]: I0126 11:18:39.447911 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:18:39 crc kubenswrapper[4867]: I0126 11:18:39.447931 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:18:39 crc kubenswrapper[4867]: I0126 11:18:39.447944 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:18:39Z","lastTransitionTime":"2026-01-26T11:18:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:18:39 crc kubenswrapper[4867]: I0126 11:18:39.551610 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:18:39 crc kubenswrapper[4867]: I0126 11:18:39.551682 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:18:39 crc kubenswrapper[4867]: I0126 11:18:39.551702 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:18:39 crc kubenswrapper[4867]: I0126 11:18:39.551732 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:18:39 crc kubenswrapper[4867]: I0126 11:18:39.551749 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:18:39Z","lastTransitionTime":"2026-01-26T11:18:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:18:39 crc kubenswrapper[4867]: I0126 11:18:39.560444 4867 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-15 03:06:27.015998923 +0000 UTC Jan 26 11:18:39 crc kubenswrapper[4867]: I0126 11:18:39.654034 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:18:39 crc kubenswrapper[4867]: I0126 11:18:39.654089 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:18:39 crc kubenswrapper[4867]: I0126 11:18:39.654099 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:18:39 crc kubenswrapper[4867]: I0126 11:18:39.654116 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:18:39 crc kubenswrapper[4867]: I0126 11:18:39.654125 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:18:39Z","lastTransitionTime":"2026-01-26T11:18:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:18:39 crc kubenswrapper[4867]: I0126 11:18:39.756929 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:18:39 crc kubenswrapper[4867]: I0126 11:18:39.756976 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:18:39 crc kubenswrapper[4867]: I0126 11:18:39.756987 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:18:39 crc kubenswrapper[4867]: I0126 11:18:39.757002 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:18:39 crc kubenswrapper[4867]: I0126 11:18:39.757017 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:18:39Z","lastTransitionTime":"2026-01-26T11:18:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:18:39 crc kubenswrapper[4867]: I0126 11:18:39.859765 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:18:39 crc kubenswrapper[4867]: I0126 11:18:39.859810 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:18:39 crc kubenswrapper[4867]: I0126 11:18:39.859821 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:18:39 crc kubenswrapper[4867]: I0126 11:18:39.859839 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:18:39 crc kubenswrapper[4867]: I0126 11:18:39.859854 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:18:39Z","lastTransitionTime":"2026-01-26T11:18:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:18:39 crc kubenswrapper[4867]: I0126 11:18:39.962882 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:18:39 crc kubenswrapper[4867]: I0126 11:18:39.962922 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:18:39 crc kubenswrapper[4867]: I0126 11:18:39.962934 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:18:39 crc kubenswrapper[4867]: I0126 11:18:39.962949 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:18:39 crc kubenswrapper[4867]: I0126 11:18:39.962961 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:18:39Z","lastTransitionTime":"2026-01-26T11:18:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:18:40 crc kubenswrapper[4867]: I0126 11:18:40.065430 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:18:40 crc kubenswrapper[4867]: I0126 11:18:40.065476 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:18:40 crc kubenswrapper[4867]: I0126 11:18:40.065487 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:18:40 crc kubenswrapper[4867]: I0126 11:18:40.065512 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:18:40 crc kubenswrapper[4867]: I0126 11:18:40.065525 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:18:40Z","lastTransitionTime":"2026-01-26T11:18:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:18:40 crc kubenswrapper[4867]: I0126 11:18:40.067565 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-hn8xr_dc37e5d1-ba44-4a54-ac36-ab7cdef17212/kube-multus/0.log" Jan 26 11:18:40 crc kubenswrapper[4867]: I0126 11:18:40.067621 4867 generic.go:334] "Generic (PLEG): container finished" podID="dc37e5d1-ba44-4a54-ac36-ab7cdef17212" containerID="519d5416896aca5923540078a8bd13f39a190ad4acda3d0bfb3375a5dbfe6b80" exitCode=1 Jan 26 11:18:40 crc kubenswrapper[4867]: I0126 11:18:40.067653 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-hn8xr" event={"ID":"dc37e5d1-ba44-4a54-ac36-ab7cdef17212","Type":"ContainerDied","Data":"519d5416896aca5923540078a8bd13f39a190ad4acda3d0bfb3375a5dbfe6b80"} Jan 26 11:18:40 crc kubenswrapper[4867]: I0126 11:18:40.068042 4867 scope.go:117] "RemoveContainer" containerID="519d5416896aca5923540078a8bd13f39a190ad4acda3d0bfb3375a5dbfe6b80" Jan 26 11:18:40 crc kubenswrapper[4867]: I0126 11:18:40.098978 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4fbb2033-c5c9-48d1-971f-252b8982e64c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b260f2517962ffaecebf86c2b72c91486b825ca898f907378e8f8cea16d1db92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6843bd212ce4f17d2ae4d53441d56f7be28f8f49c8994458d611159101e193a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2aba77c7c6a2f5450fa60356c3eccf816726f0074d65a1d5805c4278d31157c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e941db67a8021f5eb4eb6e395c2369d052a4cde25d67eb1885768a064185d8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc9e90f1b0e6c09e4595a7e450325550012dc52c6a6fc4a95d14b37c0f939ce7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e22739a3c06bddfda493f25aca23cf47d71c8bde27a6b4bb505592b42350752\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0e22739a3c06bddfda493f25aca23cf47d71c8bde27a6b4bb505592b42350752\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-26T11:17:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T11:17:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://80b44caec3547caebca126742ca337ad1851edc3e9474127528a9e5184b5ec23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://80b44caec3547caebca126742ca337ad1851edc3e9474127528a9e5184b5ec23\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T11:17:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T11:17:32Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://83f7cf77acd26e7af095c4a5432efde85eef6dd5e434141cb5d6abd12770968e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://83f7cf77acd26e7af095c4a5432efde85eef6dd5e434141cb5d6abd12770968e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T11:17:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T11:17:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T11:17:30Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:18:40Z is after 2025-08-24T17:21:41Z" Jan 26 11:18:40 crc kubenswrapper[4867]: I0126 11:18:40.111730 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-g6cth" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"115cad9f-057f-4e63-b408-8fa7a358a191\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ff5997c5ecb7b6a4257e45de25c7b8f3cbe39cc5efe49cf2e2baff5447a947b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c4
2745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4wqzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a97a794991491a7b7ac8c0d20d0fa2f4f36c8ba882962b5c203933431d324105\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4wqzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T11:17:52Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-g6cth\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:18:40Z is after 2025-08-24T17:21:41Z" Jan 26 11:18:40 crc kubenswrapper[4867]: I0126 11:18:40.127579 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:48Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:18:40Z is after 2025-08-24T17:21:41Z" Jan 26 11:18:40 crc kubenswrapper[4867]: I0126 11:18:40.139938 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:48Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:18:40Z is after 2025-08-24T17:21:41Z" Jan 26 11:18:40 crc kubenswrapper[4867]: I0126 11:18:40.154823 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-wmdmh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6ad862fa-4af9-49f7-a629-ebf54a83ca45\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://726513fd6aefd76b58a4c8cf08fa4588aadcdb02e8dbe3bb4f23004f11eb29dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6hvh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T11:17:51Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-wmdmh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:18:40Z is after 2025-08-24T17:21:41Z" Jan 26 11:18:40 crc kubenswrapper[4867]: I0126 11:18:40.168028 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:18:40 crc kubenswrapper[4867]: I0126 11:18:40.168076 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:18:40 crc kubenswrapper[4867]: I0126 11:18:40.168086 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:18:40 crc kubenswrapper[4867]: I0126 11:18:40.168106 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:18:40 crc kubenswrapper[4867]: I0126 11:18:40.168118 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:18:40Z","lastTransitionTime":"2026-01-26T11:18:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:18:40 crc kubenswrapper[4867]: I0126 11:18:40.178551 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-p8ngn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4a3be637-cf04-4c55-bf72-67fdad83cc44\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:52Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:52Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c0a7bea27cb1d32235548b7f7e457dbecaf0ac88bdd9866302c758680990d4af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cn6f7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9025121ad416b7b6ff2f2eb7ecb3961eccbbae7b5ce4b394161ad99c5901199\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cn6f7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://adb68b135bd42f78556598af17aa21dc7daff1246e821ca51cb683e46974a85a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cn6f7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://30fe31f1d314dace6e6ee0bc31b2127ce6e6db198975c78505b0f6f07aa6f30c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:55Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cn6f7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://99df8b0b5fa7178489716f40f806caff969c5045ab573ae343b222ed80bf0799\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cn6f7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://192f11212d63cdabfda1649589946ca65e5c4a38805c85b94e4277f2e8c7bf57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cn6f7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://22b7f3763005c5c5c24358d0a42b53a287c54548e8ba4785affb4c90b9c000a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://22b7f3763005c5c5c24358d0a42b53a287c54548e8ba4785affb4c90b9c000a2\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T11:18:31Z\\\",\\\"message\\\":\\\"ersions/factory.go:117\\\\nI0126 11:18:31.446431 6551 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0126 11:18:31.446674 6551 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0126 
11:18:31.446691 6551 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0126 11:18:31.446703 6551 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0126 11:18:31.446732 6551 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0126 11:18:31.446742 6551 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0126 11:18:31.446764 6551 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0126 11:18:31.446764 6551 handler.go:208] Removed *v1.Node event handler 7\\\\nI0126 11:18:31.446772 6551 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0126 11:18:31.446779 6551 handler.go:208] Removed *v1.Node event handler 2\\\\nI0126 11:18:31.446791 6551 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0126 11:18:31.447312 6551 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0126 11:18:31.447355 6551 factory.go:656] Stopping watch factory\\\\nI0126 11:18:31.447369 6551 ovnkube.go:599] Stopped ovnkube\\\\nI0126 1\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T11:18:30Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-p8ngn_openshift-ovn-kubernetes(4a3be637-cf04-4c55-bf72-67fdad83cc44)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cn6f7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b58f62893fb0fc0bd355ef42fb5f15830912cfd3a03b5266cb3c3171dbb8bcb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cn6f7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://388ea9d9185ed0fed9cbb0da08ab9aa6fcf1d81192fe2646cd51b54245c4e34b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://388ea9d9185ed0fed9
cbb0da08ab9aa6fcf1d81192fe2646cd51b54245c4e34b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T11:17:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T11:17:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cn6f7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T11:17:52Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-p8ngn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:18:40Z is after 2025-08-24T17:21:41Z" Jan 26 11:18:40 crc kubenswrapper[4867]: I0126 11:18:40.194990 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-nbvlt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf285485-1027-4bdc-bdfa-934ef32e7f5e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:18:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:18:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:18:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:18:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://764c348147bb67a611bc5252c49dfe8f586e6a1a6d6a9e9c6674aabcc3028804\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:18:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kzhnr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e1bb2e7344b3822e63a68d366f4821de6e13
1a4cca163ad67cf44e2b83f9ccd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:18:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kzhnr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T11:18:05Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-nbvlt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:18:40Z is after 2025-08-24T17:21:41Z" Jan 26 11:18:40 crc kubenswrapper[4867]: I0126 11:18:40.206275 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a4b0f2b9-fac8-442e-89a1-43ebff8d4268\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:18:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:18:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2140f6b328ef3b937ef0009c1ce35265a18b51c6efb7e8785870affecd68dace\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://21a6c8852b9648bd5bee43aee9c3fae16363aeaf1ad05dcaa41a04775784b108\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ef4b66f160065f550844a298bf29dcd3f12879c1312554968eba3c1b2268303\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0afa38d4a8ffa664649d154240a8d74ee09bc074127e5edea85ec1de553723fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://0afa38d4a8ffa664649d154240a8d74ee09bc074127e5edea85ec1de553723fe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T11:17:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T11:17:31Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T11:17:30Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:18:40Z is after 2025-08-24T17:21:41Z" Jan 26 11:18:40 crc kubenswrapper[4867]: I0126 11:18:40.221296 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e36e94ce-bdbb-4b65-b38a-d591d99ec132\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:18:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:18:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://60d611d38d0a8a4fafcb8c6a4598878531015a3c9a31bee179693c997be0f5ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e7960050fb97fd034b58d207bdcc7aa794be098102ebc9b50f3b011c2079a292\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8e9df6d5bbbaa36f6ffe9e36239da17b2a7d5d85edebc9f108ede9bf7b7c0a2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://08f8529b91df2d7dcbf1fac6018657124b3922dfd4ad68bd7cf315594d6e7f81\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4d1617eeb2f19df288b984f761116ae902263d7ccf8406998d0e323fb7d52390\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-26T11:17:50Z\\\"
,\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0126 11:17:44.107498 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0126 11:17:44.109064 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1825304579/tls.crt::/tmp/serving-cert-1825304579/tls.key\\\\\\\"\\\\nI0126 11:17:50.303460 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0126 11:17:50.308764 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0126 11:17:50.308805 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0126 11:17:50.308906 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0126 11:17:50.308924 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0126 11:17:50.322907 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0126 11:17:50.322946 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 11:17:50.322953 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 11:17:50.322957 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0126 11:17:50.322960 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0126 11:17:50.322963 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0126 11:17:50.322966 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0126 11:17:50.323261 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all 
endpoints registered and discovery information is complete\\\\nF0126 11:17:50.330430 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T11:17:33Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d41fab03911c1d3b84f4c34da0d1c8e82c8f2691450b5f9663e31ce329bf04b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:33Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b69d7e45a6059b1e01f80cc4efb536069c91cae05c8912d18fd48f25c6ebf51e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b69d7e45a6059b1e01f80cc4efb536069
c91cae05c8912d18fd48f25c6ebf51e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T11:17:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T11:17:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T11:17:30Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:18:40Z is after 2025-08-24T17:21:41Z" Jan 26 11:18:40 crc kubenswrapper[4867]: I0126 11:18:40.234915 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0d7b1047-65d1-4695-b5a5-95ea6a8c3b54\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://237a29b411629c3650250f62fdec7d2c5412763d6cabb2ee0fe8e5e19e320e15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ef0db1149c70e41b7f44d00d43f883d14fcf635d6f34462dd37c8a550a15a4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://45d3dc673a8d20531ae5077a1196a368cac64f6afd15c4eb8f3910ee9bf51317\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc2ddda5781094e59bb54ddf724fa4f948a047b17b01f0185ef651d90a66f36f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-26T11:17:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T11:17:30Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:18:40Z is after 2025-08-24T17:21:41Z" Jan 26 11:18:40 crc kubenswrapper[4867]: I0126 11:18:40.249886 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c572224f36b5a0fc8a86b3e61aa97b96f5bd189d245b31804a11aaa75d04774\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-26T11:18:40Z is after 2025-08-24T17:21:41Z" Jan 26 11:18:40 crc kubenswrapper[4867]: I0126 11:18:40.270667 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://613d61cfb62d18a99623d3e4f89763c04f15484b764f8bb0c7a5e4457a3505d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://fa8d05ed772e4273e8962df2230bc8d45db5cfed967165716438d58c54b9e24b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:18:40Z is after 2025-08-24T17:21:41Z" Jan 26 11:18:40 crc kubenswrapper[4867]: I0126 11:18:40.270974 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:18:40 crc kubenswrapper[4867]: I0126 11:18:40.271013 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:18:40 crc kubenswrapper[4867]: I0126 11:18:40.271068 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:18:40 crc kubenswrapper[4867]: I0126 11:18:40.271091 4867 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:18:40 crc kubenswrapper[4867]: I0126 11:18:40.271103 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:18:40Z","lastTransitionTime":"2026-01-26T11:18:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 11:18:40 crc kubenswrapper[4867]: I0126 11:18:40.285320 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-hn8xr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dc37e5d1-ba44-4a54-ac36-ab7cdef17212\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:18:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:18:40Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://519d5416896aca5923540078a8bd13f39a190ad4acda3d0bfb3375a5dbfe6b80\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://519d5416896aca5923540078a8bd13f39a190ad4acda3d0bfb3375a5dbfe6b80\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T11:18:39Z\\\",\\\"message\\\":\\\"2026-01-26T11:17:54+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_26c1a4ca-9538-4b99-995f-bb0e07bf0d9b\\\\n2026-01-26T11:17:54+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_26c1a4ca-9538-4b99-995f-bb0e07bf0d9b to /host/opt/cni/bin/\\\\n2026-01-26T11:17:54Z [verbose] multus-daemon started\\\\n2026-01-26T11:17:54Z [verbose] Readiness Indicator file check\\\\n2026-01-26T11:18:39Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T11:17:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xrjnj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\
\":\\\"2026-01-26T11:17:52Z\\\"}}\" for pod \"openshift-multus\"/\"multus-hn8xr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:18:40Z is after 2025-08-24T17:21:41Z" Jan 26 11:18:40 crc kubenswrapper[4867]: I0126 11:18:40.299721 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:48Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:18:40Z is after 2025-08-24T17:21:41Z" Jan 26 11:18:40 crc kubenswrapper[4867]: I0126 11:18:40.313574 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:54Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:54Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8b0139861b48347f9916ca780499486eda856ef7c08cfe6c57201daf085bf61\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-26T11:18:40Z is after 2025-08-24T17:21:41Z" Jan 26 11:18:40 crc kubenswrapper[4867]: I0126 11:18:40.327963 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-9fjlf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d0cb57c7-fd32-41c2-b873-a3f017b9f1b1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:18:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:18:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:18:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c7012fb0651d46334c26887a02a5c44a8fc67c2ad3539e5321e16b57071b9a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:18:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-a
ccess-rhrdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://49541a51fced16e579b4cd10875fe7e660d7674508aba3ea44011642241e6556\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://49541a51fced16e579b4cd10875fe7e660d7674508aba3ea44011642241e6556\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T11:17:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T11:17:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhrdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://de0898b43ff7857dd40569f0552c8802c9bb5ecdf3cbd23f552dfb402a02044c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"
started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://de0898b43ff7857dd40569f0552c8802c9bb5ecdf3cbd23f552dfb402a02044c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T11:17:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T11:17:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhrdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f299de50c4b1a26c4ed82f6f7b17c063e00b8a9de478fa0adafb4abb50195c5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f299de50c4b1a26c4ed82f6f7b17c063e00b8a9de478fa0adafb4abb50195c5f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T11:17:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T11:17:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\
\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhrdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5bf54cdee66861e0419af5e4cabd6e8410b771df717f2e958f5eda8d14c47862\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5bf54cdee66861e0419af5e4cabd6e8410b771df717f2e958f5eda8d14c47862\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T11:17:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T11:17:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhrdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d802b7e0dce47eace09ec80bf6299fbe57b588891ebab7f34de835f6a7314ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabout
s-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3d802b7e0dce47eace09ec80bf6299fbe57b588891ebab7f34de835f6a7314ac\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T11:17:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T11:17:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhrdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e77fa1081a2d8d26fe07f92196b85dd5b8eb28583ce5489e2f5cbcfc6fff8780\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e77fa1081a2d8d26fe07f92196b85dd5b8eb28583ce5489e2f5cbcfc6fff8780\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T11:18:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T11:18:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhrdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T11:17:52Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-9fjlf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:18:40Z is after 2025-08-24T17:21:41Z" Jan 26 11:18:40 crc kubenswrapper[4867]: I0126 11:18:40.337684 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-nxkwj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"53ae46dc-30ef-4dfd-b80e-bacd7542634f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a6b647bab472836bbf6aebd01d20d186c5a3fb95f20cc9f44ec837d93c7df617\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"image
ID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b5h2x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T11:17:54Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-nxkwj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:18:40Z is after 2025-08-24T17:21:41Z" Jan 26 11:18:40 crc kubenswrapper[4867]: I0126 11:18:40.346495 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-nmdmx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed024510-edc6-4306-b54b-63facba64419\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:18:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:18:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:18:06Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:18:06Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lvcgj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lvcgj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T11:18:06Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-nmdmx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:18:40Z is after 2025-08-24T17:21:41Z" Jan 26 11:18:40 crc 
kubenswrapper[4867]: I0126 11:18:40.373971 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:18:40 crc kubenswrapper[4867]: I0126 11:18:40.374032 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:18:40 crc kubenswrapper[4867]: I0126 11:18:40.374047 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:18:40 crc kubenswrapper[4867]: I0126 11:18:40.374066 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:18:40 crc kubenswrapper[4867]: I0126 11:18:40.374081 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:18:40Z","lastTransitionTime":"2026-01-26T11:18:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:18:40 crc kubenswrapper[4867]: I0126 11:18:40.476571 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:18:40 crc kubenswrapper[4867]: I0126 11:18:40.476636 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:18:40 crc kubenswrapper[4867]: I0126 11:18:40.476648 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:18:40 crc kubenswrapper[4867]: I0126 11:18:40.476667 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:18:40 crc kubenswrapper[4867]: I0126 11:18:40.476676 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:18:40Z","lastTransitionTime":"2026-01-26T11:18:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 11:18:40 crc kubenswrapper[4867]: I0126 11:18:40.560575 4867 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-09 13:34:36.611163326 +0000 UTC Jan 26 11:18:40 crc kubenswrapper[4867]: I0126 11:18:40.562975 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 11:18:40 crc kubenswrapper[4867]: I0126 11:18:40.563067 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 11:18:40 crc kubenswrapper[4867]: I0126 11:18:40.563185 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 11:18:40 crc kubenswrapper[4867]: E0126 11:18:40.563529 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 11:18:40 crc kubenswrapper[4867]: I0126 11:18:40.563622 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-nmdmx" Jan 26 11:18:40 crc kubenswrapper[4867]: E0126 11:18:40.563834 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 11:18:40 crc kubenswrapper[4867]: E0126 11:18:40.563898 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-nmdmx" podUID="ed024510-edc6-4306-b54b-63facba64419" Jan 26 11:18:40 crc kubenswrapper[4867]: E0126 11:18:40.563977 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 11:18:40 crc kubenswrapper[4867]: I0126 11:18:40.580150 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-g6cth" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"115cad9f-057f-4e63-b408-8fa7a358a191\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ff5997c5ecb7b6a4257e45de25c7b8f3cbe39cc5efe49cf2e2baff5447a947b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08
aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4wqzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a97a794991491a7b7ac8c0d20d0fa2f4f36c8ba882962b5c203933431d324105\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4wqzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T11:17:52Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-g6cth\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:18:40Z is after 2025-08-24T17:21:41Z" Jan 26 11:18:40 crc kubenswrapper[4867]: I0126 11:18:40.580500 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:18:40 crc kubenswrapper[4867]: I0126 11:18:40.580556 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:18:40 crc kubenswrapper[4867]: I0126 11:18:40.580568 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:18:40 crc kubenswrapper[4867]: I0126 11:18:40.580584 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:18:40 crc kubenswrapper[4867]: I0126 11:18:40.580595 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:18:40Z","lastTransitionTime":"2026-01-26T11:18:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:18:40 crc kubenswrapper[4867]: I0126 11:18:40.602209 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4fbb2033-c5c9-48d1-971f-252b8982e64c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b260f2517962ffaecebf86c2b72c91486b825ca898f907378e8f8cea16d1db92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\
\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6843bd212ce4f17d2ae4d53441d56f7be28f8f49c8994458d611159101e193a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2aba77c7c6a2f5450fa60356c3eccf816726f0074d65a1d5805c4278d31157c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e941db67a8021f5eb4eb6e395c2369d052a4cde25d67eb1885768a064185d8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-
v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc9e90f1b0e6c09e4595a7e450325550012dc52c6a6fc4a95d14b37c0f939ce7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e22739a3c06bddfda493f25aca23cf47d71c8bde27a6b4bb505592b42350752\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":
true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0e22739a3c06bddfda493f25aca23cf47d71c8bde27a6b4bb505592b42350752\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T11:17:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T11:17:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://80b44caec3547caebca126742ca337ad1851edc3e9474127528a9e5184b5ec23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://80b44caec3547caebca126742ca337ad1851edc3e9474127528a9e5184b5ec23\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T11:17:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T11:17:32Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://83f7cf77acd26e7af095c4a5432efde85eef6dd5e434141cb5d6abd12770968e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://83f7cf77acd26e7af095c4a5432efde85eef6dd5e434141cb5d6abd12770968e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T11:17:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2
026-01-26T11:17:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T11:17:30Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:18:40Z is after 2025-08-24T17:21:41Z" Jan 26 11:18:40 crc kubenswrapper[4867]: I0126 11:18:40.614417 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:48Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:18:40Z is after 2025-08-24T17:21:41Z" Jan 26 11:18:40 crc kubenswrapper[4867]: I0126 11:18:40.624740 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-wmdmh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6ad862fa-4af9-49f7-a629-ebf54a83ca45\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://726513fd6aefd76b58a4c8cf08fa4588aadcdb02e8dbe3bb4f23004f11eb29dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6hvh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T11:17:51Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-wmdmh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:18:40Z is after 2025-08-24T17:21:41Z" Jan 26 11:18:40 crc kubenswrapper[4867]: I0126 11:18:40.646345 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-p8ngn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4a3be637-cf04-4c55-bf72-67fdad83cc44\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:52Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:52Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c0a7bea27cb1d32235548b7f7e457dbecaf0ac88bdd9866302c758680990d4af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cn6f7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9025121ad416b7b6ff2f2eb7ecb3961eccbbae7b5ce4b394161ad99c5901199\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cn6f7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://adb68b135bd42f78556598af17aa21dc7daff1246e821ca51cb683e46974a85a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cn6f7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://30fe31f1d314dace6e6ee0bc31b2127ce6e6db198975c78505b0f6f07aa6f30c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:55Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cn6f7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://99df8b0b5fa7178489716f40f806caff969c5045ab573ae343b222ed80bf0799\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cn6f7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://192f11212d63cdabfda1649589946ca65e5c4a38805c85b94e4277f2e8c7bf57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cn6f7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://22b7f3763005c5c5c24358d0a42b53a287c54548e8ba4785affb4c90b9c000a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://22b7f3763005c5c5c24358d0a42b53a287c54548e8ba4785affb4c90b9c000a2\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T11:18:31Z\\\",\\\"message\\\":\\\"ersions/factory.go:117\\\\nI0126 11:18:31.446431 6551 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0126 11:18:31.446674 6551 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0126 
11:18:31.446691 6551 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0126 11:18:31.446703 6551 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0126 11:18:31.446732 6551 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0126 11:18:31.446742 6551 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0126 11:18:31.446764 6551 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0126 11:18:31.446764 6551 handler.go:208] Removed *v1.Node event handler 7\\\\nI0126 11:18:31.446772 6551 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0126 11:18:31.446779 6551 handler.go:208] Removed *v1.Node event handler 2\\\\nI0126 11:18:31.446791 6551 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0126 11:18:31.447312 6551 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0126 11:18:31.447355 6551 factory.go:656] Stopping watch factory\\\\nI0126 11:18:31.447369 6551 ovnkube.go:599] Stopped ovnkube\\\\nI0126 1\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T11:18:30Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-p8ngn_openshift-ovn-kubernetes(4a3be637-cf04-4c55-bf72-67fdad83cc44)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cn6f7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b58f62893fb0fc0bd355ef42fb5f15830912cfd3a03b5266cb3c3171dbb8bcb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cn6f7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://388ea9d9185ed0fed9cbb0da08ab9aa6fcf1d81192fe2646cd51b54245c4e34b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://388ea9d9185ed0fed9
cbb0da08ab9aa6fcf1d81192fe2646cd51b54245c4e34b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T11:17:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T11:17:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cn6f7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T11:17:52Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-p8ngn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:18:40Z is after 2025-08-24T17:21:41Z" Jan 26 11:18:40 crc kubenswrapper[4867]: I0126 11:18:40.659866 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:48Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:18:40Z is after 2025-08-24T17:21:41Z" Jan 26 11:18:40 crc kubenswrapper[4867]: I0126 11:18:40.673102 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e36e94ce-bdbb-4b65-b38a-d591d99ec132\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:18:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:18:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://60d611d38d0a8a4fafcb8c6a4598878531015a3c9a31bee179693c997be0f5ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e7960050fb97fd034b58d207bdcc7aa794be098102ebc9b50f3b011c2079a292\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8e9df6d5bbbaa36f6ffe9e36239da17b2a7d5d85edebc9f108ede9bf7b7c0a2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://08f8529b91df2d7dcbf1fac6018657124b3922dfd4ad68bd7cf315594d6e7f81\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4d1617eeb2f19df288b984f761116ae902263d7ccf8406998d0e323fb7d52390\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-26T11:17:50Z\\\"
,\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0126 11:17:44.107498 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0126 11:17:44.109064 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1825304579/tls.crt::/tmp/serving-cert-1825304579/tls.key\\\\\\\"\\\\nI0126 11:17:50.303460 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0126 11:17:50.308764 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0126 11:17:50.308805 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0126 11:17:50.308906 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0126 11:17:50.308924 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0126 11:17:50.322907 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0126 11:17:50.322946 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 11:17:50.322953 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 11:17:50.322957 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0126 11:17:50.322960 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0126 11:17:50.322963 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0126 11:17:50.322966 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0126 11:17:50.323261 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all 
endpoints registered and discovery information is complete\\\\nF0126 11:17:50.330430 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T11:17:33Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d41fab03911c1d3b84f4c34da0d1c8e82c8f2691450b5f9663e31ce329bf04b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:33Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b69d7e45a6059b1e01f80cc4efb536069c91cae05c8912d18fd48f25c6ebf51e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b69d7e45a6059b1e01f80cc4efb536069
c91cae05c8912d18fd48f25c6ebf51e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T11:17:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T11:17:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T11:17:30Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:18:40Z is after 2025-08-24T17:21:41Z" Jan 26 11:18:40 crc kubenswrapper[4867]: I0126 11:18:40.683888 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:18:40 crc kubenswrapper[4867]: I0126 11:18:40.683990 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:18:40 crc kubenswrapper[4867]: I0126 11:18:40.684011 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:18:40 crc kubenswrapper[4867]: I0126 11:18:40.684037 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:18:40 crc kubenswrapper[4867]: I0126 11:18:40.684055 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:18:40Z","lastTransitionTime":"2026-01-26T11:18:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:18:40 crc kubenswrapper[4867]: I0126 11:18:40.686306 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0d7b1047-65d1-4695-b5a5-95ea6a8c3b54\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://237a29b411629c3650250f62fdec7d2c5412763d6cabb2ee0fe8e5e19e320e15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ef0db1149c
70e41b7f44d00d43f883d14fcf635d6f34462dd37c8a550a15a4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://45d3dc673a8d20531ae5077a1196a368cac64f6afd15c4eb8f3910ee9bf51317\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc2ddda5781094e59bb54ddf724fa4f948a047b17b01f0185ef651d90a66f36f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:850
6ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T11:17:30Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:18:40Z is after 2025-08-24T17:21:41Z" Jan 26 11:18:40 crc kubenswrapper[4867]: I0126 11:18:40.699263 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c572224f36b5a0fc8a86b3e61aa97b96f5bd189d245b31804a11aaa75d04774\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-26T11:18:40Z is after 2025-08-24T17:21:41Z" Jan 26 11:18:40 crc kubenswrapper[4867]: I0126 11:18:40.710683 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://613d61cfb62d18a99623d3e4f89763c04f15484b764f8bb0c7a5e4457a3505d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://fa8d05ed772e4273e8962df2230bc8d45db5cfed967165716438d58c54b9e24b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:18:40Z is after 2025-08-24T17:21:41Z" Jan 26 11:18:40 crc kubenswrapper[4867]: I0126 11:18:40.720757 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-hn8xr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dc37e5d1-ba44-4a54-ac36-ab7cdef17212\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:18:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:18:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://519d5416896aca5923540078a8bd13f39a190ad4acda3d0bfb3375a5dbfe6b80\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://519d5416896aca5923540078a8bd13f39a190ad4acda3d0bfb3375a5dbfe6b80\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T11:18:39Z\\\",\\\"message\\\":\\\"2026-01-26T11:17:54+00:00 [cnibincopy] Successfully copied files in 
/usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_26c1a4ca-9538-4b99-995f-bb0e07bf0d9b\\\\n2026-01-26T11:17:54+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_26c1a4ca-9538-4b99-995f-bb0e07bf0d9b to /host/opt/cni/bin/\\\\n2026-01-26T11:17:54Z [verbose] multus-daemon started\\\\n2026-01-26T11:17:54Z [verbose] Readiness Indicator file check\\\\n2026-01-26T11:18:39Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T11:17:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/c
ni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xrjnj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T11:17:52Z\\\"}}\" for pod \"openshift-multus\"/\"multus-hn8xr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:18:40Z is after 2025-08-24T17:21:41Z" Jan 26 11:18:40 crc kubenswrapper[4867]: I0126 11:18:40.732906 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-nbvlt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf285485-1027-4bdc-bdfa-934ef32e7f5e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:18:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:18:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:18:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:18:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://764c348147bb67a611bc5252c49dfe8f586e6a1a6d6a9e9c6674aabcc3028804\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:18:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kzhnr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e1bb2e7344b3822e63a68d366f4821de6e13
1a4cca163ad67cf44e2b83f9ccd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:18:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kzhnr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T11:18:05Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-nbvlt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:18:40Z is after 2025-08-24T17:21:41Z" Jan 26 11:18:40 crc kubenswrapper[4867]: I0126 11:18:40.748385 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a4b0f2b9-fac8-442e-89a1-43ebff8d4268\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:18:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:18:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2140f6b328ef3b937ef0009c1ce35265a18b51c6efb7e8785870affecd68dace\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://21a6c8852b9648bd5bee43aee9c3fae16363aeaf1ad05dcaa41a04775784b108\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ef4b66f160065f550844a298bf29dcd3f12879c1312554968eba3c1b2268303\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0afa38d4a8ffa664649d154240a8d74ee09bc074127e5edea85ec1de553723fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://0afa38d4a8ffa664649d154240a8d74ee09bc074127e5edea85ec1de553723fe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T11:17:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T11:17:31Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T11:17:30Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:18:40Z is after 2025-08-24T17:21:41Z" Jan 26 11:18:40 crc kubenswrapper[4867]: I0126 11:18:40.762576 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:54Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:54Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8b0139861b48347f9916ca780499486eda856ef7c08cfe6c57201daf085bf61\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-26T11:18:40Z is after 2025-08-24T17:21:41Z" Jan 26 11:18:40 crc kubenswrapper[4867]: I0126 11:18:40.781743 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-9fjlf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d0cb57c7-fd32-41c2-b873-a3f017b9f1b1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:18:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:18:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:18:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c7012fb0651d46334c26887a02a5c44a8fc67c2ad3539e5321e16b57071b9a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:18:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-a
ccess-rhrdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://49541a51fced16e579b4cd10875fe7e660d7674508aba3ea44011642241e6556\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://49541a51fced16e579b4cd10875fe7e660d7674508aba3ea44011642241e6556\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T11:17:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T11:17:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhrdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://de0898b43ff7857dd40569f0552c8802c9bb5ecdf3cbd23f552dfb402a02044c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"
started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://de0898b43ff7857dd40569f0552c8802c9bb5ecdf3cbd23f552dfb402a02044c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T11:17:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T11:17:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhrdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f299de50c4b1a26c4ed82f6f7b17c063e00b8a9de478fa0adafb4abb50195c5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f299de50c4b1a26c4ed82f6f7b17c063e00b8a9de478fa0adafb4abb50195c5f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T11:17:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T11:17:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\
\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhrdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5bf54cdee66861e0419af5e4cabd6e8410b771df717f2e958f5eda8d14c47862\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5bf54cdee66861e0419af5e4cabd6e8410b771df717f2e958f5eda8d14c47862\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T11:17:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T11:17:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhrdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d802b7e0dce47eace09ec80bf6299fbe57b588891ebab7f34de835f6a7314ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabout
s-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3d802b7e0dce47eace09ec80bf6299fbe57b588891ebab7f34de835f6a7314ac\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T11:17:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T11:17:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhrdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e77fa1081a2d8d26fe07f92196b85dd5b8eb28583ce5489e2f5cbcfc6fff8780\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e77fa1081a2d8d26fe07f92196b85dd5b8eb28583ce5489e2f5cbcfc6fff8780\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T11:18:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T11:18:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhrdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T11:17:52Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-9fjlf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:18:40Z is after 2025-08-24T17:21:41Z" Jan 26 11:18:40 crc kubenswrapper[4867]: I0126 11:18:40.787135 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:18:40 crc kubenswrapper[4867]: I0126 11:18:40.787182 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:18:40 crc kubenswrapper[4867]: I0126 11:18:40.787192 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:18:40 crc kubenswrapper[4867]: I0126 11:18:40.787212 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:18:40 crc kubenswrapper[4867]: I0126 11:18:40.787242 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:18:40Z","lastTransitionTime":"2026-01-26T11:18:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:18:40 crc kubenswrapper[4867]: I0126 11:18:40.794627 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-nxkwj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"53ae46dc-30ef-4dfd-b80e-bacd7542634f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a6b647bab472836bbf6aebd01d20d186c5a3fb95f20cc9f44ec837d93c7df617\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes
.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b5h2x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T11:17:54Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-nxkwj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:18:40Z is after 2025-08-24T17:21:41Z" Jan 26 11:18:40 crc kubenswrapper[4867]: I0126 11:18:40.805339 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-nmdmx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed024510-edc6-4306-b54b-63facba64419\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:18:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:18:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:18:06Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:18:06Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lvcgj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lvcgj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T11:18:06Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-nmdmx\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:18:40Z is after 2025-08-24T17:21:41Z" Jan 26 11:18:40 crc kubenswrapper[4867]: I0126 11:18:40.818771 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:48Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:18:40Z is after 2025-08-24T17:21:41Z" Jan 26 11:18:40 crc kubenswrapper[4867]: I0126 11:18:40.890717 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:18:40 crc kubenswrapper[4867]: I0126 11:18:40.890763 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:18:40 crc kubenswrapper[4867]: I0126 11:18:40.890779 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:18:40 crc kubenswrapper[4867]: I0126 11:18:40.890801 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:18:40 crc kubenswrapper[4867]: I0126 11:18:40.890818 4867 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:18:40Z","lastTransitionTime":"2026-01-26T11:18:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 11:18:40 crc kubenswrapper[4867]: I0126 11:18:40.993501 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:18:40 crc kubenswrapper[4867]: I0126 11:18:40.993551 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:18:40 crc kubenswrapper[4867]: I0126 11:18:40.993561 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:18:40 crc kubenswrapper[4867]: I0126 11:18:40.993587 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:18:40 crc kubenswrapper[4867]: I0126 11:18:40.993599 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:18:40Z","lastTransitionTime":"2026-01-26T11:18:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:18:41 crc kubenswrapper[4867]: I0126 11:18:41.073955 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-hn8xr_dc37e5d1-ba44-4a54-ac36-ab7cdef17212/kube-multus/0.log" Jan 26 11:18:41 crc kubenswrapper[4867]: I0126 11:18:41.074027 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-hn8xr" event={"ID":"dc37e5d1-ba44-4a54-ac36-ab7cdef17212","Type":"ContainerStarted","Data":"0560f844140132daa11aa58fcba689fc945d9e6bc67a4fc5598dfaf566749866"} Jan 26 11:18:41 crc kubenswrapper[4867]: I0126 11:18:41.089965 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:48Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:18:41Z is after 2025-08-24T17:21:41Z" Jan 26 11:18:41 crc kubenswrapper[4867]: I0126 11:18:41.095576 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:18:41 crc kubenswrapper[4867]: I0126 11:18:41.095603 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:18:41 crc kubenswrapper[4867]: I0126 11:18:41.095614 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:18:41 crc kubenswrapper[4867]: I0126 11:18:41.095630 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:18:41 crc kubenswrapper[4867]: I0126 11:18:41.095642 4867 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:18:41Z","lastTransitionTime":"2026-01-26T11:18:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 11:18:41 crc kubenswrapper[4867]: I0126 11:18:41.102291 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:54Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:54Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8b0139861b48347f9916ca780499486eda856ef7c08cfe6c57201daf085bf61\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\
"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:18:41Z is after 2025-08-24T17:21:41Z" Jan 26 11:18:41 crc kubenswrapper[4867]: I0126 11:18:41.117005 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-9fjlf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d0cb57c7-fd32-41c2-b873-a3f017b9f1b1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:18:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:18:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:18:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c7012fb0651d46334c26887a02a5c44a8fc67c2ad3539e5321e16b57071b9a6\\\",\\\"image\\\":\\\"quay.io/openshift-rele
ase-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:18:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhrdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://49541a51fced16e579b4cd10875fe7e660d7674508aba3ea44011642241e6556\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://49541a51fced16e579b4cd10875fe7e660d7674508aba3ea44011642241e6556\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T11:17:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T11:17:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\
\":\\\"kube-api-access-rhrdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://de0898b43ff7857dd40569f0552c8802c9bb5ecdf3cbd23f552dfb402a02044c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://de0898b43ff7857dd40569f0552c8802c9bb5ecdf3cbd23f552dfb402a02044c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T11:17:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T11:17:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhrdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f299de50c4b1a26c4ed82f6f7b17c063e00b8a9de478fa0adafb4abb50195c5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\
\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f299de50c4b1a26c4ed82f6f7b17c063e00b8a9de478fa0adafb4abb50195c5f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T11:17:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T11:17:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhrdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5bf54cdee66861e0419af5e4cabd6e8410b771df717f2e958f5eda8d14c47862\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5bf54cdee66861e0419af5e4cabd6e8410b771df717f2e958f5eda8d14c47862\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T11:17:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T11:17:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run
/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhrdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d802b7e0dce47eace09ec80bf6299fbe57b588891ebab7f34de835f6a7314ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3d802b7e0dce47eace09ec80bf6299fbe57b588891ebab7f34de835f6a7314ac\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T11:17:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T11:17:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhrdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e77fa1081a2d8d26fe07f92196b85dd5b8eb28583ce5489e2f5cbcfc6fff8780\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{
\\\"containerID\\\":\\\"cri-o://e77fa1081a2d8d26fe07f92196b85dd5b8eb28583ce5489e2f5cbcfc6fff8780\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T11:18:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T11:18:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhrdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T11:17:52Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-9fjlf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:18:41Z is after 2025-08-24T17:21:41Z" Jan 26 11:18:41 crc kubenswrapper[4867]: I0126 11:18:41.130016 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-nxkwj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"53ae46dc-30ef-4dfd-b80e-bacd7542634f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a6b647bab472836bbf6aebd01d20d186c5a3fb95f20cc9f44ec837d93c7df617\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b5h2x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T11:17:54Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-nxkwj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:18:41Z is after 2025-08-24T17:21:41Z" Jan 26 11:18:41 crc kubenswrapper[4867]: I0126 11:18:41.144905 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-nmdmx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed024510-edc6-4306-b54b-63facba64419\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:18:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:18:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:18:06Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:18:06Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lvcgj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lvcgj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T11:18:06Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-nmdmx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:18:41Z is after 2025-08-24T17:21:41Z" Jan 26 11:18:41 crc 
kubenswrapper[4867]: I0126 11:18:41.168480 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4fbb2033-c5c9-48d1-971f-252b8982e64c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b260f2517962ffaecebf86c2b72c91486b825ca898f907378e8f8cea16d1db92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}
]},{\\\"containerID\\\":\\\"cri-o://6843bd212ce4f17d2ae4d53441d56f7be28f8f49c8994458d611159101e193a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2aba77c7c6a2f5450fa60356c3eccf816726f0074d65a1d5805c4278d31157c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e941db67a8021f5eb4eb6e395c2369d052a4cde25d67eb1885768a064185d8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779
036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc9e90f1b0e6c09e4595a7e450325550012dc52c6a6fc4a95d14b37c0f939ce7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e22739a3c06bddfda493f25aca23cf47d71c8bde27a6b4bb505592b42350752\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"sta
te\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0e22739a3c06bddfda493f25aca23cf47d71c8bde27a6b4bb505592b42350752\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T11:17:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T11:17:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://80b44caec3547caebca126742ca337ad1851edc3e9474127528a9e5184b5ec23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://80b44caec3547caebca126742ca337ad1851edc3e9474127528a9e5184b5ec23\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T11:17:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T11:17:32Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://83f7cf77acd26e7af095c4a5432efde85eef6dd5e434141cb5d6abd12770968e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://83f7cf77acd26e7af095c4a5432efde85eef6dd5e434141cb5d6abd12770968e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T11:17:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T11:17:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T11:17:30Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:18:41Z is after 2025-08-24T17:21:41Z" Jan 26 11:18:41 crc kubenswrapper[4867]: I0126 11:18:41.179779 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-g6cth" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"115cad9f-057f-4e63-b408-8fa7a358a191\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ff5997c5ecb7b6a4257e45de25c7b8f3cbe39cc5efe49cf2e2baff5447a947b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4wqzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a97a794991491a7b7ac8c0d20d0fa2f4f36c8ba8
82962b5c203933431d324105\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4wqzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T11:17:52Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-g6cth\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:18:41Z is after 2025-08-24T17:21:41Z" Jan 26 11:18:41 crc kubenswrapper[4867]: I0126 11:18:41.194338 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:48Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:18:41Z is after 2025-08-24T17:21:41Z" Jan 26 11:18:41 crc kubenswrapper[4867]: I0126 11:18:41.198347 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:18:41 crc kubenswrapper[4867]: I0126 11:18:41.198412 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:18:41 crc kubenswrapper[4867]: I0126 11:18:41.198427 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:18:41 crc kubenswrapper[4867]: I0126 11:18:41.198460 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:18:41 crc kubenswrapper[4867]: I0126 11:18:41.198471 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:18:41Z","lastTransitionTime":"2026-01-26T11:18:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 11:18:41 crc kubenswrapper[4867]: I0126 11:18:41.207871 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:48Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:18:41Z is after 2025-08-24T17:21:41Z" Jan 26 11:18:41 crc kubenswrapper[4867]: I0126 11:18:41.218033 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-wmdmh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6ad862fa-4af9-49f7-a629-ebf54a83ca45\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://726513fd6aefd76b58a4c8cf08fa4588aadcdb02e8dbe3bb4f23004f11eb29dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6hvh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T11:17:51Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-wmdmh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:18:41Z is after 2025-08-24T17:21:41Z" Jan 26 11:18:41 crc kubenswrapper[4867]: I0126 11:18:41.239005 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-p8ngn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4a3be637-cf04-4c55-bf72-67fdad83cc44\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:52Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:52Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c0a7bea27cb1d32235548b7f7e457dbecaf0ac88bdd9866302c758680990d4af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cn6f7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9025121ad416b7b6ff2f2eb7ecb3961eccbbae7b5ce4b394161ad99c5901199\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cn6f7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://adb68b135bd42f78556598af17aa21dc7daff1246e821ca51cb683e46974a85a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cn6f7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://30fe31f1d314dace6e6ee0bc31b2127ce6e6db198975c78505b0f6f07aa6f30c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:55Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cn6f7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://99df8b0b5fa7178489716f40f806caff969c5045ab573ae343b222ed80bf0799\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cn6f7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://192f11212d63cdabfda1649589946ca65e5c4a38805c85b94e4277f2e8c7bf57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cn6f7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://22b7f3763005c5c5c24358d0a42b53a287c54548e8ba4785affb4c90b9c000a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://22b7f3763005c5c5c24358d0a42b53a287c54548e8ba4785affb4c90b9c000a2\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T11:18:31Z\\\",\\\"message\\\":\\\"ersions/factory.go:117\\\\nI0126 11:18:31.446431 6551 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0126 11:18:31.446674 6551 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0126 
11:18:31.446691 6551 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0126 11:18:31.446703 6551 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0126 11:18:31.446732 6551 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0126 11:18:31.446742 6551 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0126 11:18:31.446764 6551 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0126 11:18:31.446764 6551 handler.go:208] Removed *v1.Node event handler 7\\\\nI0126 11:18:31.446772 6551 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0126 11:18:31.446779 6551 handler.go:208] Removed *v1.Node event handler 2\\\\nI0126 11:18:31.446791 6551 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0126 11:18:31.447312 6551 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0126 11:18:31.447355 6551 factory.go:656] Stopping watch factory\\\\nI0126 11:18:31.447369 6551 ovnkube.go:599] Stopped ovnkube\\\\nI0126 1\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T11:18:30Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-p8ngn_openshift-ovn-kubernetes(4a3be637-cf04-4c55-bf72-67fdad83cc44)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cn6f7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b58f62893fb0fc0bd355ef42fb5f15830912cfd3a03b5266cb3c3171dbb8bcb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cn6f7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://388ea9d9185ed0fed9cbb0da08ab9aa6fcf1d81192fe2646cd51b54245c4e34b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://388ea9d9185ed0fed9
cbb0da08ab9aa6fcf1d81192fe2646cd51b54245c4e34b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T11:17:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T11:17:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cn6f7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T11:17:52Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-p8ngn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:18:41Z is after 2025-08-24T17:21:41Z" Jan 26 11:18:41 crc kubenswrapper[4867]: I0126 11:18:41.251752 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-hn8xr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dc37e5d1-ba44-4a54-ac36-ab7cdef17212\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:18:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:18:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0560f844140132daa11aa58fcba689fc945d9e6bc67a4fc5598dfaf566749866\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://519d5416896aca5923540078a8bd13f39a190ad4acda3d0bfb3375a5dbfe6b80\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T11:18:39Z\\\",\\\"message\\\":\\\"2026-01-26T11:17:54+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_26c1a4ca-9538-4b99-995f-bb0e07bf0d9b\\\\n2026-01-26T11:17:54+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_26c1a4ca-9538-4b99-995f-bb0e07bf0d9b to /host/opt/cni/bin/\\\\n2026-01-26T11:17:54Z [verbose] multus-daemon started\\\\n2026-01-26T11:17:54Z [verbose] 
Readiness Indicator file check\\\\n2026-01-26T11:18:39Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T11:17:53Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:18:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xrjnj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T11:17:52Z\\\"}}\" for pod \"openshift-multus\"/\"multus-hn8xr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:18:41Z is after 2025-08-24T17:21:41Z" Jan 26 11:18:41 crc kubenswrapper[4867]: I0126 11:18:41.263815 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-nbvlt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf285485-1027-4bdc-bdfa-934ef32e7f5e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:18:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:18:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:18:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:18:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://764c348147bb67a611bc5252c49dfe8f586e6a1a6d6a9e9c6674aabcc3028804\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:18:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kzhnr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e1bb2e7344b3822e63a68d366f4821de6e13
1a4cca163ad67cf44e2b83f9ccd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:18:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kzhnr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T11:18:05Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-nbvlt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:18:41Z is after 2025-08-24T17:21:41Z" Jan 26 11:18:41 crc kubenswrapper[4867]: I0126 11:18:41.274545 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a4b0f2b9-fac8-442e-89a1-43ebff8d4268\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:18:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:18:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2140f6b328ef3b937ef0009c1ce35265a18b51c6efb7e8785870affecd68dace\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://21a6c8852b9648bd5bee43aee9c3fae16363aeaf1ad05dcaa41a04775784b108\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ef4b66f160065f550844a298bf29dcd3f12879c1312554968eba3c1b2268303\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0afa38d4a8ffa664649d154240a8d74ee09bc074127e5edea85ec1de553723fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://0afa38d4a8ffa664649d154240a8d74ee09bc074127e5edea85ec1de553723fe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T11:17:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T11:17:31Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T11:17:30Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:18:41Z is after 2025-08-24T17:21:41Z" Jan 26 11:18:41 crc kubenswrapper[4867]: I0126 11:18:41.287929 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e36e94ce-bdbb-4b65-b38a-d591d99ec132\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:18:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:18:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://60d611d38d0a8a4fafcb8c6a4598878531015a3c9a31bee179693c997be0f5ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e7960050fb97fd034b58d207bdcc7aa794be098102ebc9b50f3b011c2079a292\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8e9df6d5bbbaa36f6ffe9e36239da17b2a7d5d85edebc9f108ede9bf7b7c0a2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://08f8529b91df2d7dcbf1fac6018657124b3922dfd4ad68bd7cf315594d6e7f81\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4d1617eeb2f19df288b984f761116ae902263d7ccf8406998d0e323fb7d52390\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-26T11:17:50Z\\\"
,\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0126 11:17:44.107498 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0126 11:17:44.109064 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1825304579/tls.crt::/tmp/serving-cert-1825304579/tls.key\\\\\\\"\\\\nI0126 11:17:50.303460 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0126 11:17:50.308764 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0126 11:17:50.308805 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0126 11:17:50.308906 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0126 11:17:50.308924 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0126 11:17:50.322907 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0126 11:17:50.322946 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 11:17:50.322953 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 11:17:50.322957 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0126 11:17:50.322960 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0126 11:17:50.322963 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0126 11:17:50.322966 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0126 11:17:50.323261 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all 
endpoints registered and discovery information is complete\\\\nF0126 11:17:50.330430 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T11:17:33Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d41fab03911c1d3b84f4c34da0d1c8e82c8f2691450b5f9663e31ce329bf04b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:33Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b69d7e45a6059b1e01f80cc4efb536069c91cae05c8912d18fd48f25c6ebf51e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b69d7e45a6059b1e01f80cc4efb536069
c91cae05c8912d18fd48f25c6ebf51e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T11:17:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T11:17:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T11:17:30Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:18:41Z is after 2025-08-24T17:21:41Z" Jan 26 11:18:41 crc kubenswrapper[4867]: I0126 11:18:41.300294 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0d7b1047-65d1-4695-b5a5-95ea6a8c3b54\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://237a29b411629c3650250f62fdec7d2c5412763d6cabb2ee0fe8e5e19e320e15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ef0db1149c70e41b7f44d00d43f883d14fcf635d6f34462dd37c8a550a15a4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://45d3dc673a8d20531ae5077a1196a368cac64f6afd15c4eb8f3910ee9bf51317\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc2ddda5781094e59bb54ddf724fa4f948a047b17b01f0185ef651d90a66f36f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-26T11:17:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T11:17:30Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:18:41Z is after 2025-08-24T17:21:41Z" Jan 26 11:18:41 crc kubenswrapper[4867]: I0126 11:18:41.300722 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:18:41 crc kubenswrapper[4867]: I0126 11:18:41.300822 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:18:41 crc kubenswrapper[4867]: I0126 11:18:41.300847 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:18:41 crc kubenswrapper[4867]: I0126 11:18:41.300867 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:18:41 crc kubenswrapper[4867]: I0126 11:18:41.300906 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:18:41Z","lastTransitionTime":"2026-01-26T11:18:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns 
error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 11:18:41 crc kubenswrapper[4867]: I0126 11:18:41.312994 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c572224f36b5a0fc8a86b3e61aa97b96f5bd189d245b31804a11aaa75d04774\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursive
ReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:18:41Z is after 2025-08-24T17:21:41Z" Jan 26 11:18:41 crc kubenswrapper[4867]: I0126 11:18:41.324105 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://613d61cfb62d18a99623d3e4f89763c04f15484b764f8bb0c7a5e4457a3505d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\
\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa8d05ed772e4273e8962df2230bc8d45db5cfed967165716438d58c54b9e24b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:18:41Z is after 2025-08-24T17:21:41Z" Jan 26 11:18:41 crc kubenswrapper[4867]: I0126 11:18:41.404194 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:18:41 crc kubenswrapper[4867]: I0126 11:18:41.404262 4867 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:18:41 crc kubenswrapper[4867]: I0126 11:18:41.404279 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:18:41 crc kubenswrapper[4867]: I0126 11:18:41.404299 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:18:41 crc kubenswrapper[4867]: I0126 11:18:41.404312 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:18:41Z","lastTransitionTime":"2026-01-26T11:18:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 11:18:41 crc kubenswrapper[4867]: I0126 11:18:41.507320 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:18:41 crc kubenswrapper[4867]: I0126 11:18:41.507386 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:18:41 crc kubenswrapper[4867]: I0126 11:18:41.507398 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:18:41 crc kubenswrapper[4867]: I0126 11:18:41.507425 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:18:41 crc kubenswrapper[4867]: I0126 11:18:41.507442 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:18:41Z","lastTransitionTime":"2026-01-26T11:18:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns 
error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 11:18:41 crc kubenswrapper[4867]: I0126 11:18:41.560939 4867 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-10 08:30:17.289881903 +0000 UTC Jan 26 11:18:41 crc kubenswrapper[4867]: I0126 11:18:41.609812 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:18:41 crc kubenswrapper[4867]: I0126 11:18:41.609905 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:18:41 crc kubenswrapper[4867]: I0126 11:18:41.609917 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:18:41 crc kubenswrapper[4867]: I0126 11:18:41.609935 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:18:41 crc kubenswrapper[4867]: I0126 11:18:41.609948 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:18:41Z","lastTransitionTime":"2026-01-26T11:18:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:18:41 crc kubenswrapper[4867]: I0126 11:18:41.712553 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:18:41 crc kubenswrapper[4867]: I0126 11:18:41.712617 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:18:41 crc kubenswrapper[4867]: I0126 11:18:41.712628 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:18:41 crc kubenswrapper[4867]: I0126 11:18:41.712651 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:18:41 crc kubenswrapper[4867]: I0126 11:18:41.712663 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:18:41Z","lastTransitionTime":"2026-01-26T11:18:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:18:41 crc kubenswrapper[4867]: I0126 11:18:41.815375 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:18:41 crc kubenswrapper[4867]: I0126 11:18:41.815442 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:18:41 crc kubenswrapper[4867]: I0126 11:18:41.815461 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:18:41 crc kubenswrapper[4867]: I0126 11:18:41.815487 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:18:41 crc kubenswrapper[4867]: I0126 11:18:41.815504 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:18:41Z","lastTransitionTime":"2026-01-26T11:18:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:18:41 crc kubenswrapper[4867]: I0126 11:18:41.917667 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:18:41 crc kubenswrapper[4867]: I0126 11:18:41.917729 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:18:41 crc kubenswrapper[4867]: I0126 11:18:41.917745 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:18:41 crc kubenswrapper[4867]: I0126 11:18:41.917770 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:18:41 crc kubenswrapper[4867]: I0126 11:18:41.917788 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:18:41Z","lastTransitionTime":"2026-01-26T11:18:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:18:42 crc kubenswrapper[4867]: I0126 11:18:42.020958 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:18:42 crc kubenswrapper[4867]: I0126 11:18:42.021042 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:18:42 crc kubenswrapper[4867]: I0126 11:18:42.021065 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:18:42 crc kubenswrapper[4867]: I0126 11:18:42.021097 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:18:42 crc kubenswrapper[4867]: I0126 11:18:42.021119 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:18:42Z","lastTransitionTime":"2026-01-26T11:18:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:18:42 crc kubenswrapper[4867]: I0126 11:18:42.123974 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:18:42 crc kubenswrapper[4867]: I0126 11:18:42.124039 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:18:42 crc kubenswrapper[4867]: I0126 11:18:42.124050 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:18:42 crc kubenswrapper[4867]: I0126 11:18:42.124067 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:18:42 crc kubenswrapper[4867]: I0126 11:18:42.124077 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:18:42Z","lastTransitionTime":"2026-01-26T11:18:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:18:42 crc kubenswrapper[4867]: I0126 11:18:42.226558 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:18:42 crc kubenswrapper[4867]: I0126 11:18:42.226619 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:18:42 crc kubenswrapper[4867]: I0126 11:18:42.226629 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:18:42 crc kubenswrapper[4867]: I0126 11:18:42.226647 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:18:42 crc kubenswrapper[4867]: I0126 11:18:42.226660 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:18:42Z","lastTransitionTime":"2026-01-26T11:18:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:18:42 crc kubenswrapper[4867]: I0126 11:18:42.329605 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:18:42 crc kubenswrapper[4867]: I0126 11:18:42.329657 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:18:42 crc kubenswrapper[4867]: I0126 11:18:42.329676 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:18:42 crc kubenswrapper[4867]: I0126 11:18:42.329700 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:18:42 crc kubenswrapper[4867]: I0126 11:18:42.329715 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:18:42Z","lastTransitionTime":"2026-01-26T11:18:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:18:42 crc kubenswrapper[4867]: I0126 11:18:42.432281 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:18:42 crc kubenswrapper[4867]: I0126 11:18:42.432335 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:18:42 crc kubenswrapper[4867]: I0126 11:18:42.432346 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:18:42 crc kubenswrapper[4867]: I0126 11:18:42.432367 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:18:42 crc kubenswrapper[4867]: I0126 11:18:42.432383 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:18:42Z","lastTransitionTime":"2026-01-26T11:18:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:18:42 crc kubenswrapper[4867]: I0126 11:18:42.534620 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:18:42 crc kubenswrapper[4867]: I0126 11:18:42.534673 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:18:42 crc kubenswrapper[4867]: I0126 11:18:42.534689 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:18:42 crc kubenswrapper[4867]: I0126 11:18:42.534712 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:18:42 crc kubenswrapper[4867]: I0126 11:18:42.534727 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:18:42Z","lastTransitionTime":"2026-01-26T11:18:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 11:18:42 crc kubenswrapper[4867]: I0126 11:18:42.561747 4867 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-16 14:04:24.724492783 +0000 UTC Jan 26 11:18:42 crc kubenswrapper[4867]: I0126 11:18:42.563121 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 11:18:42 crc kubenswrapper[4867]: I0126 11:18:42.563185 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 11:18:42 crc kubenswrapper[4867]: I0126 11:18:42.563281 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-nmdmx" Jan 26 11:18:42 crc kubenswrapper[4867]: I0126 11:18:42.563353 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 11:18:42 crc kubenswrapper[4867]: E0126 11:18:42.563316 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 11:18:42 crc kubenswrapper[4867]: E0126 11:18:42.563552 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-nmdmx" podUID="ed024510-edc6-4306-b54b-63facba64419" Jan 26 11:18:42 crc kubenswrapper[4867]: E0126 11:18:42.563629 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 11:18:42 crc kubenswrapper[4867]: E0126 11:18:42.563751 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 11:18:42 crc kubenswrapper[4867]: I0126 11:18:42.637698 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:18:42 crc kubenswrapper[4867]: I0126 11:18:42.637745 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:18:42 crc kubenswrapper[4867]: I0126 11:18:42.637757 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:18:42 crc kubenswrapper[4867]: I0126 11:18:42.637777 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:18:42 crc kubenswrapper[4867]: I0126 11:18:42.637792 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:18:42Z","lastTransitionTime":"2026-01-26T11:18:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:18:42 crc kubenswrapper[4867]: I0126 11:18:42.741106 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:18:42 crc kubenswrapper[4867]: I0126 11:18:42.741156 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:18:42 crc kubenswrapper[4867]: I0126 11:18:42.741166 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:18:42 crc kubenswrapper[4867]: I0126 11:18:42.741184 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:18:42 crc kubenswrapper[4867]: I0126 11:18:42.741195 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:18:42Z","lastTransitionTime":"2026-01-26T11:18:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:18:42 crc kubenswrapper[4867]: I0126 11:18:42.844360 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:18:42 crc kubenswrapper[4867]: I0126 11:18:42.844416 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:18:42 crc kubenswrapper[4867]: I0126 11:18:42.844426 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:18:42 crc kubenswrapper[4867]: I0126 11:18:42.844447 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:18:42 crc kubenswrapper[4867]: I0126 11:18:42.844465 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:18:42Z","lastTransitionTime":"2026-01-26T11:18:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:18:42 crc kubenswrapper[4867]: I0126 11:18:42.947250 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:18:42 crc kubenswrapper[4867]: I0126 11:18:42.947295 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:18:42 crc kubenswrapper[4867]: I0126 11:18:42.947305 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:18:42 crc kubenswrapper[4867]: I0126 11:18:42.947323 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:18:42 crc kubenswrapper[4867]: I0126 11:18:42.947336 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:18:42Z","lastTransitionTime":"2026-01-26T11:18:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:18:43 crc kubenswrapper[4867]: I0126 11:18:43.050293 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:18:43 crc kubenswrapper[4867]: I0126 11:18:43.050353 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:18:43 crc kubenswrapper[4867]: I0126 11:18:43.050367 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:18:43 crc kubenswrapper[4867]: I0126 11:18:43.050389 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:18:43 crc kubenswrapper[4867]: I0126 11:18:43.050401 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:18:43Z","lastTransitionTime":"2026-01-26T11:18:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:18:43 crc kubenswrapper[4867]: I0126 11:18:43.153101 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:18:43 crc kubenswrapper[4867]: I0126 11:18:43.153150 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:18:43 crc kubenswrapper[4867]: I0126 11:18:43.153165 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:18:43 crc kubenswrapper[4867]: I0126 11:18:43.153188 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:18:43 crc kubenswrapper[4867]: I0126 11:18:43.153206 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:18:43Z","lastTransitionTime":"2026-01-26T11:18:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:18:43 crc kubenswrapper[4867]: I0126 11:18:43.256505 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:18:43 crc kubenswrapper[4867]: I0126 11:18:43.256554 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:18:43 crc kubenswrapper[4867]: I0126 11:18:43.256566 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:18:43 crc kubenswrapper[4867]: I0126 11:18:43.256583 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:18:43 crc kubenswrapper[4867]: I0126 11:18:43.256595 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:18:43Z","lastTransitionTime":"2026-01-26T11:18:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:18:43 crc kubenswrapper[4867]: I0126 11:18:43.359981 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:18:43 crc kubenswrapper[4867]: I0126 11:18:43.360047 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:18:43 crc kubenswrapper[4867]: I0126 11:18:43.360063 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:18:43 crc kubenswrapper[4867]: I0126 11:18:43.360088 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:18:43 crc kubenswrapper[4867]: I0126 11:18:43.360102 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:18:43Z","lastTransitionTime":"2026-01-26T11:18:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:18:43 crc kubenswrapper[4867]: I0126 11:18:43.462565 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:18:43 crc kubenswrapper[4867]: I0126 11:18:43.462628 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:18:43 crc kubenswrapper[4867]: I0126 11:18:43.462640 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:18:43 crc kubenswrapper[4867]: I0126 11:18:43.462658 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:18:43 crc kubenswrapper[4867]: I0126 11:18:43.462670 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:18:43Z","lastTransitionTime":"2026-01-26T11:18:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:18:43 crc kubenswrapper[4867]: I0126 11:18:43.562517 4867 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-01 14:07:16.311641669 +0000 UTC Jan 26 11:18:43 crc kubenswrapper[4867]: I0126 11:18:43.565924 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:18:43 crc kubenswrapper[4867]: I0126 11:18:43.565976 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:18:43 crc kubenswrapper[4867]: I0126 11:18:43.565989 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:18:43 crc kubenswrapper[4867]: I0126 11:18:43.566012 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:18:43 crc kubenswrapper[4867]: I0126 11:18:43.566028 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:18:43Z","lastTransitionTime":"2026-01-26T11:18:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:18:43 crc kubenswrapper[4867]: I0126 11:18:43.669538 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:18:43 crc kubenswrapper[4867]: I0126 11:18:43.669614 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:18:43 crc kubenswrapper[4867]: I0126 11:18:43.669628 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:18:43 crc kubenswrapper[4867]: I0126 11:18:43.669651 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:18:43 crc kubenswrapper[4867]: I0126 11:18:43.669666 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:18:43Z","lastTransitionTime":"2026-01-26T11:18:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:18:43 crc kubenswrapper[4867]: I0126 11:18:43.772958 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:18:43 crc kubenswrapper[4867]: I0126 11:18:43.773014 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:18:43 crc kubenswrapper[4867]: I0126 11:18:43.773027 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:18:43 crc kubenswrapper[4867]: I0126 11:18:43.773049 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:18:43 crc kubenswrapper[4867]: I0126 11:18:43.773062 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:18:43Z","lastTransitionTime":"2026-01-26T11:18:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:18:43 crc kubenswrapper[4867]: I0126 11:18:43.876469 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:18:43 crc kubenswrapper[4867]: I0126 11:18:43.876517 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:18:43 crc kubenswrapper[4867]: I0126 11:18:43.876532 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:18:43 crc kubenswrapper[4867]: I0126 11:18:43.876553 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:18:43 crc kubenswrapper[4867]: I0126 11:18:43.876570 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:18:43Z","lastTransitionTime":"2026-01-26T11:18:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:18:43 crc kubenswrapper[4867]: I0126 11:18:43.979783 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:18:43 crc kubenswrapper[4867]: I0126 11:18:43.979837 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:18:43 crc kubenswrapper[4867]: I0126 11:18:43.979849 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:18:43 crc kubenswrapper[4867]: I0126 11:18:43.979869 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:18:43 crc kubenswrapper[4867]: I0126 11:18:43.979885 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:18:43Z","lastTransitionTime":"2026-01-26T11:18:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:18:44 crc kubenswrapper[4867]: I0126 11:18:44.082463 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:18:44 crc kubenswrapper[4867]: I0126 11:18:44.082525 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:18:44 crc kubenswrapper[4867]: I0126 11:18:44.082534 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:18:44 crc kubenswrapper[4867]: I0126 11:18:44.082557 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:18:44 crc kubenswrapper[4867]: I0126 11:18:44.082572 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:18:44Z","lastTransitionTime":"2026-01-26T11:18:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:18:44 crc kubenswrapper[4867]: I0126 11:18:44.185935 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:18:44 crc kubenswrapper[4867]: I0126 11:18:44.186032 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:18:44 crc kubenswrapper[4867]: I0126 11:18:44.186045 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:18:44 crc kubenswrapper[4867]: I0126 11:18:44.186064 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:18:44 crc kubenswrapper[4867]: I0126 11:18:44.186076 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:18:44Z","lastTransitionTime":"2026-01-26T11:18:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:18:44 crc kubenswrapper[4867]: I0126 11:18:44.288770 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:18:44 crc kubenswrapper[4867]: I0126 11:18:44.288825 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:18:44 crc kubenswrapper[4867]: I0126 11:18:44.288839 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:18:44 crc kubenswrapper[4867]: I0126 11:18:44.288863 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:18:44 crc kubenswrapper[4867]: I0126 11:18:44.288878 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:18:44Z","lastTransitionTime":"2026-01-26T11:18:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:18:44 crc kubenswrapper[4867]: I0126 11:18:44.391485 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:18:44 crc kubenswrapper[4867]: I0126 11:18:44.391528 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:18:44 crc kubenswrapper[4867]: I0126 11:18:44.391537 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:18:44 crc kubenswrapper[4867]: I0126 11:18:44.391554 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:18:44 crc kubenswrapper[4867]: I0126 11:18:44.391565 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:18:44Z","lastTransitionTime":"2026-01-26T11:18:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:18:44 crc kubenswrapper[4867]: I0126 11:18:44.495151 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:18:44 crc kubenswrapper[4867]: I0126 11:18:44.495259 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:18:44 crc kubenswrapper[4867]: I0126 11:18:44.495279 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:18:44 crc kubenswrapper[4867]: I0126 11:18:44.495311 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:18:44 crc kubenswrapper[4867]: I0126 11:18:44.495332 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:18:44Z","lastTransitionTime":"2026-01-26T11:18:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 11:18:44 crc kubenswrapper[4867]: I0126 11:18:44.563469 4867 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-07 06:08:50.087657775 +0000 UTC Jan 26 11:18:44 crc kubenswrapper[4867]: I0126 11:18:44.563755 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 11:18:44 crc kubenswrapper[4867]: E0126 11:18:44.563975 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 11:18:44 crc kubenswrapper[4867]: I0126 11:18:44.564582 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 11:18:44 crc kubenswrapper[4867]: I0126 11:18:44.564982 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 11:18:44 crc kubenswrapper[4867]: I0126 11:18:44.565034 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-nmdmx" Jan 26 11:18:44 crc kubenswrapper[4867]: E0126 11:18:44.565854 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-nmdmx" podUID="ed024510-edc6-4306-b54b-63facba64419" Jan 26 11:18:44 crc kubenswrapper[4867]: E0126 11:18:44.566145 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 11:18:44 crc kubenswrapper[4867]: I0126 11:18:44.566375 4867 scope.go:117] "RemoveContainer" containerID="22b7f3763005c5c5c24358d0a42b53a287c54548e8ba4785affb4c90b9c000a2" Jan 26 11:18:44 crc kubenswrapper[4867]: E0126 11:18:44.566388 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 11:18:44 crc kubenswrapper[4867]: E0126 11:18:44.566853 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-p8ngn_openshift-ovn-kubernetes(4a3be637-cf04-4c55-bf72-67fdad83cc44)\"" pod="openshift-ovn-kubernetes/ovnkube-node-p8ngn" podUID="4a3be637-cf04-4c55-bf72-67fdad83cc44" Jan 26 11:18:44 crc kubenswrapper[4867]: I0126 11:18:44.583578 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc"] Jan 26 11:18:44 crc kubenswrapper[4867]: I0126 11:18:44.599465 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:18:44 crc kubenswrapper[4867]: I0126 11:18:44.599526 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:18:44 crc kubenswrapper[4867]: I0126 11:18:44.599551 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:18:44 crc kubenswrapper[4867]: I0126 11:18:44.599581 4867 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:18:44 crc kubenswrapper[4867]: I0126 11:18:44.599609 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:18:44Z","lastTransitionTime":"2026-01-26T11:18:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 11:18:44 crc kubenswrapper[4867]: I0126 11:18:44.703107 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:18:44 crc kubenswrapper[4867]: I0126 11:18:44.703148 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:18:44 crc kubenswrapper[4867]: I0126 11:18:44.703159 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:18:44 crc kubenswrapper[4867]: I0126 11:18:44.703178 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:18:44 crc kubenswrapper[4867]: I0126 11:18:44.703191 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:18:44Z","lastTransitionTime":"2026-01-26T11:18:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:18:44 crc kubenswrapper[4867]: I0126 11:18:44.714623 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:18:44 crc kubenswrapper[4867]: I0126 11:18:44.714665 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:18:44 crc kubenswrapper[4867]: I0126 11:18:44.714679 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:18:44 crc kubenswrapper[4867]: I0126 11:18:44.714695 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:18:44 crc kubenswrapper[4867]: I0126 11:18:44.714706 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:18:44Z","lastTransitionTime":"2026-01-26T11:18:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:18:44 crc kubenswrapper[4867]: E0126 11:18:44.730283 4867 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404548Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865348Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T11:18:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T11:18:44Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T11:18:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T11:18:44Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T11:18:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T11:18:44Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T11:18:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T11:18:44Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b5a0e2ac-5e06-462a-99e0-d57b8e5cb754\\\",\\\"systemUUID\\\":\\\"db81a289-a49c-46ba-99b1-fd2eecfd5410\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:18:44Z is after 2025-08-24T17:21:41Z" Jan 26 11:18:44 crc kubenswrapper[4867]: I0126 11:18:44.733963 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:18:44 crc kubenswrapper[4867]: I0126 11:18:44.734008 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:18:44 crc kubenswrapper[4867]: I0126 11:18:44.734020 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:18:44 crc kubenswrapper[4867]: I0126 11:18:44.734038 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:18:44 crc kubenswrapper[4867]: I0126 11:18:44.734051 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:18:44Z","lastTransitionTime":"2026-01-26T11:18:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:18:44 crc kubenswrapper[4867]: E0126 11:18:44.747276 4867 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404548Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865348Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T11:18:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T11:18:44Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T11:18:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T11:18:44Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T11:18:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T11:18:44Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T11:18:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T11:18:44Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b5a0e2ac-5e06-462a-99e0-d57b8e5cb754\\\",\\\"systemUUID\\\":\\\"db81a289-a49c-46ba-99b1-fd2eecfd5410\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:18:44Z is after 2025-08-24T17:21:41Z" Jan 26 11:18:44 crc kubenswrapper[4867]: I0126 11:18:44.751108 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:18:44 crc kubenswrapper[4867]: I0126 11:18:44.751142 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:18:44 crc kubenswrapper[4867]: I0126 11:18:44.751153 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:18:44 crc kubenswrapper[4867]: I0126 11:18:44.751169 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:18:44 crc kubenswrapper[4867]: I0126 11:18:44.751181 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:18:44Z","lastTransitionTime":"2026-01-26T11:18:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:18:44 crc kubenswrapper[4867]: E0126 11:18:44.768748 4867 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404548Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865348Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T11:18:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T11:18:44Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T11:18:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T11:18:44Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T11:18:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T11:18:44Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T11:18:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T11:18:44Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b5a0e2ac-5e06-462a-99e0-d57b8e5cb754\\\",\\\"systemUUID\\\":\\\"db81a289-a49c-46ba-99b1-fd2eecfd5410\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:18:44Z is after 2025-08-24T17:21:41Z" Jan 26 11:18:44 crc kubenswrapper[4867]: I0126 11:18:44.773040 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:18:44 crc kubenswrapper[4867]: I0126 11:18:44.773094 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:18:44 crc kubenswrapper[4867]: I0126 11:18:44.773110 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:18:44 crc kubenswrapper[4867]: I0126 11:18:44.773130 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:18:44 crc kubenswrapper[4867]: I0126 11:18:44.773145 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:18:44Z","lastTransitionTime":"2026-01-26T11:18:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:18:44 crc kubenswrapper[4867]: E0126 11:18:44.788616 4867 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404548Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865348Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T11:18:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T11:18:44Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T11:18:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T11:18:44Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T11:18:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T11:18:44Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T11:18:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T11:18:44Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b5a0e2ac-5e06-462a-99e0-d57b8e5cb754\\\",\\\"systemUUID\\\":\\\"db81a289-a49c-46ba-99b1-fd2eecfd5410\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:18:44Z is after 2025-08-24T17:21:41Z" Jan 26 11:18:44 crc kubenswrapper[4867]: I0126 11:18:44.793008 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:18:44 crc kubenswrapper[4867]: I0126 11:18:44.793064 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:18:44 crc kubenswrapper[4867]: I0126 11:18:44.793074 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:18:44 crc kubenswrapper[4867]: I0126 11:18:44.793092 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:18:44 crc kubenswrapper[4867]: I0126 11:18:44.793109 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:18:44Z","lastTransitionTime":"2026-01-26T11:18:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:18:44 crc kubenswrapper[4867]: E0126 11:18:44.805659 4867 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404548Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865348Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T11:18:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T11:18:44Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T11:18:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T11:18:44Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T11:18:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T11:18:44Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T11:18:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T11:18:44Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b5a0e2ac-5e06-462a-99e0-d57b8e5cb754\\\",\\\"systemUUID\\\":\\\"db81a289-a49c-46ba-99b1-fd2eecfd5410\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:18:44Z is after 2025-08-24T17:21:41Z" Jan 26 11:18:44 crc kubenswrapper[4867]: E0126 11:18:44.805829 4867 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 26 11:18:44 crc kubenswrapper[4867]: I0126 11:18:44.807850 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:18:44 crc kubenswrapper[4867]: I0126 11:18:44.807894 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:18:44 crc kubenswrapper[4867]: I0126 11:18:44.807912 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:18:44 crc kubenswrapper[4867]: I0126 11:18:44.807938 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:18:44 crc kubenswrapper[4867]: I0126 11:18:44.807957 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:18:44Z","lastTransitionTime":"2026-01-26T11:18:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:18:44 crc kubenswrapper[4867]: I0126 11:18:44.910572 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:18:44 crc kubenswrapper[4867]: I0126 11:18:44.910620 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:18:44 crc kubenswrapper[4867]: I0126 11:18:44.910634 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:18:44 crc kubenswrapper[4867]: I0126 11:18:44.910653 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:18:44 crc kubenswrapper[4867]: I0126 11:18:44.910671 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:18:44Z","lastTransitionTime":"2026-01-26T11:18:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:18:45 crc kubenswrapper[4867]: I0126 11:18:45.013282 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:18:45 crc kubenswrapper[4867]: I0126 11:18:45.013339 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:18:45 crc kubenswrapper[4867]: I0126 11:18:45.013349 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:18:45 crc kubenswrapper[4867]: I0126 11:18:45.013371 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:18:45 crc kubenswrapper[4867]: I0126 11:18:45.013384 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:18:45Z","lastTransitionTime":"2026-01-26T11:18:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:18:45 crc kubenswrapper[4867]: I0126 11:18:45.116827 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:18:45 crc kubenswrapper[4867]: I0126 11:18:45.116927 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:18:45 crc kubenswrapper[4867]: I0126 11:18:45.116954 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:18:45 crc kubenswrapper[4867]: I0126 11:18:45.116987 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:18:45 crc kubenswrapper[4867]: I0126 11:18:45.117053 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:18:45Z","lastTransitionTime":"2026-01-26T11:18:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:18:45 crc kubenswrapper[4867]: I0126 11:18:45.219312 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:18:45 crc kubenswrapper[4867]: I0126 11:18:45.219361 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:18:45 crc kubenswrapper[4867]: I0126 11:18:45.219374 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:18:45 crc kubenswrapper[4867]: I0126 11:18:45.219389 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:18:45 crc kubenswrapper[4867]: I0126 11:18:45.219400 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:18:45Z","lastTransitionTime":"2026-01-26T11:18:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:18:45 crc kubenswrapper[4867]: I0126 11:18:45.321142 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:18:45 crc kubenswrapper[4867]: I0126 11:18:45.321179 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:18:45 crc kubenswrapper[4867]: I0126 11:18:45.321190 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:18:45 crc kubenswrapper[4867]: I0126 11:18:45.321209 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:18:45 crc kubenswrapper[4867]: I0126 11:18:45.321250 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:18:45Z","lastTransitionTime":"2026-01-26T11:18:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:18:45 crc kubenswrapper[4867]: I0126 11:18:45.424359 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:18:45 crc kubenswrapper[4867]: I0126 11:18:45.424417 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:18:45 crc kubenswrapper[4867]: I0126 11:18:45.424428 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:18:45 crc kubenswrapper[4867]: I0126 11:18:45.424453 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:18:45 crc kubenswrapper[4867]: I0126 11:18:45.424466 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:18:45Z","lastTransitionTime":"2026-01-26T11:18:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:18:45 crc kubenswrapper[4867]: I0126 11:18:45.528315 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:18:45 crc kubenswrapper[4867]: I0126 11:18:45.528358 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:18:45 crc kubenswrapper[4867]: I0126 11:18:45.528368 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:18:45 crc kubenswrapper[4867]: I0126 11:18:45.528385 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:18:45 crc kubenswrapper[4867]: I0126 11:18:45.528395 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:18:45Z","lastTransitionTime":"2026-01-26T11:18:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:18:45 crc kubenswrapper[4867]: I0126 11:18:45.564302 4867 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-02 05:20:02.946737974 +0000 UTC Jan 26 11:18:45 crc kubenswrapper[4867]: I0126 11:18:45.631661 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:18:45 crc kubenswrapper[4867]: I0126 11:18:45.631706 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:18:45 crc kubenswrapper[4867]: I0126 11:18:45.631718 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:18:45 crc kubenswrapper[4867]: I0126 11:18:45.631734 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:18:45 crc kubenswrapper[4867]: I0126 11:18:45.631744 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:18:45Z","lastTransitionTime":"2026-01-26T11:18:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:18:45 crc kubenswrapper[4867]: I0126 11:18:45.735177 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:18:45 crc kubenswrapper[4867]: I0126 11:18:45.735241 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:18:45 crc kubenswrapper[4867]: I0126 11:18:45.735251 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:18:45 crc kubenswrapper[4867]: I0126 11:18:45.735270 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:18:45 crc kubenswrapper[4867]: I0126 11:18:45.735280 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:18:45Z","lastTransitionTime":"2026-01-26T11:18:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:18:45 crc kubenswrapper[4867]: I0126 11:18:45.838447 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:18:45 crc kubenswrapper[4867]: I0126 11:18:45.838508 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:18:45 crc kubenswrapper[4867]: I0126 11:18:45.838522 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:18:45 crc kubenswrapper[4867]: I0126 11:18:45.838543 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:18:45 crc kubenswrapper[4867]: I0126 11:18:45.838557 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:18:45Z","lastTransitionTime":"2026-01-26T11:18:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:18:45 crc kubenswrapper[4867]: I0126 11:18:45.941499 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:18:45 crc kubenswrapper[4867]: I0126 11:18:45.941686 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:18:45 crc kubenswrapper[4867]: I0126 11:18:45.941702 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:18:45 crc kubenswrapper[4867]: I0126 11:18:45.941723 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:18:45 crc kubenswrapper[4867]: I0126 11:18:45.941738 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:18:45Z","lastTransitionTime":"2026-01-26T11:18:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:18:46 crc kubenswrapper[4867]: I0126 11:18:46.044863 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:18:46 crc kubenswrapper[4867]: I0126 11:18:46.044916 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:18:46 crc kubenswrapper[4867]: I0126 11:18:46.044925 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:18:46 crc kubenswrapper[4867]: I0126 11:18:46.044943 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:18:46 crc kubenswrapper[4867]: I0126 11:18:46.044957 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:18:46Z","lastTransitionTime":"2026-01-26T11:18:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:18:46 crc kubenswrapper[4867]: I0126 11:18:46.147056 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:18:46 crc kubenswrapper[4867]: I0126 11:18:46.147105 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:18:46 crc kubenswrapper[4867]: I0126 11:18:46.147116 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:18:46 crc kubenswrapper[4867]: I0126 11:18:46.147133 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:18:46 crc kubenswrapper[4867]: I0126 11:18:46.147146 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:18:46Z","lastTransitionTime":"2026-01-26T11:18:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:18:46 crc kubenswrapper[4867]: I0126 11:18:46.250207 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:18:46 crc kubenswrapper[4867]: I0126 11:18:46.250276 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:18:46 crc kubenswrapper[4867]: I0126 11:18:46.250288 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:18:46 crc kubenswrapper[4867]: I0126 11:18:46.250306 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:18:46 crc kubenswrapper[4867]: I0126 11:18:46.250318 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:18:46Z","lastTransitionTime":"2026-01-26T11:18:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:18:46 crc kubenswrapper[4867]: I0126 11:18:46.353441 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:18:46 crc kubenswrapper[4867]: I0126 11:18:46.353501 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:18:46 crc kubenswrapper[4867]: I0126 11:18:46.353515 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:18:46 crc kubenswrapper[4867]: I0126 11:18:46.353537 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:18:46 crc kubenswrapper[4867]: I0126 11:18:46.353551 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:18:46Z","lastTransitionTime":"2026-01-26T11:18:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:18:46 crc kubenswrapper[4867]: I0126 11:18:46.457080 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:18:46 crc kubenswrapper[4867]: I0126 11:18:46.457135 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:18:46 crc kubenswrapper[4867]: I0126 11:18:46.457146 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:18:46 crc kubenswrapper[4867]: I0126 11:18:46.457168 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:18:46 crc kubenswrapper[4867]: I0126 11:18:46.457182 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:18:46Z","lastTransitionTime":"2026-01-26T11:18:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:18:46 crc kubenswrapper[4867]: I0126 11:18:46.559496 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:18:46 crc kubenswrapper[4867]: I0126 11:18:46.559546 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:18:46 crc kubenswrapper[4867]: I0126 11:18:46.559560 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:18:46 crc kubenswrapper[4867]: I0126 11:18:46.559580 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:18:46 crc kubenswrapper[4867]: I0126 11:18:46.559592 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:18:46Z","lastTransitionTime":"2026-01-26T11:18:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 11:18:46 crc kubenswrapper[4867]: I0126 11:18:46.563031 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 11:18:46 crc kubenswrapper[4867]: I0126 11:18:46.563043 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 11:18:46 crc kubenswrapper[4867]: I0126 11:18:46.563056 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-nmdmx" Jan 26 11:18:46 crc kubenswrapper[4867]: I0126 11:18:46.563116 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 11:18:46 crc kubenswrapper[4867]: E0126 11:18:46.563270 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 11:18:46 crc kubenswrapper[4867]: E0126 11:18:46.563371 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 11:18:46 crc kubenswrapper[4867]: E0126 11:18:46.563442 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-nmdmx" podUID="ed024510-edc6-4306-b54b-63facba64419" Jan 26 11:18:46 crc kubenswrapper[4867]: E0126 11:18:46.563975 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 11:18:46 crc kubenswrapper[4867]: I0126 11:18:46.567515 4867 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-19 00:24:05.810100462 +0000 UTC Jan 26 11:18:46 crc kubenswrapper[4867]: I0126 11:18:46.661916 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:18:46 crc kubenswrapper[4867]: I0126 11:18:46.661979 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:18:46 crc kubenswrapper[4867]: I0126 11:18:46.661992 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:18:46 crc kubenswrapper[4867]: I0126 11:18:46.662013 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:18:46 crc kubenswrapper[4867]: I0126 11:18:46.662028 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:18:46Z","lastTransitionTime":"2026-01-26T11:18:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:18:46 crc kubenswrapper[4867]: I0126 11:18:46.765579 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:18:46 crc kubenswrapper[4867]: I0126 11:18:46.765632 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:18:46 crc kubenswrapper[4867]: I0126 11:18:46.765647 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:18:46 crc kubenswrapper[4867]: I0126 11:18:46.765665 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:18:46 crc kubenswrapper[4867]: I0126 11:18:46.765676 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:18:46Z","lastTransitionTime":"2026-01-26T11:18:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:18:46 crc kubenswrapper[4867]: I0126 11:18:46.868431 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:18:46 crc kubenswrapper[4867]: I0126 11:18:46.868475 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:18:46 crc kubenswrapper[4867]: I0126 11:18:46.868485 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:18:46 crc kubenswrapper[4867]: I0126 11:18:46.868504 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:18:46 crc kubenswrapper[4867]: I0126 11:18:46.868518 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:18:46Z","lastTransitionTime":"2026-01-26T11:18:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:18:46 crc kubenswrapper[4867]: I0126 11:18:46.971752 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:18:46 crc kubenswrapper[4867]: I0126 11:18:46.971846 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:18:46 crc kubenswrapper[4867]: I0126 11:18:46.971858 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:18:46 crc kubenswrapper[4867]: I0126 11:18:46.971880 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:18:46 crc kubenswrapper[4867]: I0126 11:18:46.971896 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:18:46Z","lastTransitionTime":"2026-01-26T11:18:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:18:47 crc kubenswrapper[4867]: I0126 11:18:47.074996 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:18:47 crc kubenswrapper[4867]: I0126 11:18:47.075066 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:18:47 crc kubenswrapper[4867]: I0126 11:18:47.075081 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:18:47 crc kubenswrapper[4867]: I0126 11:18:47.075101 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:18:47 crc kubenswrapper[4867]: I0126 11:18:47.075114 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:18:47Z","lastTransitionTime":"2026-01-26T11:18:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:18:47 crc kubenswrapper[4867]: I0126 11:18:47.178185 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:18:47 crc kubenswrapper[4867]: I0126 11:18:47.178305 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:18:47 crc kubenswrapper[4867]: I0126 11:18:47.178319 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:18:47 crc kubenswrapper[4867]: I0126 11:18:47.178344 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:18:47 crc kubenswrapper[4867]: I0126 11:18:47.178356 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:18:47Z","lastTransitionTime":"2026-01-26T11:18:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:18:47 crc kubenswrapper[4867]: I0126 11:18:47.282123 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:18:47 crc kubenswrapper[4867]: I0126 11:18:47.282178 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:18:47 crc kubenswrapper[4867]: I0126 11:18:47.282197 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:18:47 crc kubenswrapper[4867]: I0126 11:18:47.282259 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:18:47 crc kubenswrapper[4867]: I0126 11:18:47.282286 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:18:47Z","lastTransitionTime":"2026-01-26T11:18:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:18:47 crc kubenswrapper[4867]: I0126 11:18:47.385588 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:18:47 crc kubenswrapper[4867]: I0126 11:18:47.385640 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:18:47 crc kubenswrapper[4867]: I0126 11:18:47.385650 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:18:47 crc kubenswrapper[4867]: I0126 11:18:47.385669 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:18:47 crc kubenswrapper[4867]: I0126 11:18:47.385684 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:18:47Z","lastTransitionTime":"2026-01-26T11:18:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:18:47 crc kubenswrapper[4867]: I0126 11:18:47.491174 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:18:47 crc kubenswrapper[4867]: I0126 11:18:47.491237 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:18:47 crc kubenswrapper[4867]: I0126 11:18:47.491249 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:18:47 crc kubenswrapper[4867]: I0126 11:18:47.491270 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:18:47 crc kubenswrapper[4867]: I0126 11:18:47.491282 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:18:47Z","lastTransitionTime":"2026-01-26T11:18:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:18:47 crc kubenswrapper[4867]: I0126 11:18:47.567902 4867 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-24 09:03:16.702054773 +0000 UTC Jan 26 11:18:47 crc kubenswrapper[4867]: I0126 11:18:47.594288 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:18:47 crc kubenswrapper[4867]: I0126 11:18:47.594333 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:18:47 crc kubenswrapper[4867]: I0126 11:18:47.594343 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:18:47 crc kubenswrapper[4867]: I0126 11:18:47.594361 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:18:47 crc kubenswrapper[4867]: I0126 11:18:47.594372 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:18:47Z","lastTransitionTime":"2026-01-26T11:18:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:18:47 crc kubenswrapper[4867]: I0126 11:18:47.697572 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:18:47 crc kubenswrapper[4867]: I0126 11:18:47.697618 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:18:47 crc kubenswrapper[4867]: I0126 11:18:47.697630 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:18:47 crc kubenswrapper[4867]: I0126 11:18:47.697649 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:18:47 crc kubenswrapper[4867]: I0126 11:18:47.697662 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:18:47Z","lastTransitionTime":"2026-01-26T11:18:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:18:47 crc kubenswrapper[4867]: I0126 11:18:47.800751 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:18:47 crc kubenswrapper[4867]: I0126 11:18:47.800826 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:18:47 crc kubenswrapper[4867]: I0126 11:18:47.800850 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:18:47 crc kubenswrapper[4867]: I0126 11:18:47.800922 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:18:47 crc kubenswrapper[4867]: I0126 11:18:47.800943 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:18:47Z","lastTransitionTime":"2026-01-26T11:18:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:18:47 crc kubenswrapper[4867]: I0126 11:18:47.904090 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:18:47 crc kubenswrapper[4867]: I0126 11:18:47.904432 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:18:47 crc kubenswrapper[4867]: I0126 11:18:47.904622 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:18:47 crc kubenswrapper[4867]: I0126 11:18:47.904758 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:18:47 crc kubenswrapper[4867]: I0126 11:18:47.904891 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:18:47Z","lastTransitionTime":"2026-01-26T11:18:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:18:48 crc kubenswrapper[4867]: I0126 11:18:48.008305 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:18:48 crc kubenswrapper[4867]: I0126 11:18:48.008643 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:18:48 crc kubenswrapper[4867]: I0126 11:18:48.008706 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:18:48 crc kubenswrapper[4867]: I0126 11:18:48.008797 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:18:48 crc kubenswrapper[4867]: I0126 11:18:48.008869 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:18:48Z","lastTransitionTime":"2026-01-26T11:18:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:18:48 crc kubenswrapper[4867]: I0126 11:18:48.111315 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:18:48 crc kubenswrapper[4867]: I0126 11:18:48.111352 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:18:48 crc kubenswrapper[4867]: I0126 11:18:48.111361 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:18:48 crc kubenswrapper[4867]: I0126 11:18:48.111374 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:18:48 crc kubenswrapper[4867]: I0126 11:18:48.111383 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:18:48Z","lastTransitionTime":"2026-01-26T11:18:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:18:48 crc kubenswrapper[4867]: I0126 11:18:48.215109 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:18:48 crc kubenswrapper[4867]: I0126 11:18:48.215164 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:18:48 crc kubenswrapper[4867]: I0126 11:18:48.215173 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:18:48 crc kubenswrapper[4867]: I0126 11:18:48.215191 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:18:48 crc kubenswrapper[4867]: I0126 11:18:48.215202 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:18:48Z","lastTransitionTime":"2026-01-26T11:18:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:18:48 crc kubenswrapper[4867]: I0126 11:18:48.318532 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:18:48 crc kubenswrapper[4867]: I0126 11:18:48.318601 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:18:48 crc kubenswrapper[4867]: I0126 11:18:48.318625 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:18:48 crc kubenswrapper[4867]: I0126 11:18:48.318658 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:18:48 crc kubenswrapper[4867]: I0126 11:18:48.318682 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:18:48Z","lastTransitionTime":"2026-01-26T11:18:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:18:48 crc kubenswrapper[4867]: I0126 11:18:48.421472 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:18:48 crc kubenswrapper[4867]: I0126 11:18:48.421521 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:18:48 crc kubenswrapper[4867]: I0126 11:18:48.421535 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:18:48 crc kubenswrapper[4867]: I0126 11:18:48.421556 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:18:48 crc kubenswrapper[4867]: I0126 11:18:48.421572 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:18:48Z","lastTransitionTime":"2026-01-26T11:18:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:18:48 crc kubenswrapper[4867]: I0126 11:18:48.524792 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:18:48 crc kubenswrapper[4867]: I0126 11:18:48.524853 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:18:48 crc kubenswrapper[4867]: I0126 11:18:48.524878 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:18:48 crc kubenswrapper[4867]: I0126 11:18:48.524908 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:18:48 crc kubenswrapper[4867]: I0126 11:18:48.524929 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:18:48Z","lastTransitionTime":"2026-01-26T11:18:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 11:18:48 crc kubenswrapper[4867]: I0126 11:18:48.563038 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 11:18:48 crc kubenswrapper[4867]: I0126 11:18:48.563114 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-nmdmx" Jan 26 11:18:48 crc kubenswrapper[4867]: I0126 11:18:48.563193 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 11:18:48 crc kubenswrapper[4867]: E0126 11:18:48.563198 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 11:18:48 crc kubenswrapper[4867]: I0126 11:18:48.563309 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 11:18:48 crc kubenswrapper[4867]: E0126 11:18:48.563401 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-nmdmx" podUID="ed024510-edc6-4306-b54b-63facba64419" Jan 26 11:18:48 crc kubenswrapper[4867]: E0126 11:18:48.563458 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 11:18:48 crc kubenswrapper[4867]: E0126 11:18:48.563555 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 11:18:48 crc kubenswrapper[4867]: I0126 11:18:48.568185 4867 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-25 02:36:05.66509079 +0000 UTC Jan 26 11:18:48 crc kubenswrapper[4867]: I0126 11:18:48.627982 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:18:48 crc kubenswrapper[4867]: I0126 11:18:48.628484 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:18:48 crc kubenswrapper[4867]: I0126 11:18:48.628709 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:18:48 crc kubenswrapper[4867]: I0126 11:18:48.628927 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:18:48 crc kubenswrapper[4867]: I0126 11:18:48.629169 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:18:48Z","lastTransitionTime":"2026-01-26T11:18:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:18:48 crc kubenswrapper[4867]: I0126 11:18:48.731651 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:18:48 crc kubenswrapper[4867]: I0126 11:18:48.731709 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:18:48 crc kubenswrapper[4867]: I0126 11:18:48.731727 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:18:48 crc kubenswrapper[4867]: I0126 11:18:48.731750 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:18:48 crc kubenswrapper[4867]: I0126 11:18:48.731766 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:18:48Z","lastTransitionTime":"2026-01-26T11:18:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:18:48 crc kubenswrapper[4867]: I0126 11:18:48.835972 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:18:48 crc kubenswrapper[4867]: I0126 11:18:48.836382 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:18:48 crc kubenswrapper[4867]: I0126 11:18:48.836517 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:18:48 crc kubenswrapper[4867]: I0126 11:18:48.836650 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:18:48 crc kubenswrapper[4867]: I0126 11:18:48.836735 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:18:48Z","lastTransitionTime":"2026-01-26T11:18:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:18:48 crc kubenswrapper[4867]: I0126 11:18:48.940856 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:18:48 crc kubenswrapper[4867]: I0126 11:18:48.941293 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:18:48 crc kubenswrapper[4867]: I0126 11:18:48.941468 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:18:48 crc kubenswrapper[4867]: I0126 11:18:48.941656 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:18:48 crc kubenswrapper[4867]: I0126 11:18:48.941793 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:18:48Z","lastTransitionTime":"2026-01-26T11:18:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:18:49 crc kubenswrapper[4867]: I0126 11:18:49.044998 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:18:49 crc kubenswrapper[4867]: I0126 11:18:49.045036 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:18:49 crc kubenswrapper[4867]: I0126 11:18:49.045049 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:18:49 crc kubenswrapper[4867]: I0126 11:18:49.045067 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:18:49 crc kubenswrapper[4867]: I0126 11:18:49.045080 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:18:49Z","lastTransitionTime":"2026-01-26T11:18:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:18:49 crc kubenswrapper[4867]: I0126 11:18:49.151539 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:18:49 crc kubenswrapper[4867]: I0126 11:18:49.151632 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:18:49 crc kubenswrapper[4867]: I0126 11:18:49.151809 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:18:49 crc kubenswrapper[4867]: I0126 11:18:49.151893 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:18:49 crc kubenswrapper[4867]: I0126 11:18:49.151914 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:18:49Z","lastTransitionTime":"2026-01-26T11:18:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:18:49 crc kubenswrapper[4867]: I0126 11:18:49.255574 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:18:49 crc kubenswrapper[4867]: I0126 11:18:49.255644 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:18:49 crc kubenswrapper[4867]: I0126 11:18:49.255661 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:18:49 crc kubenswrapper[4867]: I0126 11:18:49.255684 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:18:49 crc kubenswrapper[4867]: I0126 11:18:49.255699 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:18:49Z","lastTransitionTime":"2026-01-26T11:18:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:18:49 crc kubenswrapper[4867]: I0126 11:18:49.358524 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:18:49 crc kubenswrapper[4867]: I0126 11:18:49.358591 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:18:49 crc kubenswrapper[4867]: I0126 11:18:49.358617 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:18:49 crc kubenswrapper[4867]: I0126 11:18:49.358651 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:18:49 crc kubenswrapper[4867]: I0126 11:18:49.358676 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:18:49Z","lastTransitionTime":"2026-01-26T11:18:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:18:49 crc kubenswrapper[4867]: I0126 11:18:49.461925 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:18:49 crc kubenswrapper[4867]: I0126 11:18:49.461990 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:18:49 crc kubenswrapper[4867]: I0126 11:18:49.462006 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:18:49 crc kubenswrapper[4867]: I0126 11:18:49.462033 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:18:49 crc kubenswrapper[4867]: I0126 11:18:49.462049 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:18:49Z","lastTransitionTime":"2026-01-26T11:18:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:18:49 crc kubenswrapper[4867]: I0126 11:18:49.564999 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:18:49 crc kubenswrapper[4867]: I0126 11:18:49.565068 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:18:49 crc kubenswrapper[4867]: I0126 11:18:49.565087 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:18:49 crc kubenswrapper[4867]: I0126 11:18:49.565112 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:18:49 crc kubenswrapper[4867]: I0126 11:18:49.565132 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:18:49Z","lastTransitionTime":"2026-01-26T11:18:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:18:49 crc kubenswrapper[4867]: I0126 11:18:49.569301 4867 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-24 04:27:46.102828932 +0000 UTC Jan 26 11:18:49 crc kubenswrapper[4867]: I0126 11:18:49.668488 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:18:49 crc kubenswrapper[4867]: I0126 11:18:49.668770 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:18:49 crc kubenswrapper[4867]: I0126 11:18:49.668789 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:18:49 crc kubenswrapper[4867]: I0126 11:18:49.668815 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:18:49 crc kubenswrapper[4867]: I0126 11:18:49.668839 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:18:49Z","lastTransitionTime":"2026-01-26T11:18:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:18:49 crc kubenswrapper[4867]: I0126 11:18:49.772793 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:18:49 crc kubenswrapper[4867]: I0126 11:18:49.773091 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:18:49 crc kubenswrapper[4867]: I0126 11:18:49.773154 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:18:49 crc kubenswrapper[4867]: I0126 11:18:49.773183 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:18:49 crc kubenswrapper[4867]: I0126 11:18:49.773200 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:18:49Z","lastTransitionTime":"2026-01-26T11:18:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:18:49 crc kubenswrapper[4867]: I0126 11:18:49.876571 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:18:49 crc kubenswrapper[4867]: I0126 11:18:49.876643 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:18:49 crc kubenswrapper[4867]: I0126 11:18:49.876770 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:18:49 crc kubenswrapper[4867]: I0126 11:18:49.876812 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:18:49 crc kubenswrapper[4867]: I0126 11:18:49.876836 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:18:49Z","lastTransitionTime":"2026-01-26T11:18:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:18:49 crc kubenswrapper[4867]: I0126 11:18:49.980947 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:18:49 crc kubenswrapper[4867]: I0126 11:18:49.981009 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:18:49 crc kubenswrapper[4867]: I0126 11:18:49.981028 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:18:49 crc kubenswrapper[4867]: I0126 11:18:49.981065 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:18:49 crc kubenswrapper[4867]: I0126 11:18:49.981115 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:18:49Z","lastTransitionTime":"2026-01-26T11:18:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:18:50 crc kubenswrapper[4867]: I0126 11:18:50.084193 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:18:50 crc kubenswrapper[4867]: I0126 11:18:50.084291 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:18:50 crc kubenswrapper[4867]: I0126 11:18:50.084315 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:18:50 crc kubenswrapper[4867]: I0126 11:18:50.084343 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:18:50 crc kubenswrapper[4867]: I0126 11:18:50.084363 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:18:50Z","lastTransitionTime":"2026-01-26T11:18:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:18:50 crc kubenswrapper[4867]: I0126 11:18:50.187859 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:18:50 crc kubenswrapper[4867]: I0126 11:18:50.187927 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:18:50 crc kubenswrapper[4867]: I0126 11:18:50.187945 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:18:50 crc kubenswrapper[4867]: I0126 11:18:50.187972 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:18:50 crc kubenswrapper[4867]: I0126 11:18:50.187996 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:18:50Z","lastTransitionTime":"2026-01-26T11:18:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:18:50 crc kubenswrapper[4867]: I0126 11:18:50.290842 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:18:50 crc kubenswrapper[4867]: I0126 11:18:50.290903 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:18:50 crc kubenswrapper[4867]: I0126 11:18:50.290920 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:18:50 crc kubenswrapper[4867]: I0126 11:18:50.290944 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:18:50 crc kubenswrapper[4867]: I0126 11:18:50.290963 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:18:50Z","lastTransitionTime":"2026-01-26T11:18:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:18:50 crc kubenswrapper[4867]: I0126 11:18:50.393926 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:18:50 crc kubenswrapper[4867]: I0126 11:18:50.393957 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:18:50 crc kubenswrapper[4867]: I0126 11:18:50.393970 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:18:50 crc kubenswrapper[4867]: I0126 11:18:50.393985 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:18:50 crc kubenswrapper[4867]: I0126 11:18:50.393996 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:18:50Z","lastTransitionTime":"2026-01-26T11:18:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:18:50 crc kubenswrapper[4867]: I0126 11:18:50.499186 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:18:50 crc kubenswrapper[4867]: I0126 11:18:50.499305 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:18:50 crc kubenswrapper[4867]: I0126 11:18:50.499330 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:18:50 crc kubenswrapper[4867]: I0126 11:18:50.499362 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:18:50 crc kubenswrapper[4867]: I0126 11:18:50.499380 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:18:50Z","lastTransitionTime":"2026-01-26T11:18:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 11:18:50 crc kubenswrapper[4867]: I0126 11:18:50.563283 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 11:18:50 crc kubenswrapper[4867]: I0126 11:18:50.563382 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-nmdmx" Jan 26 11:18:50 crc kubenswrapper[4867]: I0126 11:18:50.563412 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 11:18:50 crc kubenswrapper[4867]: I0126 11:18:50.563736 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 11:18:50 crc kubenswrapper[4867]: E0126 11:18:50.563732 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 11:18:50 crc kubenswrapper[4867]: E0126 11:18:50.563992 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 11:18:50 crc kubenswrapper[4867]: E0126 11:18:50.563957 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-nmdmx" podUID="ed024510-edc6-4306-b54b-63facba64419" Jan 26 11:18:50 crc kubenswrapper[4867]: E0126 11:18:50.564035 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 11:18:50 crc kubenswrapper[4867]: I0126 11:18:50.569823 4867 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-21 06:44:12.65114208 +0000 UTC Jan 26 11:18:50 crc kubenswrapper[4867]: I0126 11:18:50.576122 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-nxkwj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"53ae46dc-30ef-4dfd-b80e-bacd7542634f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a6b647bab472836bbf6aebd01d20d186c5a3fb95f20cc9f44ec837d93c7df617\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\
\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b5h2x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T11:17:54Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-nxkwj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:18:50Z is after 2025-08-24T17:21:41Z" Jan 26 11:18:50 crc kubenswrapper[4867]: I0126 11:18:50.590036 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-nmdmx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed024510-edc6-4306-b54b-63facba64419\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:18:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:18:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:18:06Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:18:06Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lvcgj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lvcgj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T11:18:06Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-nmdmx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:18:50Z is after 2025-08-24T17:21:41Z" Jan 26 11:18:50 crc 
kubenswrapper[4867]: I0126 11:18:50.602929 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:18:50 crc kubenswrapper[4867]: I0126 11:18:50.603013 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:18:50 crc kubenswrapper[4867]: I0126 11:18:50.603040 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:18:50 crc kubenswrapper[4867]: I0126 11:18:50.603102 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:18:50 crc kubenswrapper[4867]: I0126 11:18:50.603130 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:18:50Z","lastTransitionTime":"2026-01-26T11:18:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:18:50 crc kubenswrapper[4867]: I0126 11:18:50.603439 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:48Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:18:50Z is after 2025-08-24T17:21:41Z" Jan 26 11:18:50 crc kubenswrapper[4867]: I0126 11:18:50.616916 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:54Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:54Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8b0139861b48347f9916ca780499486eda856ef7c08cfe6c57201daf085bf61\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-26T11:18:50Z is after 2025-08-24T17:21:41Z" Jan 26 11:18:50 crc kubenswrapper[4867]: I0126 11:18:50.635604 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-9fjlf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d0cb57c7-fd32-41c2-b873-a3f017b9f1b1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:18:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:18:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:18:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c7012fb0651d46334c26887a02a5c44a8fc67c2ad3539e5321e16b57071b9a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:18:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-a
ccess-rhrdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://49541a51fced16e579b4cd10875fe7e660d7674508aba3ea44011642241e6556\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://49541a51fced16e579b4cd10875fe7e660d7674508aba3ea44011642241e6556\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T11:17:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T11:17:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhrdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://de0898b43ff7857dd40569f0552c8802c9bb5ecdf3cbd23f552dfb402a02044c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"
started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://de0898b43ff7857dd40569f0552c8802c9bb5ecdf3cbd23f552dfb402a02044c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T11:17:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T11:17:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhrdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f299de50c4b1a26c4ed82f6f7b17c063e00b8a9de478fa0adafb4abb50195c5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f299de50c4b1a26c4ed82f6f7b17c063e00b8a9de478fa0adafb4abb50195c5f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T11:17:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T11:17:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\
\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhrdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5bf54cdee66861e0419af5e4cabd6e8410b771df717f2e958f5eda8d14c47862\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5bf54cdee66861e0419af5e4cabd6e8410b771df717f2e958f5eda8d14c47862\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T11:17:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T11:17:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhrdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d802b7e0dce47eace09ec80bf6299fbe57b588891ebab7f34de835f6a7314ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabout
s-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3d802b7e0dce47eace09ec80bf6299fbe57b588891ebab7f34de835f6a7314ac\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T11:17:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T11:17:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhrdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e77fa1081a2d8d26fe07f92196b85dd5b8eb28583ce5489e2f5cbcfc6fff8780\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e77fa1081a2d8d26fe07f92196b85dd5b8eb28583ce5489e2f5cbcfc6fff8780\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T11:18:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T11:18:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhrdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T11:17:52Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-9fjlf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:18:50Z is after 2025-08-24T17:21:41Z" Jan 26 11:18:50 crc kubenswrapper[4867]: I0126 11:18:50.657441 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4fbb2033-c5c9-48d1-971f-252b8982e64c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b260f2517962ffaecebf86c2b72c91486b825ca898f907378e8f8cea16d1db92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e
9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6843bd212ce4f17d2ae4d53441d56f7be28f8f49c8994458d611159101e193a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2aba77c7c6a2f5450fa60356c3eccf816726f0074d65a1d5805c4278d31157c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e941db67a8021f5eb4eb6e395c2369d052a4cde25d67eb1885768a064185d8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc9e90f1b0e6c09e4595a7e450325550012dc52c6a6fc4a95d14b37c0f939ce7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.
168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e22739a3c06bddfda493f25aca23cf47d71c8bde27a6b4bb505592b42350752\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0e22739a3c06bddfda493f25aca23cf47d71c8bde27a6b4bb505592b42350752\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T11:17:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T11:17:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://80b44caec3547caebca126742ca337ad1851edc3e9474127528a9e5184b5ec23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://80b44caec3547caebca126742ca337ad1851edc3e9474127528a9e5184b5ec23\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T11:17:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T11:17:32Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://83f7cf77acd26e7af095c4a5432efde85eef6dd5e434141cb5d6abd12770968e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b
54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://83f7cf77acd26e7af095c4a5432efde85eef6dd5e434141cb5d6abd12770968e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T11:17:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T11:17:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T11:17:30Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:18:50Z is after 2025-08-24T17:21:41Z" Jan 26 11:18:50 crc kubenswrapper[4867]: I0126 11:18:50.672198 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-g6cth" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"115cad9f-057f-4e63-b408-8fa7a358a191\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ff5997c5ecb7b6a4257e45de25c7b8f3cbe39cc5efe49cf2e2baff5447a947b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4wqzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a97a794991491a7b7ac8c0d20d0fa2f4f36c8ba8
82962b5c203933431d324105\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4wqzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T11:17:52Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-g6cth\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:18:50Z is after 2025-08-24T17:21:41Z" Jan 26 11:18:50 crc kubenswrapper[4867]: I0126 11:18:50.691484 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-p8ngn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4a3be637-cf04-4c55-bf72-67fdad83cc44\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:52Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:52Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c0a7bea27cb1d32235548b7f7e457dbecaf0ac88bdd9866302c758680990d4af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cn6f7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9025121ad416b7b6ff2f2eb7ecb3961eccbbae7b5ce4b394161ad99c5901199\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cn6f7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://adb68b135bd42f78556598af17aa21dc7daff1246e821ca51cb683e46974a85a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cn6f7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://30fe31f1d314dace6e6ee0bc31b2127ce6e6db198975c78505b0f6f07aa6f30c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:55Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cn6f7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://99df8b0b5fa7178489716f40f806caff969c5045ab573ae343b222ed80bf0799\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cn6f7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://192f11212d63cdabfda1649589946ca65e5c4a38805c85b94e4277f2e8c7bf57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cn6f7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://22b7f3763005c5c5c24358d0a42b53a287c54548e8ba4785affb4c90b9c000a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://22b7f3763005c5c5c24358d0a42b53a287c54548e8ba4785affb4c90b9c000a2\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T11:18:31Z\\\",\\\"message\\\":\\\"ersions/factory.go:117\\\\nI0126 11:18:31.446431 6551 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0126 11:18:31.446674 6551 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0126 
11:18:31.446691 6551 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0126 11:18:31.446703 6551 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0126 11:18:31.446732 6551 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0126 11:18:31.446742 6551 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0126 11:18:31.446764 6551 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0126 11:18:31.446764 6551 handler.go:208] Removed *v1.Node event handler 7\\\\nI0126 11:18:31.446772 6551 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0126 11:18:31.446779 6551 handler.go:208] Removed *v1.Node event handler 2\\\\nI0126 11:18:31.446791 6551 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0126 11:18:31.447312 6551 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0126 11:18:31.447355 6551 factory.go:656] Stopping watch factory\\\\nI0126 11:18:31.447369 6551 ovnkube.go:599] Stopped ovnkube\\\\nI0126 1\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T11:18:30Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-p8ngn_openshift-ovn-kubernetes(4a3be637-cf04-4c55-bf72-67fdad83cc44)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cn6f7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b58f62893fb0fc0bd355ef42fb5f15830912cfd3a03b5266cb3c3171dbb8bcb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cn6f7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://388ea9d9185ed0fed9cbb0da08ab9aa6fcf1d81192fe2646cd51b54245c4e34b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://388ea9d9185ed0fed9
cbb0da08ab9aa6fcf1d81192fe2646cd51b54245c4e34b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T11:17:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T11:17:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cn6f7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T11:17:52Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-p8ngn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:18:50Z is after 2025-08-24T17:21:41Z" Jan 26 11:18:50 crc kubenswrapper[4867]: I0126 11:18:50.706808 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:48Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:18:50Z is after 2025-08-24T17:21:41Z" Jan 26 11:18:50 crc kubenswrapper[4867]: I0126 11:18:50.707027 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:18:50 crc kubenswrapper[4867]: I0126 11:18:50.707074 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:18:50 crc kubenswrapper[4867]: I0126 11:18:50.707085 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:18:50 crc 
kubenswrapper[4867]: I0126 11:18:50.707127 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:18:50 crc kubenswrapper[4867]: I0126 11:18:50.707157 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:18:50Z","lastTransitionTime":"2026-01-26T11:18:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 11:18:50 crc kubenswrapper[4867]: I0126 11:18:50.724007 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:48Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:18:50Z is after 2025-08-24T17:21:41Z" Jan 26 11:18:50 crc kubenswrapper[4867]: I0126 11:18:50.737356 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-wmdmh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6ad862fa-4af9-49f7-a629-ebf54a83ca45\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://726513fd6aefd76b58a4c8cf08fa4588aadcdb02e8dbe3bb4f23004f11eb29dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6hvh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T11:17:51Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-wmdmh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:18:50Z is after 2025-08-24T17:21:41Z" Jan 26 11:18:50 crc kubenswrapper[4867]: I0126 11:18:50.751879 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0d7b1047-65d1-4695-b5a5-95ea6a8c3b54\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://237a29b411629c3650250f62fdec7d2c5412763d6cabb2ee0fe8e5e19e320e15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluste
r-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ef0db1149c70e41b7f44d00d43f883d14fcf635d6f34462dd37c8a550a15a4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://45d3dc673a8d20531ae5077a1196a368cac64f6afd15c4eb8f3910ee9bf51317\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kuber
netes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc2ddda5781094e59bb54ddf724fa4f948a047b17b01f0185ef651d90a66f36f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T11:17:30Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:18:50Z is after 2025-08-24T17:21:41Z" Jan 26 11:18:50 crc kubenswrapper[4867]: I0126 11:18:50.766013 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c572224f36b5a0fc8a86b3e61aa97b96f5bd189d245b31804a11aaa75d04774\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-26T11:18:50Z is after 2025-08-24T17:21:41Z" Jan 26 11:18:50 crc kubenswrapper[4867]: I0126 11:18:50.781306 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://613d61cfb62d18a99623d3e4f89763c04f15484b764f8bb0c7a5e4457a3505d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://fa8d05ed772e4273e8962df2230bc8d45db5cfed967165716438d58c54b9e24b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:18:50Z is after 2025-08-24T17:21:41Z" Jan 26 11:18:50 crc kubenswrapper[4867]: I0126 11:18:50.796020 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-hn8xr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dc37e5d1-ba44-4a54-ac36-ab7cdef17212\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:18:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:18:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0560f844140132daa11aa58fcba689fc945d9e6bc67a4fc5598dfaf566749866\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://519d5416896aca5923540078a8bd13f39a190ad4acda3d0bfb3375a5dbfe6b80\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T11:18:39Z\\\",\\\"message\\\":\\\"2026-01-26T11:17:54+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_26c1a4ca-9538-4b99-995f-bb0e07bf0d9b\\\\n2026-01-26T11:17:54+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_26c1a4ca-9538-4b99-995f-bb0e07bf0d9b to /host/opt/cni/bin/\\\\n2026-01-26T11:17:54Z [verbose] multus-daemon started\\\\n2026-01-26T11:17:54Z [verbose] 
Readiness Indicator file check\\\\n2026-01-26T11:18:39Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T11:17:53Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:18:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xrjnj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T11:17:52Z\\\"}}\" for pod \"openshift-multus\"/\"multus-hn8xr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:18:50Z is after 2025-08-24T17:21:41Z" Jan 26 11:18:50 crc kubenswrapper[4867]: I0126 11:18:50.810542 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-nbvlt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf285485-1027-4bdc-bdfa-934ef32e7f5e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:18:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:18:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:18:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:18:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://764c348147bb67a611bc5252c49dfe8f586e6a1a6d6a9e9c6674aabcc3028804\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:18:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kzhnr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e1bb2e7344b3822e63a68d366f4821de6e13
1a4cca163ad67cf44e2b83f9ccd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:18:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kzhnr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T11:18:05Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-nbvlt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:18:50Z is after 2025-08-24T17:21:41Z" Jan 26 11:18:50 crc kubenswrapper[4867]: I0126 11:18:50.811059 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:18:50 crc kubenswrapper[4867]: I0126 11:18:50.811098 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:18:50 crc kubenswrapper[4867]: I0126 11:18:50.811113 4867 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:18:50 crc kubenswrapper[4867]: I0126 11:18:50.811132 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:18:50 crc kubenswrapper[4867]: I0126 11:18:50.811145 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:18:50Z","lastTransitionTime":"2026-01-26T11:18:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 11:18:50 crc kubenswrapper[4867]: I0126 11:18:50.824389 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a4b0f2b9-fac8-442e-89a1-43ebff8d4268\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:18:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:18:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2140f6b328ef3b937ef0009c1ce35265a18b51c6efb7e8785870affecd68dace\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://21a6c8852b9648bd5bee43aee9c3fae16363aeaf1ad05dcaa41a04775784b108\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ef4b66f160065f550844a298bf29dcd3f12879c1312554968eba3c1b2268303\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\"
:{\\\"startedAt\\\":\\\"2026-01-26T11:17:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0afa38d4a8ffa664649d154240a8d74ee09bc074127e5edea85ec1de553723fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0afa38d4a8ffa664649d154240a8d74ee09bc074127e5edea85ec1de553723fe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T11:17:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T11:17:31Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T11:17:30Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:18:50Z is after 2025-08-24T17:21:41Z" Jan 26 11:18:50 crc kubenswrapper[4867]: I0126 11:18:50.838606 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4b812c7c-622a-4255-ae3d-e48a62132126\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8ca8a196d11248401898d6c6591931638d1ebf8675414d0e588454dbc1da626\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c958441373fca0b105ec5f119f7a2aca7557f58d49ccba356f824708b3602e3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c958441373fca0b105ec5f119f7a2aca7557f58d49ccba356f824708b3602e3c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T11:17:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T11:17:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T11:17:30Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:18:50Z is after 2025-08-24T17:21:41Z" Jan 26 11:18:50 crc kubenswrapper[4867]: I0126 11:18:50.854062 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e36e94ce-bdbb-4b65-b38a-d591d99ec132\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:18:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:18:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://60d611d38d0a8a4fafcb8c6a4598878531015a3c9a31bee179693c997be0f5ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e7960050fb97fd034b58d207bdcc7aa794be098102ebc9b50f3b011c2079a292\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8e9df6d5bbbaa36f6ffe9e36239da17b2a7d5d85edebc9f108ede9bf7b7c0a2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://08f8529b91df2d7dcbf1fac6018657124b3922dfd4ad68bd7cf315594d6e7f81\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4d1617eeb2f19df288b984f761116ae902263d7ccf8406998d0e323fb7d52390\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-26T11:17:50Z\\\"
,\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0126 11:17:44.107498 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0126 11:17:44.109064 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1825304579/tls.crt::/tmp/serving-cert-1825304579/tls.key\\\\\\\"\\\\nI0126 11:17:50.303460 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0126 11:17:50.308764 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0126 11:17:50.308805 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0126 11:17:50.308906 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0126 11:17:50.308924 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0126 11:17:50.322907 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0126 11:17:50.322946 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 11:17:50.322953 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 11:17:50.322957 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0126 11:17:50.322960 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0126 11:17:50.322963 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0126 11:17:50.322966 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0126 11:17:50.323261 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all 
endpoints registered and discovery information is complete\\\\nF0126 11:17:50.330430 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T11:17:33Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d41fab03911c1d3b84f4c34da0d1c8e82c8f2691450b5f9663e31ce329bf04b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:33Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b69d7e45a6059b1e01f80cc4efb536069c91cae05c8912d18fd48f25c6ebf51e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b69d7e45a6059b1e01f80cc4efb536069
c91cae05c8912d18fd48f25c6ebf51e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T11:17:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T11:17:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T11:17:30Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:18:50Z is after 2025-08-24T17:21:41Z" Jan 26 11:18:50 crc kubenswrapper[4867]: I0126 11:18:50.913143 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:18:50 crc kubenswrapper[4867]: I0126 11:18:50.913188 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:18:50 crc kubenswrapper[4867]: I0126 11:18:50.913203 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:18:50 crc kubenswrapper[4867]: I0126 11:18:50.913260 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:18:50 crc kubenswrapper[4867]: I0126 11:18:50.913280 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:18:50Z","lastTransitionTime":"2026-01-26T11:18:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:18:51 crc kubenswrapper[4867]: I0126 11:18:51.015186 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:18:51 crc kubenswrapper[4867]: I0126 11:18:51.015588 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:18:51 crc kubenswrapper[4867]: I0126 11:18:51.015674 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:18:51 crc kubenswrapper[4867]: I0126 11:18:51.015780 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:18:51 crc kubenswrapper[4867]: I0126 11:18:51.015879 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:18:51Z","lastTransitionTime":"2026-01-26T11:18:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:18:51 crc kubenswrapper[4867]: I0126 11:18:51.118650 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:18:51 crc kubenswrapper[4867]: I0126 11:18:51.118701 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:18:51 crc kubenswrapper[4867]: I0126 11:18:51.118714 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:18:51 crc kubenswrapper[4867]: I0126 11:18:51.118738 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:18:51 crc kubenswrapper[4867]: I0126 11:18:51.118752 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:18:51Z","lastTransitionTime":"2026-01-26T11:18:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:18:51 crc kubenswrapper[4867]: I0126 11:18:51.221604 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:18:51 crc kubenswrapper[4867]: I0126 11:18:51.221668 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:18:51 crc kubenswrapper[4867]: I0126 11:18:51.221689 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:18:51 crc kubenswrapper[4867]: I0126 11:18:51.221722 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:18:51 crc kubenswrapper[4867]: I0126 11:18:51.221745 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:18:51Z","lastTransitionTime":"2026-01-26T11:18:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:18:51 crc kubenswrapper[4867]: I0126 11:18:51.324794 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:18:51 crc kubenswrapper[4867]: I0126 11:18:51.324904 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:18:51 crc kubenswrapper[4867]: I0126 11:18:51.324929 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:18:51 crc kubenswrapper[4867]: I0126 11:18:51.324955 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:18:51 crc kubenswrapper[4867]: I0126 11:18:51.324973 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:18:51Z","lastTransitionTime":"2026-01-26T11:18:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:18:51 crc kubenswrapper[4867]: I0126 11:18:51.427796 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:18:51 crc kubenswrapper[4867]: I0126 11:18:51.427865 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:18:51 crc kubenswrapper[4867]: I0126 11:18:51.427890 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:18:51 crc kubenswrapper[4867]: I0126 11:18:51.427918 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:18:51 crc kubenswrapper[4867]: I0126 11:18:51.427935 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:18:51Z","lastTransitionTime":"2026-01-26T11:18:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:18:51 crc kubenswrapper[4867]: I0126 11:18:51.531045 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:18:51 crc kubenswrapper[4867]: I0126 11:18:51.531090 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:18:51 crc kubenswrapper[4867]: I0126 11:18:51.531114 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:18:51 crc kubenswrapper[4867]: I0126 11:18:51.531130 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:18:51 crc kubenswrapper[4867]: I0126 11:18:51.531142 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:18:51Z","lastTransitionTime":"2026-01-26T11:18:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:18:51 crc kubenswrapper[4867]: I0126 11:18:51.570764 4867 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-12 13:49:40.132956517 +0000 UTC Jan 26 11:18:51 crc kubenswrapper[4867]: I0126 11:18:51.634284 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:18:51 crc kubenswrapper[4867]: I0126 11:18:51.634357 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:18:51 crc kubenswrapper[4867]: I0126 11:18:51.634373 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:18:51 crc kubenswrapper[4867]: I0126 11:18:51.634393 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:18:51 crc kubenswrapper[4867]: I0126 11:18:51.634407 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:18:51Z","lastTransitionTime":"2026-01-26T11:18:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:18:51 crc kubenswrapper[4867]: I0126 11:18:51.737495 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:18:51 crc kubenswrapper[4867]: I0126 11:18:51.737554 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:18:51 crc kubenswrapper[4867]: I0126 11:18:51.737569 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:18:51 crc kubenswrapper[4867]: I0126 11:18:51.737589 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:18:51 crc kubenswrapper[4867]: I0126 11:18:51.737605 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:18:51Z","lastTransitionTime":"2026-01-26T11:18:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:18:51 crc kubenswrapper[4867]: I0126 11:18:51.840627 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:18:51 crc kubenswrapper[4867]: I0126 11:18:51.840674 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:18:51 crc kubenswrapper[4867]: I0126 11:18:51.840685 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:18:51 crc kubenswrapper[4867]: I0126 11:18:51.840706 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:18:51 crc kubenswrapper[4867]: I0126 11:18:51.840720 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:18:51Z","lastTransitionTime":"2026-01-26T11:18:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:18:51 crc kubenswrapper[4867]: I0126 11:18:51.944172 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:18:51 crc kubenswrapper[4867]: I0126 11:18:51.944252 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:18:51 crc kubenswrapper[4867]: I0126 11:18:51.944263 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:18:51 crc kubenswrapper[4867]: I0126 11:18:51.944286 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:18:51 crc kubenswrapper[4867]: I0126 11:18:51.944307 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:18:51Z","lastTransitionTime":"2026-01-26T11:18:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:18:52 crc kubenswrapper[4867]: I0126 11:18:52.047554 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:18:52 crc kubenswrapper[4867]: I0126 11:18:52.047609 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:18:52 crc kubenswrapper[4867]: I0126 11:18:52.047621 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:18:52 crc kubenswrapper[4867]: I0126 11:18:52.047643 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:18:52 crc kubenswrapper[4867]: I0126 11:18:52.047669 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:18:52Z","lastTransitionTime":"2026-01-26T11:18:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:18:52 crc kubenswrapper[4867]: I0126 11:18:52.150540 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:18:52 crc kubenswrapper[4867]: I0126 11:18:52.150620 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:18:52 crc kubenswrapper[4867]: I0126 11:18:52.150641 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:18:52 crc kubenswrapper[4867]: I0126 11:18:52.150666 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:18:52 crc kubenswrapper[4867]: I0126 11:18:52.150681 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:18:52Z","lastTransitionTime":"2026-01-26T11:18:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:18:52 crc kubenswrapper[4867]: I0126 11:18:52.253681 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:18:52 crc kubenswrapper[4867]: I0126 11:18:52.253737 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:18:52 crc kubenswrapper[4867]: I0126 11:18:52.253749 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:18:52 crc kubenswrapper[4867]: I0126 11:18:52.253767 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:18:52 crc kubenswrapper[4867]: I0126 11:18:52.253778 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:18:52Z","lastTransitionTime":"2026-01-26T11:18:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:18:52 crc kubenswrapper[4867]: I0126 11:18:52.356298 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:18:52 crc kubenswrapper[4867]: I0126 11:18:52.356339 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:18:52 crc kubenswrapper[4867]: I0126 11:18:52.356355 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:18:52 crc kubenswrapper[4867]: I0126 11:18:52.356376 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:18:52 crc kubenswrapper[4867]: I0126 11:18:52.356389 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:18:52Z","lastTransitionTime":"2026-01-26T11:18:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:18:52 crc kubenswrapper[4867]: I0126 11:18:52.459867 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:18:52 crc kubenswrapper[4867]: I0126 11:18:52.459931 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:18:52 crc kubenswrapper[4867]: I0126 11:18:52.459942 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:18:52 crc kubenswrapper[4867]: I0126 11:18:52.459963 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:18:52 crc kubenswrapper[4867]: I0126 11:18:52.459976 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:18:52Z","lastTransitionTime":"2026-01-26T11:18:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 11:18:52 crc kubenswrapper[4867]: I0126 11:18:52.562810 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 11:18:52 crc kubenswrapper[4867]: I0126 11:18:52.562927 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 11:18:52 crc kubenswrapper[4867]: I0126 11:18:52.562926 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-nmdmx" Jan 26 11:18:52 crc kubenswrapper[4867]: E0126 11:18:52.563060 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 11:18:52 crc kubenswrapper[4867]: I0126 11:18:52.563090 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 11:18:52 crc kubenswrapper[4867]: E0126 11:18:52.563270 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 11:18:52 crc kubenswrapper[4867]: E0126 11:18:52.563328 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-nmdmx" podUID="ed024510-edc6-4306-b54b-63facba64419" Jan 26 11:18:52 crc kubenswrapper[4867]: I0126 11:18:52.563501 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:18:52 crc kubenswrapper[4867]: I0126 11:18:52.563557 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:18:52 crc kubenswrapper[4867]: I0126 11:18:52.563567 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:18:52 crc kubenswrapper[4867]: E0126 11:18:52.563499 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 11:18:52 crc kubenswrapper[4867]: I0126 11:18:52.563585 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:18:52 crc kubenswrapper[4867]: I0126 11:18:52.563598 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:18:52Z","lastTransitionTime":"2026-01-26T11:18:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:18:52 crc kubenswrapper[4867]: I0126 11:18:52.571139 4867 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-11 18:59:39.760630348 +0000 UTC Jan 26 11:18:52 crc kubenswrapper[4867]: I0126 11:18:52.666643 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:18:52 crc kubenswrapper[4867]: I0126 11:18:52.666696 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:18:52 crc kubenswrapper[4867]: I0126 11:18:52.666710 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:18:52 crc kubenswrapper[4867]: I0126 11:18:52.666731 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:18:52 crc kubenswrapper[4867]: I0126 11:18:52.666746 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:18:52Z","lastTransitionTime":"2026-01-26T11:18:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:18:52 crc kubenswrapper[4867]: I0126 11:18:52.770385 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:18:52 crc kubenswrapper[4867]: I0126 11:18:52.770465 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:18:52 crc kubenswrapper[4867]: I0126 11:18:52.770492 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:18:52 crc kubenswrapper[4867]: I0126 11:18:52.770523 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:18:52 crc kubenswrapper[4867]: I0126 11:18:52.770547 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:18:52Z","lastTransitionTime":"2026-01-26T11:18:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:18:52 crc kubenswrapper[4867]: I0126 11:18:52.873733 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:18:52 crc kubenswrapper[4867]: I0126 11:18:52.873807 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:18:52 crc kubenswrapper[4867]: I0126 11:18:52.873833 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:18:52 crc kubenswrapper[4867]: I0126 11:18:52.873865 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:18:52 crc kubenswrapper[4867]: I0126 11:18:52.873889 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:18:52Z","lastTransitionTime":"2026-01-26T11:18:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:18:52 crc kubenswrapper[4867]: I0126 11:18:52.977263 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:18:52 crc kubenswrapper[4867]: I0126 11:18:52.977317 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:18:52 crc kubenswrapper[4867]: I0126 11:18:52.977354 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:18:52 crc kubenswrapper[4867]: I0126 11:18:52.977376 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:18:52 crc kubenswrapper[4867]: I0126 11:18:52.977388 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:18:52Z","lastTransitionTime":"2026-01-26T11:18:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:18:53 crc kubenswrapper[4867]: I0126 11:18:53.080940 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:18:53 crc kubenswrapper[4867]: I0126 11:18:53.081013 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:18:53 crc kubenswrapper[4867]: I0126 11:18:53.081030 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:18:53 crc kubenswrapper[4867]: I0126 11:18:53.081055 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:18:53 crc kubenswrapper[4867]: I0126 11:18:53.081075 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:18:53Z","lastTransitionTime":"2026-01-26T11:18:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:18:53 crc kubenswrapper[4867]: I0126 11:18:53.184958 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:18:53 crc kubenswrapper[4867]: I0126 11:18:53.185037 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:18:53 crc kubenswrapper[4867]: I0126 11:18:53.185051 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:18:53 crc kubenswrapper[4867]: I0126 11:18:53.185075 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:18:53 crc kubenswrapper[4867]: I0126 11:18:53.185089 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:18:53Z","lastTransitionTime":"2026-01-26T11:18:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:18:53 crc kubenswrapper[4867]: I0126 11:18:53.288088 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:18:53 crc kubenswrapper[4867]: I0126 11:18:53.288444 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:18:53 crc kubenswrapper[4867]: I0126 11:18:53.288507 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:18:53 crc kubenswrapper[4867]: I0126 11:18:53.288549 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:18:53 crc kubenswrapper[4867]: I0126 11:18:53.288574 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:18:53Z","lastTransitionTime":"2026-01-26T11:18:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:18:53 crc kubenswrapper[4867]: I0126 11:18:53.391676 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:18:53 crc kubenswrapper[4867]: I0126 11:18:53.391749 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:18:53 crc kubenswrapper[4867]: I0126 11:18:53.391768 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:18:53 crc kubenswrapper[4867]: I0126 11:18:53.391799 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:18:53 crc kubenswrapper[4867]: I0126 11:18:53.391820 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:18:53Z","lastTransitionTime":"2026-01-26T11:18:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:18:53 crc kubenswrapper[4867]: I0126 11:18:53.495169 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:18:53 crc kubenswrapper[4867]: I0126 11:18:53.495292 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:18:53 crc kubenswrapper[4867]: I0126 11:18:53.495324 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:18:53 crc kubenswrapper[4867]: I0126 11:18:53.495355 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:18:53 crc kubenswrapper[4867]: I0126 11:18:53.495375 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:18:53Z","lastTransitionTime":"2026-01-26T11:18:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:18:53 crc kubenswrapper[4867]: I0126 11:18:53.571753 4867 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-18 23:19:42.922315642 +0000 UTC Jan 26 11:18:53 crc kubenswrapper[4867]: I0126 11:18:53.598386 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:18:53 crc kubenswrapper[4867]: I0126 11:18:53.598428 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:18:53 crc kubenswrapper[4867]: I0126 11:18:53.598438 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:18:53 crc kubenswrapper[4867]: I0126 11:18:53.598456 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:18:53 crc kubenswrapper[4867]: I0126 11:18:53.598465 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:18:53Z","lastTransitionTime":"2026-01-26T11:18:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:18:53 crc kubenswrapper[4867]: I0126 11:18:53.700985 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:18:53 crc kubenswrapper[4867]: I0126 11:18:53.701020 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:18:53 crc kubenswrapper[4867]: I0126 11:18:53.701030 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:18:53 crc kubenswrapper[4867]: I0126 11:18:53.701043 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:18:53 crc kubenswrapper[4867]: I0126 11:18:53.701053 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:18:53Z","lastTransitionTime":"2026-01-26T11:18:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:18:53 crc kubenswrapper[4867]: I0126 11:18:53.803979 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:18:53 crc kubenswrapper[4867]: I0126 11:18:53.804021 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:18:53 crc kubenswrapper[4867]: I0126 11:18:53.804035 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:18:53 crc kubenswrapper[4867]: I0126 11:18:53.804057 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:18:53 crc kubenswrapper[4867]: I0126 11:18:53.804074 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:18:53Z","lastTransitionTime":"2026-01-26T11:18:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:18:53 crc kubenswrapper[4867]: I0126 11:18:53.906629 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:18:53 crc kubenswrapper[4867]: I0126 11:18:53.906666 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:18:53 crc kubenswrapper[4867]: I0126 11:18:53.906677 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:18:53 crc kubenswrapper[4867]: I0126 11:18:53.906693 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:18:53 crc kubenswrapper[4867]: I0126 11:18:53.906711 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:18:53Z","lastTransitionTime":"2026-01-26T11:18:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:18:54 crc kubenswrapper[4867]: I0126 11:18:54.009002 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:18:54 crc kubenswrapper[4867]: I0126 11:18:54.009041 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:18:54 crc kubenswrapper[4867]: I0126 11:18:54.009052 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:18:54 crc kubenswrapper[4867]: I0126 11:18:54.009065 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:18:54 crc kubenswrapper[4867]: I0126 11:18:54.009074 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:18:54Z","lastTransitionTime":"2026-01-26T11:18:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:18:54 crc kubenswrapper[4867]: I0126 11:18:54.112327 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:18:54 crc kubenswrapper[4867]: I0126 11:18:54.112398 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:18:54 crc kubenswrapper[4867]: I0126 11:18:54.112421 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:18:54 crc kubenswrapper[4867]: I0126 11:18:54.112454 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:18:54 crc kubenswrapper[4867]: I0126 11:18:54.112483 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:18:54Z","lastTransitionTime":"2026-01-26T11:18:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:18:54 crc kubenswrapper[4867]: I0126 11:18:54.139516 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 11:18:54 crc kubenswrapper[4867]: I0126 11:18:54.139649 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 11:18:54 crc kubenswrapper[4867]: E0126 11:18:54.139684 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 11:19:58.139651437 +0000 UTC m=+147.838226387 (durationBeforeRetry 1m4s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 11:18:54 crc kubenswrapper[4867]: E0126 11:18:54.139752 4867 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 26 11:18:54 crc kubenswrapper[4867]: E0126 11:18:54.139811 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-26 11:19:58.139796831 +0000 UTC m=+147.838371741 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 26 11:18:54 crc kubenswrapper[4867]: I0126 11:18:54.139854 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 11:18:54 crc kubenswrapper[4867]: E0126 11:18:54.140005 4867 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 26 11:18:54 crc kubenswrapper[4867]: E0126 11:18:54.140058 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-26 11:19:58.140044357 +0000 UTC m=+147.838619307 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 26 11:18:54 crc kubenswrapper[4867]: I0126 11:18:54.216355 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:18:54 crc kubenswrapper[4867]: I0126 11:18:54.216426 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:18:54 crc kubenswrapper[4867]: I0126 11:18:54.216445 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:18:54 crc kubenswrapper[4867]: I0126 11:18:54.216474 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:18:54 crc kubenswrapper[4867]: I0126 11:18:54.216496 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:18:54Z","lastTransitionTime":"2026-01-26T11:18:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:18:54 crc kubenswrapper[4867]: I0126 11:18:54.241185 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 11:18:54 crc kubenswrapper[4867]: E0126 11:18:54.241572 4867 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 26 11:18:54 crc kubenswrapper[4867]: E0126 11:18:54.241650 4867 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 26 11:18:54 crc kubenswrapper[4867]: E0126 11:18:54.241672 4867 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 11:18:54 crc kubenswrapper[4867]: E0126 11:18:54.241788 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-26 11:19:58.241757897 +0000 UTC m=+147.940332847 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 11:18:54 crc kubenswrapper[4867]: I0126 11:18:54.318898 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:18:54 crc kubenswrapper[4867]: I0126 11:18:54.318932 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:18:54 crc kubenswrapper[4867]: I0126 11:18:54.318941 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:18:54 crc kubenswrapper[4867]: I0126 11:18:54.318957 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:18:54 crc kubenswrapper[4867]: I0126 11:18:54.318966 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:18:54Z","lastTransitionTime":"2026-01-26T11:18:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:18:54 crc kubenswrapper[4867]: I0126 11:18:54.342752 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 11:18:54 crc kubenswrapper[4867]: E0126 11:18:54.342959 4867 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 26 11:18:54 crc kubenswrapper[4867]: E0126 11:18:54.342998 4867 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 26 11:18:54 crc kubenswrapper[4867]: E0126 11:18:54.343013 4867 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 11:18:54 crc kubenswrapper[4867]: E0126 11:18:54.343095 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-26 11:19:58.343070046 +0000 UTC m=+148.041645016 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 11:18:54 crc kubenswrapper[4867]: I0126 11:18:54.421119 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:18:54 crc kubenswrapper[4867]: I0126 11:18:54.421158 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:18:54 crc kubenswrapper[4867]: I0126 11:18:54.421169 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:18:54 crc kubenswrapper[4867]: I0126 11:18:54.421184 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:18:54 crc kubenswrapper[4867]: I0126 11:18:54.421194 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:18:54Z","lastTransitionTime":"2026-01-26T11:18:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:18:54 crc kubenswrapper[4867]: I0126 11:18:54.524489 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:18:54 crc kubenswrapper[4867]: I0126 11:18:54.524561 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:18:54 crc kubenswrapper[4867]: I0126 11:18:54.524585 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:18:54 crc kubenswrapper[4867]: I0126 11:18:54.524613 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:18:54 crc kubenswrapper[4867]: I0126 11:18:54.524639 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:18:54Z","lastTransitionTime":"2026-01-26T11:18:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 11:18:54 crc kubenswrapper[4867]: I0126 11:18:54.563348 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 11:18:54 crc kubenswrapper[4867]: I0126 11:18:54.563389 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-nmdmx" Jan 26 11:18:54 crc kubenswrapper[4867]: I0126 11:18:54.563422 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 11:18:54 crc kubenswrapper[4867]: E0126 11:18:54.563503 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 11:18:54 crc kubenswrapper[4867]: I0126 11:18:54.563544 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 11:18:54 crc kubenswrapper[4867]: E0126 11:18:54.563693 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 11:18:54 crc kubenswrapper[4867]: E0126 11:18:54.563728 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 11:18:54 crc kubenswrapper[4867]: E0126 11:18:54.563802 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-nmdmx" podUID="ed024510-edc6-4306-b54b-63facba64419" Jan 26 11:18:54 crc kubenswrapper[4867]: I0126 11:18:54.572081 4867 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-16 01:42:44.224772239 +0000 UTC Jan 26 11:18:54 crc kubenswrapper[4867]: I0126 11:18:54.627620 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:18:54 crc kubenswrapper[4867]: I0126 11:18:54.627663 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:18:54 crc kubenswrapper[4867]: I0126 11:18:54.627672 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:18:54 crc kubenswrapper[4867]: I0126 11:18:54.627688 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:18:54 crc kubenswrapper[4867]: I0126 11:18:54.627698 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:18:54Z","lastTransitionTime":"2026-01-26T11:18:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:18:54 crc kubenswrapper[4867]: I0126 11:18:54.730301 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:18:54 crc kubenswrapper[4867]: I0126 11:18:54.730344 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:18:54 crc kubenswrapper[4867]: I0126 11:18:54.730360 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:18:54 crc kubenswrapper[4867]: I0126 11:18:54.730381 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:18:54 crc kubenswrapper[4867]: I0126 11:18:54.730395 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:18:54Z","lastTransitionTime":"2026-01-26T11:18:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:18:54 crc kubenswrapper[4867]: I0126 11:18:54.833177 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:18:54 crc kubenswrapper[4867]: I0126 11:18:54.833262 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:18:54 crc kubenswrapper[4867]: I0126 11:18:54.833281 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:18:54 crc kubenswrapper[4867]: I0126 11:18:54.833305 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:18:54 crc kubenswrapper[4867]: I0126 11:18:54.833324 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:18:54Z","lastTransitionTime":"2026-01-26T11:18:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:18:54 crc kubenswrapper[4867]: I0126 11:18:54.927502 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:18:54 crc kubenswrapper[4867]: I0126 11:18:54.927574 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:18:54 crc kubenswrapper[4867]: I0126 11:18:54.927593 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:18:54 crc kubenswrapper[4867]: I0126 11:18:54.927619 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:18:54 crc kubenswrapper[4867]: I0126 11:18:54.927638 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:18:54Z","lastTransitionTime":"2026-01-26T11:18:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:18:54 crc kubenswrapper[4867]: E0126 11:18:54.946213 4867 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404548Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865348Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T11:18:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T11:18:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T11:18:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T11:18:54Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T11:18:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T11:18:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T11:18:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T11:18:54Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b5a0e2ac-5e06-462a-99e0-d57b8e5cb754\\\",\\\"systemUUID\\\":\\\"db81a289-a49c-46ba-99b1-fd2eecfd5410\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:18:54Z is after 2025-08-24T17:21:41Z" Jan 26 11:18:54 crc kubenswrapper[4867]: I0126 11:18:54.952744 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:18:54 crc kubenswrapper[4867]: I0126 11:18:54.952808 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:18:54 crc kubenswrapper[4867]: I0126 11:18:54.952830 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:18:54 crc kubenswrapper[4867]: I0126 11:18:54.952858 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:18:54 crc kubenswrapper[4867]: I0126 11:18:54.952879 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:18:54Z","lastTransitionTime":"2026-01-26T11:18:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:18:54 crc kubenswrapper[4867]: E0126 11:18:54.972410 4867 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404548Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865348Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T11:18:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T11:18:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T11:18:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T11:18:54Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T11:18:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T11:18:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T11:18:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T11:18:54Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b5a0e2ac-5e06-462a-99e0-d57b8e5cb754\\\",\\\"systemUUID\\\":\\\"db81a289-a49c-46ba-99b1-fd2eecfd5410\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:18:54Z is after 2025-08-24T17:21:41Z" Jan 26 11:18:54 crc kubenswrapper[4867]: I0126 11:18:54.977379 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:18:54 crc kubenswrapper[4867]: I0126 11:18:54.977442 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:18:54 crc kubenswrapper[4867]: I0126 11:18:54.977457 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:18:54 crc kubenswrapper[4867]: I0126 11:18:54.977480 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:18:54 crc kubenswrapper[4867]: I0126 11:18:54.977496 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:18:54Z","lastTransitionTime":"2026-01-26T11:18:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:18:54 crc kubenswrapper[4867]: E0126 11:18:54.991644 4867 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404548Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865348Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T11:18:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T11:18:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T11:18:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T11:18:54Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T11:18:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T11:18:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T11:18:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T11:18:54Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b5a0e2ac-5e06-462a-99e0-d57b8e5cb754\\\",\\\"systemUUID\\\":\\\"db81a289-a49c-46ba-99b1-fd2eecfd5410\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:18:54Z is after 2025-08-24T17:21:41Z" Jan 26 11:18:54 crc kubenswrapper[4867]: I0126 11:18:54.996396 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:18:54 crc kubenswrapper[4867]: I0126 11:18:54.996462 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:18:54 crc kubenswrapper[4867]: I0126 11:18:54.996486 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:18:54 crc kubenswrapper[4867]: I0126 11:18:54.996517 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:18:54 crc kubenswrapper[4867]: I0126 11:18:54.996542 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:18:54Z","lastTransitionTime":"2026-01-26T11:18:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:18:55 crc kubenswrapper[4867]: E0126 11:18:55.015746 4867 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404548Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865348Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T11:18:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T11:18:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T11:18:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T11:18:54Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T11:18:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T11:18:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T11:18:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T11:18:54Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b5a0e2ac-5e06-462a-99e0-d57b8e5cb754\\\",\\\"systemUUID\\\":\\\"db81a289-a49c-46ba-99b1-fd2eecfd5410\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:18:55Z is after 2025-08-24T17:21:41Z" Jan 26 11:18:55 crc kubenswrapper[4867]: I0126 11:18:55.019834 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:18:55 crc kubenswrapper[4867]: I0126 11:18:55.019961 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:18:55 crc kubenswrapper[4867]: I0126 11:18:55.020058 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:18:55 crc kubenswrapper[4867]: I0126 11:18:55.020159 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:18:55 crc kubenswrapper[4867]: I0126 11:18:55.020292 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:18:55Z","lastTransitionTime":"2026-01-26T11:18:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:18:55 crc kubenswrapper[4867]: E0126 11:18:55.035015 4867 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404548Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865348Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T11:18:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T11:18:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T11:18:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T11:18:55Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T11:18:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T11:18:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T11:18:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T11:18:55Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b5a0e2ac-5e06-462a-99e0-d57b8e5cb754\\\",\\\"systemUUID\\\":\\\"db81a289-a49c-46ba-99b1-fd2eecfd5410\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:18:55Z is after 2025-08-24T17:21:41Z" Jan 26 11:18:55 crc kubenswrapper[4867]: E0126 11:18:55.035131 4867 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 26 11:18:55 crc kubenswrapper[4867]: I0126 11:18:55.037008 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:18:55 crc kubenswrapper[4867]: I0126 11:18:55.037039 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:18:55 crc kubenswrapper[4867]: I0126 11:18:55.037050 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:18:55 crc kubenswrapper[4867]: I0126 11:18:55.037067 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:18:55 crc kubenswrapper[4867]: I0126 11:18:55.037083 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:18:55Z","lastTransitionTime":"2026-01-26T11:18:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:18:55 crc kubenswrapper[4867]: I0126 11:18:55.139754 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:18:55 crc kubenswrapper[4867]: I0126 11:18:55.139801 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:18:55 crc kubenswrapper[4867]: I0126 11:18:55.139810 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:18:55 crc kubenswrapper[4867]: I0126 11:18:55.139826 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:18:55 crc kubenswrapper[4867]: I0126 11:18:55.139835 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:18:55Z","lastTransitionTime":"2026-01-26T11:18:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:18:55 crc kubenswrapper[4867]: I0126 11:18:55.243538 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:18:55 crc kubenswrapper[4867]: I0126 11:18:55.243608 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:18:55 crc kubenswrapper[4867]: I0126 11:18:55.243621 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:18:55 crc kubenswrapper[4867]: I0126 11:18:55.243648 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:18:55 crc kubenswrapper[4867]: I0126 11:18:55.243664 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:18:55Z","lastTransitionTime":"2026-01-26T11:18:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:18:55 crc kubenswrapper[4867]: I0126 11:18:55.347086 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:18:55 crc kubenswrapper[4867]: I0126 11:18:55.347141 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:18:55 crc kubenswrapper[4867]: I0126 11:18:55.347155 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:18:55 crc kubenswrapper[4867]: I0126 11:18:55.347182 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:18:55 crc kubenswrapper[4867]: I0126 11:18:55.347197 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:18:55Z","lastTransitionTime":"2026-01-26T11:18:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:18:55 crc kubenswrapper[4867]: I0126 11:18:55.450022 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:18:55 crc kubenswrapper[4867]: I0126 11:18:55.450081 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:18:55 crc kubenswrapper[4867]: I0126 11:18:55.450095 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:18:55 crc kubenswrapper[4867]: I0126 11:18:55.450118 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:18:55 crc kubenswrapper[4867]: I0126 11:18:55.450130 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:18:55Z","lastTransitionTime":"2026-01-26T11:18:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:18:55 crc kubenswrapper[4867]: I0126 11:18:55.552371 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:18:55 crc kubenswrapper[4867]: I0126 11:18:55.552424 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:18:55 crc kubenswrapper[4867]: I0126 11:18:55.552434 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:18:55 crc kubenswrapper[4867]: I0126 11:18:55.552453 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:18:55 crc kubenswrapper[4867]: I0126 11:18:55.552465 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:18:55Z","lastTransitionTime":"2026-01-26T11:18:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:18:55 crc kubenswrapper[4867]: I0126 11:18:55.572692 4867 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-24 02:08:12.61684891 +0000 UTC Jan 26 11:18:55 crc kubenswrapper[4867]: I0126 11:18:55.655493 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:18:55 crc kubenswrapper[4867]: I0126 11:18:55.655566 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:18:55 crc kubenswrapper[4867]: I0126 11:18:55.655585 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:18:55 crc kubenswrapper[4867]: I0126 11:18:55.655606 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:18:55 crc kubenswrapper[4867]: I0126 11:18:55.655621 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:18:55Z","lastTransitionTime":"2026-01-26T11:18:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:18:55 crc kubenswrapper[4867]: I0126 11:18:55.757709 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:18:55 crc kubenswrapper[4867]: I0126 11:18:55.757801 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:18:55 crc kubenswrapper[4867]: I0126 11:18:55.757818 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:18:55 crc kubenswrapper[4867]: I0126 11:18:55.757837 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:18:55 crc kubenswrapper[4867]: I0126 11:18:55.757848 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:18:55Z","lastTransitionTime":"2026-01-26T11:18:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:18:55 crc kubenswrapper[4867]: I0126 11:18:55.859924 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:18:55 crc kubenswrapper[4867]: I0126 11:18:55.859954 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:18:55 crc kubenswrapper[4867]: I0126 11:18:55.859963 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:18:55 crc kubenswrapper[4867]: I0126 11:18:55.859975 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:18:55 crc kubenswrapper[4867]: I0126 11:18:55.859984 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:18:55Z","lastTransitionTime":"2026-01-26T11:18:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:18:55 crc kubenswrapper[4867]: I0126 11:18:55.963448 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:18:55 crc kubenswrapper[4867]: I0126 11:18:55.963540 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:18:55 crc kubenswrapper[4867]: I0126 11:18:55.963573 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:18:55 crc kubenswrapper[4867]: I0126 11:18:55.963607 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:18:55 crc kubenswrapper[4867]: I0126 11:18:55.963629 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:18:55Z","lastTransitionTime":"2026-01-26T11:18:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:18:56 crc kubenswrapper[4867]: I0126 11:18:56.069376 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:18:56 crc kubenswrapper[4867]: I0126 11:18:56.069444 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:18:56 crc kubenswrapper[4867]: I0126 11:18:56.069462 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:18:56 crc kubenswrapper[4867]: I0126 11:18:56.069486 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:18:56 crc kubenswrapper[4867]: I0126 11:18:56.069504 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:18:56Z","lastTransitionTime":"2026-01-26T11:18:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:18:56 crc kubenswrapper[4867]: I0126 11:18:56.172061 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:18:56 crc kubenswrapper[4867]: I0126 11:18:56.172133 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:18:56 crc kubenswrapper[4867]: I0126 11:18:56.172152 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:18:56 crc kubenswrapper[4867]: I0126 11:18:56.172181 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:18:56 crc kubenswrapper[4867]: I0126 11:18:56.172256 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:18:56Z","lastTransitionTime":"2026-01-26T11:18:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:18:56 crc kubenswrapper[4867]: I0126 11:18:56.275065 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:18:56 crc kubenswrapper[4867]: I0126 11:18:56.275134 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:18:56 crc kubenswrapper[4867]: I0126 11:18:56.275153 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:18:56 crc kubenswrapper[4867]: I0126 11:18:56.275181 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:18:56 crc kubenswrapper[4867]: I0126 11:18:56.275203 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:18:56Z","lastTransitionTime":"2026-01-26T11:18:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:18:56 crc kubenswrapper[4867]: I0126 11:18:56.381911 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:18:56 crc kubenswrapper[4867]: I0126 11:18:56.381970 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:18:56 crc kubenswrapper[4867]: I0126 11:18:56.381983 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:18:56 crc kubenswrapper[4867]: I0126 11:18:56.382005 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:18:56 crc kubenswrapper[4867]: I0126 11:18:56.382020 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:18:56Z","lastTransitionTime":"2026-01-26T11:18:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:18:56 crc kubenswrapper[4867]: I0126 11:18:56.486314 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:18:56 crc kubenswrapper[4867]: I0126 11:18:56.486356 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:18:56 crc kubenswrapper[4867]: I0126 11:18:56.486365 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:18:56 crc kubenswrapper[4867]: I0126 11:18:56.486381 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:18:56 crc kubenswrapper[4867]: I0126 11:18:56.486392 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:18:56Z","lastTransitionTime":"2026-01-26T11:18:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 11:18:56 crc kubenswrapper[4867]: I0126 11:18:56.563199 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 11:18:56 crc kubenswrapper[4867]: I0126 11:18:56.563356 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 11:18:56 crc kubenswrapper[4867]: I0126 11:18:56.563423 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 11:18:56 crc kubenswrapper[4867]: I0126 11:18:56.563552 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-nmdmx" Jan 26 11:18:56 crc kubenswrapper[4867]: E0126 11:18:56.563540 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 11:18:56 crc kubenswrapper[4867]: E0126 11:18:56.563747 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 11:18:56 crc kubenswrapper[4867]: E0126 11:18:56.564022 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 11:18:56 crc kubenswrapper[4867]: E0126 11:18:56.564149 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-nmdmx" podUID="ed024510-edc6-4306-b54b-63facba64419" Jan 26 11:18:56 crc kubenswrapper[4867]: I0126 11:18:56.573438 4867 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-07 00:34:31.294447882 +0000 UTC Jan 26 11:18:56 crc kubenswrapper[4867]: I0126 11:18:56.588534 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:18:56 crc kubenswrapper[4867]: I0126 11:18:56.588567 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:18:56 crc kubenswrapper[4867]: I0126 11:18:56.588577 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:18:56 crc kubenswrapper[4867]: I0126 11:18:56.588592 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:18:56 crc kubenswrapper[4867]: I0126 11:18:56.588623 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:18:56Z","lastTransitionTime":"2026-01-26T11:18:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:18:56 crc kubenswrapper[4867]: I0126 11:18:56.690926 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:18:56 crc kubenswrapper[4867]: I0126 11:18:56.690971 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:18:56 crc kubenswrapper[4867]: I0126 11:18:56.690988 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:18:56 crc kubenswrapper[4867]: I0126 11:18:56.691005 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:18:56 crc kubenswrapper[4867]: I0126 11:18:56.691017 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:18:56Z","lastTransitionTime":"2026-01-26T11:18:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:18:56 crc kubenswrapper[4867]: I0126 11:18:56.794386 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:18:56 crc kubenswrapper[4867]: I0126 11:18:56.794464 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:18:56 crc kubenswrapper[4867]: I0126 11:18:56.794485 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:18:56 crc kubenswrapper[4867]: I0126 11:18:56.794564 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:18:56 crc kubenswrapper[4867]: I0126 11:18:56.794586 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:18:56Z","lastTransitionTime":"2026-01-26T11:18:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:18:56 crc kubenswrapper[4867]: I0126 11:18:56.898044 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:18:56 crc kubenswrapper[4867]: I0126 11:18:56.898107 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:18:56 crc kubenswrapper[4867]: I0126 11:18:56.898125 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:18:56 crc kubenswrapper[4867]: I0126 11:18:56.898153 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:18:56 crc kubenswrapper[4867]: I0126 11:18:56.898172 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:18:56Z","lastTransitionTime":"2026-01-26T11:18:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:18:57 crc kubenswrapper[4867]: I0126 11:18:57.001445 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:18:57 crc kubenswrapper[4867]: I0126 11:18:57.001537 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:18:57 crc kubenswrapper[4867]: I0126 11:18:57.001567 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:18:57 crc kubenswrapper[4867]: I0126 11:18:57.001608 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:18:57 crc kubenswrapper[4867]: I0126 11:18:57.001635 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:18:57Z","lastTransitionTime":"2026-01-26T11:18:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:18:57 crc kubenswrapper[4867]: I0126 11:18:57.104381 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:18:57 crc kubenswrapper[4867]: I0126 11:18:57.104459 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:18:57 crc kubenswrapper[4867]: I0126 11:18:57.104505 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:18:57 crc kubenswrapper[4867]: I0126 11:18:57.104538 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:18:57 crc kubenswrapper[4867]: I0126 11:18:57.104564 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:18:57Z","lastTransitionTime":"2026-01-26T11:18:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:18:57 crc kubenswrapper[4867]: I0126 11:18:57.207710 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:18:57 crc kubenswrapper[4867]: I0126 11:18:57.207754 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:18:57 crc kubenswrapper[4867]: I0126 11:18:57.207764 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:18:57 crc kubenswrapper[4867]: I0126 11:18:57.207781 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:18:57 crc kubenswrapper[4867]: I0126 11:18:57.207799 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:18:57Z","lastTransitionTime":"2026-01-26T11:18:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:18:57 crc kubenswrapper[4867]: I0126 11:18:57.310187 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:18:57 crc kubenswrapper[4867]: I0126 11:18:57.310498 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:18:57 crc kubenswrapper[4867]: I0126 11:18:57.310514 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:18:57 crc kubenswrapper[4867]: I0126 11:18:57.310529 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:18:57 crc kubenswrapper[4867]: I0126 11:18:57.310538 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:18:57Z","lastTransitionTime":"2026-01-26T11:18:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:18:57 crc kubenswrapper[4867]: I0126 11:18:57.412655 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:18:57 crc kubenswrapper[4867]: I0126 11:18:57.412704 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:18:57 crc kubenswrapper[4867]: I0126 11:18:57.412719 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:18:57 crc kubenswrapper[4867]: I0126 11:18:57.412739 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:18:57 crc kubenswrapper[4867]: I0126 11:18:57.412752 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:18:57Z","lastTransitionTime":"2026-01-26T11:18:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:18:57 crc kubenswrapper[4867]: I0126 11:18:57.515694 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:18:57 crc kubenswrapper[4867]: I0126 11:18:57.516136 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:18:57 crc kubenswrapper[4867]: I0126 11:18:57.516468 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:18:57 crc kubenswrapper[4867]: I0126 11:18:57.516703 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:18:57 crc kubenswrapper[4867]: I0126 11:18:57.516895 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:18:57Z","lastTransitionTime":"2026-01-26T11:18:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:18:57 crc kubenswrapper[4867]: I0126 11:18:57.574492 4867 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-09 11:29:52.264416657 +0000 UTC Jan 26 11:18:57 crc kubenswrapper[4867]: I0126 11:18:57.620276 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:18:57 crc kubenswrapper[4867]: I0126 11:18:57.620333 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:18:57 crc kubenswrapper[4867]: I0126 11:18:57.620345 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:18:57 crc kubenswrapper[4867]: I0126 11:18:57.620366 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:18:57 crc kubenswrapper[4867]: I0126 11:18:57.620385 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:18:57Z","lastTransitionTime":"2026-01-26T11:18:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:18:57 crc kubenswrapper[4867]: I0126 11:18:57.723596 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:18:57 crc kubenswrapper[4867]: I0126 11:18:57.724086 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:18:57 crc kubenswrapper[4867]: I0126 11:18:57.724306 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:18:57 crc kubenswrapper[4867]: I0126 11:18:57.724486 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:18:57 crc kubenswrapper[4867]: I0126 11:18:57.724626 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:18:57Z","lastTransitionTime":"2026-01-26T11:18:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:18:57 crc kubenswrapper[4867]: I0126 11:18:57.827776 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:18:57 crc kubenswrapper[4867]: I0126 11:18:57.827876 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:18:57 crc kubenswrapper[4867]: I0126 11:18:57.827896 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:18:57 crc kubenswrapper[4867]: I0126 11:18:57.827921 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:18:57 crc kubenswrapper[4867]: I0126 11:18:57.827940 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:18:57Z","lastTransitionTime":"2026-01-26T11:18:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:18:57 crc kubenswrapper[4867]: I0126 11:18:57.931114 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:18:57 crc kubenswrapper[4867]: I0126 11:18:57.931176 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:18:57 crc kubenswrapper[4867]: I0126 11:18:57.931196 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:18:57 crc kubenswrapper[4867]: I0126 11:18:57.931250 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:18:57 crc kubenswrapper[4867]: I0126 11:18:57.931268 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:18:57Z","lastTransitionTime":"2026-01-26T11:18:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:18:58 crc kubenswrapper[4867]: I0126 11:18:58.034707 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:18:58 crc kubenswrapper[4867]: I0126 11:18:58.034766 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:18:58 crc kubenswrapper[4867]: I0126 11:18:58.034778 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:18:58 crc kubenswrapper[4867]: I0126 11:18:58.034801 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:18:58 crc kubenswrapper[4867]: I0126 11:18:58.034826 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:18:58Z","lastTransitionTime":"2026-01-26T11:18:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:18:58 crc kubenswrapper[4867]: I0126 11:18:58.137726 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:18:58 crc kubenswrapper[4867]: I0126 11:18:58.137777 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:18:58 crc kubenswrapper[4867]: I0126 11:18:58.137793 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:18:58 crc kubenswrapper[4867]: I0126 11:18:58.137816 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:18:58 crc kubenswrapper[4867]: I0126 11:18:58.137831 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:18:58Z","lastTransitionTime":"2026-01-26T11:18:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:18:58 crc kubenswrapper[4867]: I0126 11:18:58.240373 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:18:58 crc kubenswrapper[4867]: I0126 11:18:58.240433 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:18:58 crc kubenswrapper[4867]: I0126 11:18:58.240452 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:18:58 crc kubenswrapper[4867]: I0126 11:18:58.240475 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:18:58 crc kubenswrapper[4867]: I0126 11:18:58.240497 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:18:58Z","lastTransitionTime":"2026-01-26T11:18:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:18:58 crc kubenswrapper[4867]: I0126 11:18:58.343471 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:18:58 crc kubenswrapper[4867]: I0126 11:18:58.343519 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:18:58 crc kubenswrapper[4867]: I0126 11:18:58.343529 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:18:58 crc kubenswrapper[4867]: I0126 11:18:58.343544 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:18:58 crc kubenswrapper[4867]: I0126 11:18:58.343558 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:18:58Z","lastTransitionTime":"2026-01-26T11:18:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:18:58 crc kubenswrapper[4867]: I0126 11:18:58.446798 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:18:58 crc kubenswrapper[4867]: I0126 11:18:58.447197 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:18:58 crc kubenswrapper[4867]: I0126 11:18:58.447210 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:18:58 crc kubenswrapper[4867]: I0126 11:18:58.447253 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:18:58 crc kubenswrapper[4867]: I0126 11:18:58.447270 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:18:58Z","lastTransitionTime":"2026-01-26T11:18:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:18:58 crc kubenswrapper[4867]: I0126 11:18:58.549772 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:18:58 crc kubenswrapper[4867]: I0126 11:18:58.550149 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:18:58 crc kubenswrapper[4867]: I0126 11:18:58.550261 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:18:58 crc kubenswrapper[4867]: I0126 11:18:58.550343 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:18:58 crc kubenswrapper[4867]: I0126 11:18:58.550400 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:18:58Z","lastTransitionTime":"2026-01-26T11:18:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 11:18:58 crc kubenswrapper[4867]: I0126 11:18:58.563051 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 11:18:58 crc kubenswrapper[4867]: I0126 11:18:58.563111 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 11:18:58 crc kubenswrapper[4867]: E0126 11:18:58.563477 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 11:18:58 crc kubenswrapper[4867]: I0126 11:18:58.563252 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 11:18:58 crc kubenswrapper[4867]: E0126 11:18:58.563501 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 11:18:58 crc kubenswrapper[4867]: I0126 11:18:58.563111 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-nmdmx" Jan 26 11:18:58 crc kubenswrapper[4867]: E0126 11:18:58.563895 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-nmdmx" podUID="ed024510-edc6-4306-b54b-63facba64419" Jan 26 11:18:58 crc kubenswrapper[4867]: E0126 11:18:58.564000 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 11:18:58 crc kubenswrapper[4867]: I0126 11:18:58.564280 4867 scope.go:117] "RemoveContainer" containerID="22b7f3763005c5c5c24358d0a42b53a287c54548e8ba4785affb4c90b9c000a2" Jan 26 11:18:58 crc kubenswrapper[4867]: I0126 11:18:58.574908 4867 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-13 11:23:06.62629015 +0000 UTC Jan 26 11:18:58 crc kubenswrapper[4867]: I0126 11:18:58.659020 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:18:58 crc kubenswrapper[4867]: I0126 11:18:58.659086 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:18:58 crc kubenswrapper[4867]: I0126 11:18:58.659115 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:18:58 crc kubenswrapper[4867]: I0126 11:18:58.659148 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:18:58 crc kubenswrapper[4867]: I0126 11:18:58.659169 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:18:58Z","lastTransitionTime":"2026-01-26T11:18:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:18:58 crc kubenswrapper[4867]: I0126 11:18:58.763829 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:18:58 crc kubenswrapper[4867]: I0126 11:18:58.763927 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:18:58 crc kubenswrapper[4867]: I0126 11:18:58.763952 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:18:58 crc kubenswrapper[4867]: I0126 11:18:58.763990 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:18:58 crc kubenswrapper[4867]: I0126 11:18:58.764015 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:18:58Z","lastTransitionTime":"2026-01-26T11:18:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:18:58 crc kubenswrapper[4867]: I0126 11:18:58.867556 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:18:58 crc kubenswrapper[4867]: I0126 11:18:58.867625 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:18:58 crc kubenswrapper[4867]: I0126 11:18:58.867645 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:18:58 crc kubenswrapper[4867]: I0126 11:18:58.867672 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:18:58 crc kubenswrapper[4867]: I0126 11:18:58.867693 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:18:58Z","lastTransitionTime":"2026-01-26T11:18:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:18:58 crc kubenswrapper[4867]: I0126 11:18:58.971096 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:18:58 crc kubenswrapper[4867]: I0126 11:18:58.971181 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:18:58 crc kubenswrapper[4867]: I0126 11:18:58.971521 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:18:58 crc kubenswrapper[4867]: I0126 11:18:58.971610 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:18:58 crc kubenswrapper[4867]: I0126 11:18:58.971632 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:18:58Z","lastTransitionTime":"2026-01-26T11:18:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:18:59 crc kubenswrapper[4867]: I0126 11:18:59.075724 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:18:59 crc kubenswrapper[4867]: I0126 11:18:59.075790 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:18:59 crc kubenswrapper[4867]: I0126 11:18:59.075806 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:18:59 crc kubenswrapper[4867]: I0126 11:18:59.075831 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:18:59 crc kubenswrapper[4867]: I0126 11:18:59.075848 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:18:59Z","lastTransitionTime":"2026-01-26T11:18:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:18:59 crc kubenswrapper[4867]: I0126 11:18:59.179355 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:18:59 crc kubenswrapper[4867]: I0126 11:18:59.179420 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:18:59 crc kubenswrapper[4867]: I0126 11:18:59.179437 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:18:59 crc kubenswrapper[4867]: I0126 11:18:59.179461 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:18:59 crc kubenswrapper[4867]: I0126 11:18:59.179479 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:18:59Z","lastTransitionTime":"2026-01-26T11:18:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:18:59 crc kubenswrapper[4867]: I0126 11:18:59.283494 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:18:59 crc kubenswrapper[4867]: I0126 11:18:59.283586 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:18:59 crc kubenswrapper[4867]: I0126 11:18:59.283605 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:18:59 crc kubenswrapper[4867]: I0126 11:18:59.283636 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:18:59 crc kubenswrapper[4867]: I0126 11:18:59.283658 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:18:59Z","lastTransitionTime":"2026-01-26T11:18:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:18:59 crc kubenswrapper[4867]: I0126 11:18:59.386799 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:18:59 crc kubenswrapper[4867]: I0126 11:18:59.386872 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:18:59 crc kubenswrapper[4867]: I0126 11:18:59.386892 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:18:59 crc kubenswrapper[4867]: I0126 11:18:59.386921 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:18:59 crc kubenswrapper[4867]: I0126 11:18:59.386942 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:18:59Z","lastTransitionTime":"2026-01-26T11:18:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:18:59 crc kubenswrapper[4867]: I0126 11:18:59.491112 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:18:59 crc kubenswrapper[4867]: I0126 11:18:59.491188 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:18:59 crc kubenswrapper[4867]: I0126 11:18:59.491210 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:18:59 crc kubenswrapper[4867]: I0126 11:18:59.491278 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:18:59 crc kubenswrapper[4867]: I0126 11:18:59.491301 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:18:59Z","lastTransitionTime":"2026-01-26T11:18:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:18:59 crc kubenswrapper[4867]: I0126 11:18:59.575505 4867 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-04 06:33:22.708211127 +0000 UTC Jan 26 11:18:59 crc kubenswrapper[4867]: I0126 11:18:59.594376 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:18:59 crc kubenswrapper[4867]: I0126 11:18:59.594479 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:18:59 crc kubenswrapper[4867]: I0126 11:18:59.594495 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:18:59 crc kubenswrapper[4867]: I0126 11:18:59.594522 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:18:59 crc kubenswrapper[4867]: I0126 11:18:59.594539 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:18:59Z","lastTransitionTime":"2026-01-26T11:18:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:18:59 crc kubenswrapper[4867]: I0126 11:18:59.697952 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:18:59 crc kubenswrapper[4867]: I0126 11:18:59.698022 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:18:59 crc kubenswrapper[4867]: I0126 11:18:59.698040 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:18:59 crc kubenswrapper[4867]: I0126 11:18:59.698068 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:18:59 crc kubenswrapper[4867]: I0126 11:18:59.698090 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:18:59Z","lastTransitionTime":"2026-01-26T11:18:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:18:59 crc kubenswrapper[4867]: I0126 11:18:59.802081 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:18:59 crc kubenswrapper[4867]: I0126 11:18:59.802144 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:18:59 crc kubenswrapper[4867]: I0126 11:18:59.802167 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:18:59 crc kubenswrapper[4867]: I0126 11:18:59.802200 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:18:59 crc kubenswrapper[4867]: I0126 11:18:59.802255 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:18:59Z","lastTransitionTime":"2026-01-26T11:18:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:18:59 crc kubenswrapper[4867]: I0126 11:18:59.904783 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:18:59 crc kubenswrapper[4867]: I0126 11:18:59.904834 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:18:59 crc kubenswrapper[4867]: I0126 11:18:59.904851 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:18:59 crc kubenswrapper[4867]: I0126 11:18:59.904873 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:18:59 crc kubenswrapper[4867]: I0126 11:18:59.904890 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:18:59Z","lastTransitionTime":"2026-01-26T11:18:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:19:00 crc kubenswrapper[4867]: I0126 11:19:00.008861 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:19:00 crc kubenswrapper[4867]: I0126 11:19:00.008948 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:19:00 crc kubenswrapper[4867]: I0126 11:19:00.008962 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:19:00 crc kubenswrapper[4867]: I0126 11:19:00.009023 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:19:00 crc kubenswrapper[4867]: I0126 11:19:00.009046 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:19:00Z","lastTransitionTime":"2026-01-26T11:19:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:19:00 crc kubenswrapper[4867]: I0126 11:19:00.115771 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:19:00 crc kubenswrapper[4867]: I0126 11:19:00.115846 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:19:00 crc kubenswrapper[4867]: I0126 11:19:00.115866 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:19:00 crc kubenswrapper[4867]: I0126 11:19:00.115897 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:19:00 crc kubenswrapper[4867]: I0126 11:19:00.115921 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:19:00Z","lastTransitionTime":"2026-01-26T11:19:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:19:00 crc kubenswrapper[4867]: I0126 11:19:00.219591 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:19:00 crc kubenswrapper[4867]: I0126 11:19:00.219718 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:19:00 crc kubenswrapper[4867]: I0126 11:19:00.219751 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:19:00 crc kubenswrapper[4867]: I0126 11:19:00.219792 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:19:00 crc kubenswrapper[4867]: I0126 11:19:00.219818 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:19:00Z","lastTransitionTime":"2026-01-26T11:19:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:19:00 crc kubenswrapper[4867]: I0126 11:19:00.323896 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:19:00 crc kubenswrapper[4867]: I0126 11:19:00.323960 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:19:00 crc kubenswrapper[4867]: I0126 11:19:00.323979 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:19:00 crc kubenswrapper[4867]: I0126 11:19:00.324005 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:19:00 crc kubenswrapper[4867]: I0126 11:19:00.324023 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:19:00Z","lastTransitionTime":"2026-01-26T11:19:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:19:00 crc kubenswrapper[4867]: I0126 11:19:00.443200 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:19:00 crc kubenswrapper[4867]: I0126 11:19:00.443326 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:19:00 crc kubenswrapper[4867]: I0126 11:19:00.443345 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:19:00 crc kubenswrapper[4867]: I0126 11:19:00.443380 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:19:00 crc kubenswrapper[4867]: I0126 11:19:00.443400 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:19:00Z","lastTransitionTime":"2026-01-26T11:19:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:19:00 crc kubenswrapper[4867]: I0126 11:19:00.546450 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:19:00 crc kubenswrapper[4867]: I0126 11:19:00.546499 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:19:00 crc kubenswrapper[4867]: I0126 11:19:00.546511 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:19:00 crc kubenswrapper[4867]: I0126 11:19:00.546529 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:19:00 crc kubenswrapper[4867]: I0126 11:19:00.546540 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:19:00Z","lastTransitionTime":"2026-01-26T11:19:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 11:19:00 crc kubenswrapper[4867]: I0126 11:19:00.563060 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 11:19:00 crc kubenswrapper[4867]: I0126 11:19:00.563180 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-nmdmx" Jan 26 11:19:00 crc kubenswrapper[4867]: E0126 11:19:00.563345 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 11:19:00 crc kubenswrapper[4867]: I0126 11:19:00.563391 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 11:19:00 crc kubenswrapper[4867]: I0126 11:19:00.563458 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 11:19:00 crc kubenswrapper[4867]: E0126 11:19:00.563412 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-nmdmx" podUID="ed024510-edc6-4306-b54b-63facba64419" Jan 26 11:19:00 crc kubenswrapper[4867]: E0126 11:19:00.563596 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 11:19:00 crc kubenswrapper[4867]: E0126 11:19:00.563641 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 11:19:00 crc kubenswrapper[4867]: I0126 11:19:00.576411 4867 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-08 19:21:13.49369223 +0000 UTC Jan 26 11:19:00 crc kubenswrapper[4867]: I0126 11:19:00.582588 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://613d61cfb62d18a99623d3e4f89763c04f15484b764f8bb0c7a5e4457a3505d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"
name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa8d05ed772e4273e8962df2230bc8d45db5cfed967165716438d58c54b9e24b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:19:00Z is after 2025-08-24T17:21:41Z" Jan 26 11:19:00 crc kubenswrapper[4867]: I0126 11:19:00.599374 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-hn8xr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dc37e5d1-ba44-4a54-ac36-ab7cdef17212\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:18:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:18:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0560f844140132daa11aa58fcba689fc945d9e6bc67a4fc5598dfaf566749866\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://519d5416896aca5923540078a8bd13f39a190ad4acda3d0bfb3375a5dbfe6b80\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T11:18:39Z\\\",\\\"message\\\":\\\"2026-01-26T11:17:54+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_26c1a4ca-9538-4b99-995f-bb0e07bf0d9b\\\\n2026-01-26T11:17:54+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_26c1a4ca-9538-4b99-995f-bb0e07bf0d9b to /host/opt/cni/bin/\\\\n2026-01-26T11:17:54Z [verbose] multus-daemon started\\\\n2026-01-26T11:17:54Z [verbose] 
Readiness Indicator file check\\\\n2026-01-26T11:18:39Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T11:17:53Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:18:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xrjnj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T11:17:52Z\\\"}}\" for pod \"openshift-multus\"/\"multus-hn8xr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:19:00Z is after 2025-08-24T17:21:41Z" Jan 26 11:19:00 crc kubenswrapper[4867]: I0126 11:19:00.617788 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-nbvlt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf285485-1027-4bdc-bdfa-934ef32e7f5e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:18:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:18:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:18:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:18:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://764c348147bb67a611bc5252c49dfe8f586e6a1a6d6a9e9c6674aabcc3028804\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:18:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kzhnr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e1bb2e7344b3822e63a68d366f4821de6e13
1a4cca163ad67cf44e2b83f9ccd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:18:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kzhnr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T11:18:05Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-nbvlt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:19:00Z is after 2025-08-24T17:21:41Z" Jan 26 11:19:00 crc kubenswrapper[4867]: I0126 11:19:00.632489 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a4b0f2b9-fac8-442e-89a1-43ebff8d4268\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:18:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:18:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2140f6b328ef3b937ef0009c1ce35265a18b51c6efb7e8785870affecd68dace\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://21a6c8852b9648bd5bee43aee9c3fae16363aeaf1ad05dcaa41a04775784b108\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ef4b66f160065f550844a298bf29dcd3f12879c1312554968eba3c1b2268303\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0afa38d4a8ffa664649d154240a8d74ee09bc074127e5edea85ec1de553723fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://0afa38d4a8ffa664649d154240a8d74ee09bc074127e5edea85ec1de553723fe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T11:17:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T11:17:31Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T11:17:30Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:19:00Z is after 2025-08-24T17:21:41Z" Jan 26 11:19:00 crc kubenswrapper[4867]: I0126 11:19:00.646478 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4b812c7c-622a-4255-ae3d-e48a62132126\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8ca8a196d11248401898d6c6591931638d1ebf8675414d0e588454dbc1da626\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c958441373fca0b105ec5f119f7a2aca7557f58d49ccba356f824708b3602e3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c958441373fca0b105ec5f119f7a2aca7557f58d49ccba356f824708b3602e3c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T11:17:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T11:17:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T11:17:30Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:19:00Z is after 2025-08-24T17:21:41Z" Jan 26 11:19:00 crc kubenswrapper[4867]: I0126 11:19:00.650588 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:19:00 crc kubenswrapper[4867]: I0126 11:19:00.650656 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:19:00 crc kubenswrapper[4867]: I0126 11:19:00.650672 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:19:00 crc kubenswrapper[4867]: I0126 11:19:00.650693 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:19:00 crc kubenswrapper[4867]: I0126 11:19:00.650707 4867 setters.go:603] "Node 
became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:19:00Z","lastTransitionTime":"2026-01-26T11:19:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 11:19:00 crc kubenswrapper[4867]: I0126 11:19:00.662764 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e36e94ce-bdbb-4b65-b38a-d591d99ec132\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:18:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:18:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://60d611d38d0a8a4fafcb8c6a4598878531015a3c9a31bee179693c997be0f5ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\
\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e7960050fb97fd034b58d207bdcc7aa794be098102ebc9b50f3b011c2079a292\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8e9df6d5bbbaa36f6ffe9e36239da17b2a7d5d85edebc9f108ede9bf7b7c0a2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"
containerID\\\":\\\"cri-o://08f8529b91df2d7dcbf1fac6018657124b3922dfd4ad68bd7cf315594d6e7f81\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4d1617eeb2f19df288b984f761116ae902263d7ccf8406998d0e323fb7d52390\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-26T11:17:50Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0126 11:17:44.107498 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0126 11:17:44.109064 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1825304579/tls.crt::/tmp/serving-cert-1825304579/tls.key\\\\\\\"\\\\nI0126 11:17:50.303460 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0126 11:17:50.308764 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0126 11:17:50.308805 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0126 11:17:50.308906 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0126 11:17:50.308924 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0126 11:17:50.322907 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0126 11:17:50.322946 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 11:17:50.322953 1 secure_serving.go:69] 
Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 11:17:50.322957 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0126 11:17:50.322960 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0126 11:17:50.322963 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0126 11:17:50.322966 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0126 11:17:50.323261 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0126 11:17:50.330430 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T11:17:33Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d41fab03911c1d3b84f4c34da0d1c8e82c8f2691450b5f9663e31ce329bf04b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:33Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initC
ontainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b69d7e45a6059b1e01f80cc4efb536069c91cae05c8912d18fd48f25c6ebf51e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b69d7e45a6059b1e01f80cc4efb536069c91cae05c8912d18fd48f25c6ebf51e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T11:17:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T11:17:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T11:17:30Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:19:00Z is after 2025-08-24T17:21:41Z" Jan 26 11:19:00 crc kubenswrapper[4867]: I0126 11:19:00.677975 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0d7b1047-65d1-4695-b5a5-95ea6a8c3b54\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://237a29b411629c3650250f62fdec7d2c5412763d6cabb2ee0fe8e5e19e320e15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ef0db1149c70e41b7f44d00d43f883d14fcf635d6f34462dd37c8a550a15a4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://45d3dc673a8d20531ae5077a1196a368cac64f6afd15c4eb8f3910ee9bf51317\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc2ddda5781094e59bb54ddf724fa4f948a047b17b01f0185ef651d90a66f36f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-26T11:17:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T11:17:30Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:19:00Z is after 2025-08-24T17:21:41Z" Jan 26 11:19:00 crc kubenswrapper[4867]: I0126 11:19:00.706300 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c572224f36b5a0fc8a86b3e61aa97b96f5bd189d245b31804a11aaa75d04774\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-26T11:19:00Z is after 2025-08-24T17:21:41Z" Jan 26 11:19:00 crc kubenswrapper[4867]: I0126 11:19:00.730025 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:48Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:19:00Z is after 2025-08-24T17:21:41Z" Jan 26 11:19:00 crc kubenswrapper[4867]: I0126 11:19:00.742279 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:54Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:54Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8b0139861b48347f9916ca780499486eda856ef7c08cfe6c57201daf085bf61\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-26T11:19:00Z is after 2025-08-24T17:21:41Z" Jan 26 11:19:00 crc kubenswrapper[4867]: I0126 11:19:00.753146 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:19:00 crc kubenswrapper[4867]: I0126 11:19:00.753188 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:19:00 crc kubenswrapper[4867]: I0126 11:19:00.753197 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:19:00 crc kubenswrapper[4867]: I0126 11:19:00.753212 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:19:00 crc kubenswrapper[4867]: I0126 11:19:00.753233 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:19:00Z","lastTransitionTime":"2026-01-26T11:19:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:19:00 crc kubenswrapper[4867]: I0126 11:19:00.762723 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-9fjlf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d0cb57c7-fd32-41c2-b873-a3f017b9f1b1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:18:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:18:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:18:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c7012fb0651d46334c26887a02a5c44a8fc67c2ad3539e5321e16b57071b9a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:18:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhrdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://49541a51fced16e579b4cd10875fe7e660d7674508aba3ea44011642241e6556\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://49541a51fced16e579b4cd10875fe7e660d7674508aba3ea44011642241e6556\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T11:17:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T11:17:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhrdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://de0898b43ff7857dd40569f0552c8802c9bb5ecdf3cbd23f552dfb402a02044c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"
containerID\\\":\\\"cri-o://de0898b43ff7857dd40569f0552c8802c9bb5ecdf3cbd23f552dfb402a02044c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T11:17:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T11:17:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhrdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f299de50c4b1a26c4ed82f6f7b17c063e00b8a9de478fa0adafb4abb50195c5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f299de50c4b1a26c4ed82f6f7b17c063e00b8a9de478fa0adafb4abb50195c5f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T11:17:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T11:17:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveRead
Only\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhrdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5bf54cdee66861e0419af5e4cabd6e8410b771df717f2e958f5eda8d14c47862\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5bf54cdee66861e0419af5e4cabd6e8410b771df717f2e958f5eda8d14c47862\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T11:17:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T11:17:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhrdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d802b7e0dce47eace09ec80bf6299fbe57b588891ebab7f34de835f6a7314ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\"
:0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3d802b7e0dce47eace09ec80bf6299fbe57b588891ebab7f34de835f6a7314ac\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T11:17:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T11:17:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhrdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e77fa1081a2d8d26fe07f92196b85dd5b8eb28583ce5489e2f5cbcfc6fff8780\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e77fa1081a2d8d26fe07f92196b85dd5b8eb28583ce5489e2f5cbcfc6fff8780\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T11:18:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T11:18:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhrdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\"
,\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T11:17:52Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-9fjlf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:19:00Z is after 2025-08-24T17:21:41Z" Jan 26 11:19:00 crc kubenswrapper[4867]: I0126 11:19:00.778114 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-nxkwj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"53ae46dc-30ef-4dfd-b80e-bacd7542634f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a6b647bab472836bbf6aebd01d20d186c5a3fb95f20cc9f44ec837d93c7df617\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev
@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b5h2x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T11:17:54Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-nxkwj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:19:00Z is after 2025-08-24T17:21:41Z" Jan 26 11:19:00 crc kubenswrapper[4867]: I0126 11:19:00.802289 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-nmdmx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed024510-edc6-4306-b54b-63facba64419\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:18:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:18:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:18:06Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:18:06Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lvcgj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lvcgj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T11:18:06Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-nmdmx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:19:00Z is after 2025-08-24T17:21:41Z" Jan 26 11:19:00 crc 
kubenswrapper[4867]: I0126 11:19:00.820947 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4fbb2033-c5c9-48d1-971f-252b8982e64c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b260f2517962ffaecebf86c2b72c91486b825ca898f907378e8f8cea16d1db92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}
]},{\\\"containerID\\\":\\\"cri-o://6843bd212ce4f17d2ae4d53441d56f7be28f8f49c8994458d611159101e193a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2aba77c7c6a2f5450fa60356c3eccf816726f0074d65a1d5805c4278d31157c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e941db67a8021f5eb4eb6e395c2369d052a4cde25d67eb1885768a064185d8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779
036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc9e90f1b0e6c09e4595a7e450325550012dc52c6a6fc4a95d14b37c0f939ce7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e22739a3c06bddfda493f25aca23cf47d71c8bde27a6b4bb505592b42350752\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"sta
te\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0e22739a3c06bddfda493f25aca23cf47d71c8bde27a6b4bb505592b42350752\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T11:17:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T11:17:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://80b44caec3547caebca126742ca337ad1851edc3e9474127528a9e5184b5ec23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://80b44caec3547caebca126742ca337ad1851edc3e9474127528a9e5184b5ec23\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T11:17:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T11:17:32Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://83f7cf77acd26e7af095c4a5432efde85eef6dd5e434141cb5d6abd12770968e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://83f7cf77acd26e7af095c4a5432efde85eef6dd5e434141cb5d6abd12770968e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T11:17:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T11:17:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T11:17:30Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:19:00Z is after 2025-08-24T17:21:41Z" Jan 26 11:19:00 crc kubenswrapper[4867]: I0126 11:19:00.833955 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-g6cth" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"115cad9f-057f-4e63-b408-8fa7a358a191\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ff5997c5ecb7b6a4257e45de25c7b8f3cbe39cc5efe49cf2e2baff5447a947b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4wqzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a97a794991491a7b7ac8c0d20d0fa2f4f36c8ba8
82962b5c203933431d324105\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4wqzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T11:17:52Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-g6cth\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:19:00Z is after 2025-08-24T17:21:41Z" Jan 26 11:19:00 crc kubenswrapper[4867]: I0126 11:19:00.844966 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:48Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:19:00Z is after 2025-08-24T17:21:41Z" Jan 26 11:19:00 crc kubenswrapper[4867]: I0126 11:19:00.854311 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:48Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:19:00Z is after 2025-08-24T17:21:41Z" Jan 26 11:19:00 crc kubenswrapper[4867]: I0126 11:19:00.855445 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:19:00 crc kubenswrapper[4867]: I0126 11:19:00.855470 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:19:00 crc kubenswrapper[4867]: I0126 11:19:00.855479 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:19:00 crc kubenswrapper[4867]: I0126 
11:19:00.855493 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:19:00 crc kubenswrapper[4867]: I0126 11:19:00.855502 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:19:00Z","lastTransitionTime":"2026-01-26T11:19:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 11:19:00 crc kubenswrapper[4867]: I0126 11:19:00.862093 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-wmdmh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6ad862fa-4af9-49f7-a629-ebf54a83ca45\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://726513fd6aefd76b58a4c8cf08fa4588aadcdb02e8dbe3bb4f23004f11eb29dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888c
f2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6hvh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T11:17:51Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-wmdmh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:19:00Z is after 2025-08-24T17:21:41Z" Jan 26 11:19:00 crc kubenswrapper[4867]: I0126 11:19:00.877446 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-p8ngn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4a3be637-cf04-4c55-bf72-67fdad83cc44\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:52Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:52Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c0a7bea27cb1d32235548b7f7e457dbecaf0ac88bdd9866302c758680990d4af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cn6f7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9025121ad416b7b6ff2f2eb7ecb3961eccbbae7b5ce4b394161ad99c5901199\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cn6f7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://adb68b135bd42f78556598af17aa21dc7daff1246e821ca51cb683e46974a85a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cn6f7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://30fe31f1d314dace6e6ee0bc31b2127ce6e6db198975c78505b0f6f07aa6f30c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:55Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cn6f7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://99df8b0b5fa7178489716f40f806caff969c5045ab573ae343b222ed80bf0799\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cn6f7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://192f11212d63cdabfda1649589946ca65e5c4a38805c85b94e4277f2e8c7bf57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cn6f7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://22b7f3763005c5c5c24358d0a42b53a287c54548e8ba4785affb4c90b9c000a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://22b7f3763005c5c5c24358d0a42b53a287c54548e8ba4785affb4c90b9c000a2\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T11:18:31Z\\\",\\\"message\\\":\\\"ersions/factory.go:117\\\\nI0126 11:18:31.446431 6551 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0126 11:18:31.446674 6551 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0126 
11:18:31.446691 6551 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0126 11:18:31.446703 6551 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0126 11:18:31.446732 6551 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0126 11:18:31.446742 6551 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0126 11:18:31.446764 6551 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0126 11:18:31.446764 6551 handler.go:208] Removed *v1.Node event handler 7\\\\nI0126 11:18:31.446772 6551 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0126 11:18:31.446779 6551 handler.go:208] Removed *v1.Node event handler 2\\\\nI0126 11:18:31.446791 6551 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0126 11:18:31.447312 6551 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0126 11:18:31.447355 6551 factory.go:656] Stopping watch factory\\\\nI0126 11:18:31.447369 6551 ovnkube.go:599] Stopped ovnkube\\\\nI0126 1\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T11:18:30Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-p8ngn_openshift-ovn-kubernetes(4a3be637-cf04-4c55-bf72-67fdad83cc44)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cn6f7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b58f62893fb0fc0bd355ef42fb5f15830912cfd3a03b5266cb3c3171dbb8bcb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cn6f7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://388ea9d9185ed0fed9cbb0da08ab9aa6fcf1d81192fe2646cd51b54245c4e34b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://388ea9d9185ed0fed9
cbb0da08ab9aa6fcf1d81192fe2646cd51b54245c4e34b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T11:17:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T11:17:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cn6f7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T11:17:52Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-p8ngn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:19:00Z is after 2025-08-24T17:21:41Z" Jan 26 11:19:00 crc kubenswrapper[4867]: I0126 11:19:00.957744 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:19:00 crc kubenswrapper[4867]: I0126 11:19:00.957786 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:19:00 crc kubenswrapper[4867]: I0126 11:19:00.957797 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:19:00 crc kubenswrapper[4867]: I0126 11:19:00.957812 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:19:00 crc kubenswrapper[4867]: I0126 11:19:00.957822 4867 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:19:00Z","lastTransitionTime":"2026-01-26T11:19:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 11:19:01 crc kubenswrapper[4867]: I0126 11:19:01.061107 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:19:01 crc kubenswrapper[4867]: I0126 11:19:01.061146 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:19:01 crc kubenswrapper[4867]: I0126 11:19:01.061156 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:19:01 crc kubenswrapper[4867]: I0126 11:19:01.061172 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:19:01 crc kubenswrapper[4867]: I0126 11:19:01.061182 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:19:01Z","lastTransitionTime":"2026-01-26T11:19:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:19:01 crc kubenswrapper[4867]: I0126 11:19:01.150872 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-p8ngn_4a3be637-cf04-4c55-bf72-67fdad83cc44/ovnkube-controller/2.log" Jan 26 11:19:01 crc kubenswrapper[4867]: I0126 11:19:01.153501 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-p8ngn" event={"ID":"4a3be637-cf04-4c55-bf72-67fdad83cc44","Type":"ContainerStarted","Data":"5d232dca466c33576da8850254342ae7f79c841e0f08fb49fedf655f3efe3d3d"} Jan 26 11:19:01 crc kubenswrapper[4867]: I0126 11:19:01.154496 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-p8ngn" Jan 26 11:19:01 crc kubenswrapper[4867]: I0126 11:19:01.165315 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:19:01 crc kubenswrapper[4867]: I0126 11:19:01.165352 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:19:01 crc kubenswrapper[4867]: I0126 11:19:01.165371 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:19:01 crc kubenswrapper[4867]: I0126 11:19:01.165387 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:19:01 crc kubenswrapper[4867]: I0126 11:19:01.165400 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:19:01Z","lastTransitionTime":"2026-01-26T11:19:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:19:01 crc kubenswrapper[4867]: I0126 11:19:01.173999 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-hn8xr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dc37e5d1-ba44-4a54-ac36-ab7cdef17212\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:18:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:18:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0560f844140132daa11aa58fcba689fc945d9e6bc67a4fc5598dfaf566749866\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://519d5416896aca5923540078a8bd13f39a190ad4acda3d0bfb3375a5dbfe6b80\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T11:18:39Z\\\",\\\"message\\\":\\\"2026-01-26T11:17:54+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_26c1a4ca-9538-4b99-995f-bb0e07bf0d9b\\\\n2026-01-26T11:17:54+00:00 
[cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_26c1a4ca-9538-4b99-995f-bb0e07bf0d9b to /host/opt/cni/bin/\\\\n2026-01-26T11:17:54Z [verbose] multus-daemon started\\\\n2026-01-26T11:17:54Z [verbose] Readiness Indicator file check\\\\n2026-01-26T11:18:39Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T11:17:53Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:18:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\
\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xrjnj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T11:17:52Z\\\"}}\" for pod \"openshift-multus\"/\"multus-hn8xr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:19:01Z is after 2025-08-24T17:21:41Z" Jan 26 11:19:01 crc kubenswrapper[4867]: I0126 11:19:01.190684 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-nbvlt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf285485-1027-4bdc-bdfa-934ef32e7f5e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:18:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:18:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:18:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:18:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://764c348147bb67a611bc5252c49dfe8f586e6a1a6d6a9e9c6674aabcc3028804\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:18:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kzhnr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e1bb2e7344b3822e63a68d366f4821de6e13
1a4cca163ad67cf44e2b83f9ccd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:18:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kzhnr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T11:18:05Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-nbvlt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:19:01Z is after 2025-08-24T17:21:41Z" Jan 26 11:19:01 crc kubenswrapper[4867]: I0126 11:19:01.206210 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a4b0f2b9-fac8-442e-89a1-43ebff8d4268\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:18:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:18:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2140f6b328ef3b937ef0009c1ce35265a18b51c6efb7e8785870affecd68dace\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://21a6c8852b9648bd5bee43aee9c3fae16363aeaf1ad05dcaa41a04775784b108\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ef4b66f160065f550844a298bf29dcd3f12879c1312554968eba3c1b2268303\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0afa38d4a8ffa664649d154240a8d74ee09bc074127e5edea85ec1de553723fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://0afa38d4a8ffa664649d154240a8d74ee09bc074127e5edea85ec1de553723fe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T11:17:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T11:17:31Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T11:17:30Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:19:01Z is after 2025-08-24T17:21:41Z" Jan 26 11:19:01 crc kubenswrapper[4867]: I0126 11:19:01.219305 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4b812c7c-622a-4255-ae3d-e48a62132126\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8ca8a196d11248401898d6c6591931638d1ebf8675414d0e588454dbc1da626\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c958441373fca0b105ec5f119f7a2aca7557f58d49ccba356f824708b3602e3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c958441373fca0b105ec5f119f7a2aca7557f58d49ccba356f824708b3602e3c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T11:17:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T11:17:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T11:17:30Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:19:01Z is after 2025-08-24T17:21:41Z" Jan 26 11:19:01 crc kubenswrapper[4867]: I0126 11:19:01.235387 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e36e94ce-bdbb-4b65-b38a-d591d99ec132\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:18:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:18:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://60d611d38d0a8a4fafcb8c6a4598878531015a3c9a31bee179693c997be0f5ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e7960050fb97fd034b58d207bdcc7aa794be098102ebc9b50f3b011c2079a292\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8e9df6d5bbbaa36f6ffe9e36239da17b2a7d5d85edebc9f108ede9bf7b7c0a2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://08f8529b91df2d7dcbf1fac6018657124b3922dfd4ad68bd7cf315594d6e7f81\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4d1617eeb2f19df288b984f761116ae902263d7ccf8406998d0e323fb7d52390\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-26T11:17:50Z\\\"
,\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0126 11:17:44.107498 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0126 11:17:44.109064 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1825304579/tls.crt::/tmp/serving-cert-1825304579/tls.key\\\\\\\"\\\\nI0126 11:17:50.303460 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0126 11:17:50.308764 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0126 11:17:50.308805 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0126 11:17:50.308906 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0126 11:17:50.308924 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0126 11:17:50.322907 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0126 11:17:50.322946 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 11:17:50.322953 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 11:17:50.322957 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0126 11:17:50.322960 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0126 11:17:50.322963 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0126 11:17:50.322966 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0126 11:17:50.323261 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all 
endpoints registered and discovery information is complete\\\\nF0126 11:17:50.330430 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T11:17:33Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d41fab03911c1d3b84f4c34da0d1c8e82c8f2691450b5f9663e31ce329bf04b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:33Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b69d7e45a6059b1e01f80cc4efb536069c91cae05c8912d18fd48f25c6ebf51e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b69d7e45a6059b1e01f80cc4efb536069
c91cae05c8912d18fd48f25c6ebf51e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T11:17:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T11:17:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T11:17:30Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:19:01Z is after 2025-08-24T17:21:41Z" Jan 26 11:19:01 crc kubenswrapper[4867]: I0126 11:19:01.249193 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0d7b1047-65d1-4695-b5a5-95ea6a8c3b54\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://237a29b411629c3650250f62fdec7d2c5412763d6cabb2ee0fe8e5e19e320e15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ef0db1149c70e41b7f44d00d43f883d14fcf635d6f34462dd37c8a550a15a4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://45d3dc673a8d20531ae5077a1196a368cac64f6afd15c4eb8f3910ee9bf51317\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc2ddda5781094e59bb54ddf724fa4f948a047b17b01f0185ef651d90a66f36f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-26T11:17:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T11:17:30Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:19:01Z is after 2025-08-24T17:21:41Z" Jan 26 11:19:01 crc kubenswrapper[4867]: I0126 11:19:01.264163 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c572224f36b5a0fc8a86b3e61aa97b96f5bd189d245b31804a11aaa75d04774\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-26T11:19:01Z is after 2025-08-24T17:21:41Z" Jan 26 11:19:01 crc kubenswrapper[4867]: I0126 11:19:01.268535 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:19:01 crc kubenswrapper[4867]: I0126 11:19:01.268588 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:19:01 crc kubenswrapper[4867]: I0126 11:19:01.268601 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:19:01 crc kubenswrapper[4867]: I0126 11:19:01.268625 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:19:01 crc kubenswrapper[4867]: I0126 11:19:01.268640 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:19:01Z","lastTransitionTime":"2026-01-26T11:19:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:19:01 crc kubenswrapper[4867]: I0126 11:19:01.277305 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://613d61cfb62d18a99623d3e4f89763c04f15484b764f8bb0c7a5e4457a3505d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa8d05ed772e4273e8962df2230bc8d45db5cfed967165716438d58c54b9e24b\\\",\\\
"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:19:01Z is after 2025-08-24T17:21:41Z" Jan 26 11:19:01 crc kubenswrapper[4867]: I0126 11:19:01.291570 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:48Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:19:01Z is after 2025-08-24T17:21:41Z" Jan 26 11:19:01 crc kubenswrapper[4867]: I0126 11:19:01.308628 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:54Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:54Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8b0139861b48347f9916ca780499486eda856ef7c08cfe6c57201daf085bf61\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-26T11:19:01Z is after 2025-08-24T17:21:41Z" Jan 26 11:19:01 crc kubenswrapper[4867]: I0126 11:19:01.324510 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-9fjlf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d0cb57c7-fd32-41c2-b873-a3f017b9f1b1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:18:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:18:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:18:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c7012fb0651d46334c26887a02a5c44a8fc67c2ad3539e5321e16b57071b9a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:18:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-a
ccess-rhrdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://49541a51fced16e579b4cd10875fe7e660d7674508aba3ea44011642241e6556\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://49541a51fced16e579b4cd10875fe7e660d7674508aba3ea44011642241e6556\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T11:17:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T11:17:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhrdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://de0898b43ff7857dd40569f0552c8802c9bb5ecdf3cbd23f552dfb402a02044c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"
started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://de0898b43ff7857dd40569f0552c8802c9bb5ecdf3cbd23f552dfb402a02044c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T11:17:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T11:17:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhrdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f299de50c4b1a26c4ed82f6f7b17c063e00b8a9de478fa0adafb4abb50195c5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f299de50c4b1a26c4ed82f6f7b17c063e00b8a9de478fa0adafb4abb50195c5f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T11:17:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T11:17:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\
\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhrdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5bf54cdee66861e0419af5e4cabd6e8410b771df717f2e958f5eda8d14c47862\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5bf54cdee66861e0419af5e4cabd6e8410b771df717f2e958f5eda8d14c47862\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T11:17:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T11:17:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhrdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d802b7e0dce47eace09ec80bf6299fbe57b588891ebab7f34de835f6a7314ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabout
s-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3d802b7e0dce47eace09ec80bf6299fbe57b588891ebab7f34de835f6a7314ac\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T11:17:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T11:17:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhrdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e77fa1081a2d8d26fe07f92196b85dd5b8eb28583ce5489e2f5cbcfc6fff8780\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e77fa1081a2d8d26fe07f92196b85dd5b8eb28583ce5489e2f5cbcfc6fff8780\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T11:18:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T11:18:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhrdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T11:17:52Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-9fjlf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:19:01Z is after 2025-08-24T17:21:41Z" Jan 26 11:19:01 crc kubenswrapper[4867]: I0126 11:19:01.336127 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-nxkwj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"53ae46dc-30ef-4dfd-b80e-bacd7542634f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a6b647bab472836bbf6aebd01d20d186c5a3fb95f20cc9f44ec837d93c7df617\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"image
ID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b5h2x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T11:17:54Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-nxkwj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:19:01Z is after 2025-08-24T17:21:41Z" Jan 26 11:19:01 crc kubenswrapper[4867]: I0126 11:19:01.364364 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-nmdmx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed024510-edc6-4306-b54b-63facba64419\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:18:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:18:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:18:06Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:18:06Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lvcgj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lvcgj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T11:18:06Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-nmdmx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:19:01Z is after 2025-08-24T17:21:41Z" Jan 26 11:19:01 crc 
kubenswrapper[4867]: I0126 11:19:01.373259 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:19:01 crc kubenswrapper[4867]: I0126 11:19:01.373303 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:19:01 crc kubenswrapper[4867]: I0126 11:19:01.373318 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:19:01 crc kubenswrapper[4867]: I0126 11:19:01.373402 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:19:01 crc kubenswrapper[4867]: I0126 11:19:01.373419 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:19:01Z","lastTransitionTime":"2026-01-26T11:19:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:19:01 crc kubenswrapper[4867]: I0126 11:19:01.411810 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4fbb2033-c5c9-48d1-971f-252b8982e64c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b260f2517962ffaecebf86c2b72c91486b825ca898f907378e8f8cea16d1db92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\
\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6843bd212ce4f17d2ae4d53441d56f7be28f8f49c8994458d611159101e193a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2aba77c7c6a2f5450fa60356c3eccf816726f0074d65a1d5805c4278d31157c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e941db67a8021f5eb4eb6e395c2369d052a4cde25d67eb1885768a064185d8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-
v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc9e90f1b0e6c09e4595a7e450325550012dc52c6a6fc4a95d14b37c0f939ce7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e22739a3c06bddfda493f25aca23cf47d71c8bde27a6b4bb505592b42350752\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":
true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0e22739a3c06bddfda493f25aca23cf47d71c8bde27a6b4bb505592b42350752\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T11:17:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T11:17:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://80b44caec3547caebca126742ca337ad1851edc3e9474127528a9e5184b5ec23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://80b44caec3547caebca126742ca337ad1851edc3e9474127528a9e5184b5ec23\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T11:17:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T11:17:32Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://83f7cf77acd26e7af095c4a5432efde85eef6dd5e434141cb5d6abd12770968e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://83f7cf77acd26e7af095c4a5432efde85eef6dd5e434141cb5d6abd12770968e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T11:17:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2
026-01-26T11:17:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T11:17:30Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:19:01Z is after 2025-08-24T17:21:41Z" Jan 26 11:19:01 crc kubenswrapper[4867]: I0126 11:19:01.426512 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-g6cth" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"115cad9f-057f-4e63-b408-8fa7a358a191\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ff5997c5ecb7b6a4257e45de25c7b8f3cbe39cc5efe49cf2e2baff5447a947b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4wqzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a97a794991491a7b7ac8c0d20d0fa2f4f36c8ba8
82962b5c203933431d324105\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4wqzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T11:17:52Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-g6cth\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:19:01Z is after 2025-08-24T17:21:41Z" Jan 26 11:19:01 crc kubenswrapper[4867]: I0126 11:19:01.442818 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:48Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:19:01Z is after 2025-08-24T17:21:41Z" Jan 26 11:19:01 crc kubenswrapper[4867]: I0126 11:19:01.456908 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:48Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:19:01Z is after 2025-08-24T17:21:41Z" Jan 26 11:19:01 crc kubenswrapper[4867]: I0126 11:19:01.469280 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-wmdmh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6ad862fa-4af9-49f7-a629-ebf54a83ca45\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://726513fd6aefd76b58a4c8cf08fa4588aadcdb02e8dbe3bb4f23004f11eb29dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6hvh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T11:17:51Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-wmdmh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:19:01Z is after 2025-08-24T17:21:41Z" Jan 26 11:19:01 crc kubenswrapper[4867]: I0126 11:19:01.476302 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:19:01 crc kubenswrapper[4867]: I0126 11:19:01.476343 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:19:01 crc kubenswrapper[4867]: I0126 11:19:01.476352 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:19:01 crc kubenswrapper[4867]: I0126 11:19:01.476370 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:19:01 crc kubenswrapper[4867]: I0126 11:19:01.476381 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:19:01Z","lastTransitionTime":"2026-01-26T11:19:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:19:01 crc kubenswrapper[4867]: I0126 11:19:01.492269 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-p8ngn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4a3be637-cf04-4c55-bf72-67fdad83cc44\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:52Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:52Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c0a7bea27cb1d32235548b7f7e457dbecaf0ac88bdd9866302c758680990d4af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cn6f7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9025121ad416b7b6ff2f2eb7ecb3961eccbbae7b5ce4b394161ad99c5901199\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cn6f7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://adb68b135bd42f78556598af17aa21dc7daff1246e821ca51cb683e46974a85a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cn6f7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://30fe31f1d314dace6e6ee0bc31b2127ce6e6db198975c78505b0f6f07aa6f30c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:55Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cn6f7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://99df8b0b5fa7178489716f40f806caff969c5045ab573ae343b222ed80bf0799\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cn6f7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://192f11212d63cdabfda1649589946ca65e5c4a38805c85b94e4277f2e8c7bf57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cn6f7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d232dca466c33576da8850254342ae7f79c841e0f08fb49fedf655f3efe3d3d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://22b7f3763005c5c5c24358d0a42b53a287c54548e8ba4785affb4c90b9c000a2\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T11:18:31Z\\\",\\\"message\\\":\\\"ersions/factory.go:117\\\\nI0126 11:18:31.446431 6551 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0126 11:18:31.446674 6551 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0126 
11:18:31.446691 6551 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0126 11:18:31.446703 6551 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0126 11:18:31.446732 6551 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0126 11:18:31.446742 6551 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0126 11:18:31.446764 6551 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0126 11:18:31.446764 6551 handler.go:208] Removed *v1.Node event handler 7\\\\nI0126 11:18:31.446772 6551 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0126 11:18:31.446779 6551 handler.go:208] Removed *v1.Node event handler 2\\\\nI0126 11:18:31.446791 6551 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0126 11:18:31.447312 6551 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0126 11:18:31.447355 6551 factory.go:656] Stopping watch factory\\\\nI0126 11:18:31.447369 6551 ovnkube.go:599] Stopped ovnkube\\\\nI0126 
1\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T11:18:30Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:19:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\"
:\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cn6f7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b58f62893fb0fc0bd355ef42fb5f15830912cfd3a03b5266cb3c3171dbb8bcb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cn6f7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://388ea9d9185ed0fed9cbb0da08ab9aa6fcf1d81192fe2646cd51b54245c4e34b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\
":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://388ea9d9185ed0fed9cbb0da08ab9aa6fcf1d81192fe2646cd51b54245c4e34b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T11:17:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T11:17:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cn6f7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T11:17:52Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-p8ngn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:19:01Z is after 2025-08-24T17:21:41Z" Jan 26 11:19:01 crc kubenswrapper[4867]: I0126 11:19:01.577493 4867 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-16 13:30:53.353409255 +0000 UTC Jan 26 11:19:01 crc kubenswrapper[4867]: I0126 11:19:01.579343 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:19:01 crc kubenswrapper[4867]: I0126 11:19:01.579406 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:19:01 crc kubenswrapper[4867]: I0126 11:19:01.579420 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:19:01 crc kubenswrapper[4867]: I0126 11:19:01.579441 4867 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:19:01 crc kubenswrapper[4867]: I0126 11:19:01.579451 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:19:01Z","lastTransitionTime":"2026-01-26T11:19:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 11:19:01 crc kubenswrapper[4867]: I0126 11:19:01.682403 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:19:01 crc kubenswrapper[4867]: I0126 11:19:01.682462 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:19:01 crc kubenswrapper[4867]: I0126 11:19:01.682474 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:19:01 crc kubenswrapper[4867]: I0126 11:19:01.682499 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:19:01 crc kubenswrapper[4867]: I0126 11:19:01.682517 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:19:01Z","lastTransitionTime":"2026-01-26T11:19:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:19:01 crc kubenswrapper[4867]: I0126 11:19:01.785715 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:19:01 crc kubenswrapper[4867]: I0126 11:19:01.785784 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:19:01 crc kubenswrapper[4867]: I0126 11:19:01.785799 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:19:01 crc kubenswrapper[4867]: I0126 11:19:01.785824 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:19:01 crc kubenswrapper[4867]: I0126 11:19:01.785838 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:19:01Z","lastTransitionTime":"2026-01-26T11:19:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:19:01 crc kubenswrapper[4867]: I0126 11:19:01.889455 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:19:01 crc kubenswrapper[4867]: I0126 11:19:01.889531 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:19:01 crc kubenswrapper[4867]: I0126 11:19:01.889547 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:19:01 crc kubenswrapper[4867]: I0126 11:19:01.889577 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:19:01 crc kubenswrapper[4867]: I0126 11:19:01.889596 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:19:01Z","lastTransitionTime":"2026-01-26T11:19:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:19:01 crc kubenswrapper[4867]: I0126 11:19:01.992872 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:19:01 crc kubenswrapper[4867]: I0126 11:19:01.992963 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:19:01 crc kubenswrapper[4867]: I0126 11:19:01.992978 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:19:01 crc kubenswrapper[4867]: I0126 11:19:01.993001 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:19:01 crc kubenswrapper[4867]: I0126 11:19:01.993016 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:19:01Z","lastTransitionTime":"2026-01-26T11:19:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:19:02 crc kubenswrapper[4867]: I0126 11:19:02.095780 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:19:02 crc kubenswrapper[4867]: I0126 11:19:02.095851 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:19:02 crc kubenswrapper[4867]: I0126 11:19:02.095863 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:19:02 crc kubenswrapper[4867]: I0126 11:19:02.095900 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:19:02 crc kubenswrapper[4867]: I0126 11:19:02.095915 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:19:02Z","lastTransitionTime":"2026-01-26T11:19:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:19:02 crc kubenswrapper[4867]: I0126 11:19:02.159342 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-p8ngn_4a3be637-cf04-4c55-bf72-67fdad83cc44/ovnkube-controller/3.log" Jan 26 11:19:02 crc kubenswrapper[4867]: I0126 11:19:02.159905 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-p8ngn_4a3be637-cf04-4c55-bf72-67fdad83cc44/ovnkube-controller/2.log" Jan 26 11:19:02 crc kubenswrapper[4867]: I0126 11:19:02.163807 4867 generic.go:334] "Generic (PLEG): container finished" podID="4a3be637-cf04-4c55-bf72-67fdad83cc44" containerID="5d232dca466c33576da8850254342ae7f79c841e0f08fb49fedf655f3efe3d3d" exitCode=1 Jan 26 11:19:02 crc kubenswrapper[4867]: I0126 11:19:02.163854 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-p8ngn" event={"ID":"4a3be637-cf04-4c55-bf72-67fdad83cc44","Type":"ContainerDied","Data":"5d232dca466c33576da8850254342ae7f79c841e0f08fb49fedf655f3efe3d3d"} Jan 26 11:19:02 crc kubenswrapper[4867]: I0126 11:19:02.163889 4867 scope.go:117] "RemoveContainer" containerID="22b7f3763005c5c5c24358d0a42b53a287c54548e8ba4785affb4c90b9c000a2" Jan 26 11:19:02 crc kubenswrapper[4867]: I0126 11:19:02.165064 4867 scope.go:117] "RemoveContainer" containerID="5d232dca466c33576da8850254342ae7f79c841e0f08fb49fedf655f3efe3d3d" Jan 26 11:19:02 crc kubenswrapper[4867]: E0126 11:19:02.165396 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-p8ngn_openshift-ovn-kubernetes(4a3be637-cf04-4c55-bf72-67fdad83cc44)\"" pod="openshift-ovn-kubernetes/ovnkube-node-p8ngn" podUID="4a3be637-cf04-4c55-bf72-67fdad83cc44" Jan 26 11:19:02 crc kubenswrapper[4867]: I0126 11:19:02.185739 4867 status_manager.go:875] "Failed to update 
status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:48Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:19:02Z is after 2025-08-24T17:21:41Z" Jan 26 11:19:02 crc kubenswrapper[4867]: I0126 11:19:02.198651 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:19:02 crc kubenswrapper[4867]: I0126 11:19:02.198699 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:19:02 crc kubenswrapper[4867]: I0126 11:19:02.198711 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:19:02 crc kubenswrapper[4867]: I0126 11:19:02.198729 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:19:02 crc kubenswrapper[4867]: I0126 11:19:02.198743 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:19:02Z","lastTransitionTime":"2026-01-26T11:19:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 11:19:02 crc kubenswrapper[4867]: I0126 11:19:02.204819 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-wmdmh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6ad862fa-4af9-49f7-a629-ebf54a83ca45\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://726513fd6aefd76b58a4c8cf08fa4588aadcdb02e8dbe3bb4f23004f11eb29dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var
/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6hvh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T11:17:51Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-wmdmh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:19:02Z is after 2025-08-24T17:21:41Z" Jan 26 11:19:02 crc kubenswrapper[4867]: I0126 11:19:02.232247 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-p8ngn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4a3be637-cf04-4c55-bf72-67fdad83cc44\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:52Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:52Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c0a7bea27cb1d32235548b7f7e457dbecaf0ac88bdd9866302c758680990d4af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cn6f7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9025121ad416b7b6ff2f2eb7ecb3961eccbbae7b5ce4b394161ad99c5901199\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cn6f7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://adb68b135bd42f78556598af17aa21dc7daff1246e821ca51cb683e46974a85a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cn6f7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://30fe31f1d314dace6e6ee0bc31b2127ce6e6db198975c78505b0f6f07aa6f30c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cn6f7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://99df8b0b5fa7178489716f40f806caff969c5045ab573ae343b222ed80bf0799\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cn6f7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://192f11212d63cdabfda1649589946ca65e5c4a38805c85b94e4277f2e8c7bf57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cn6f7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d232dca466c33576da8850254342ae7f79c841e0f08fb49fedf655f3efe3d3d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://22b7f3763005c5c5c24358d0a42b53a287c54548e8ba4785affb4c90b9c000a2\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T11:18:31Z\\\",\\\"message\\\":\\\"ersions/factory.go:117\\\\nI0126 11:18:31.446431 6551 reflector.go:311] Stopping 
reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0126 11:18:31.446674 6551 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0126 11:18:31.446691 6551 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0126 11:18:31.446703 6551 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0126 11:18:31.446732 6551 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0126 11:18:31.446742 6551 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0126 11:18:31.446764 6551 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0126 11:18:31.446764 6551 handler.go:208] Removed *v1.Node event handler 7\\\\nI0126 11:18:31.446772 6551 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0126 11:18:31.446779 6551 handler.go:208] Removed *v1.Node event handler 2\\\\nI0126 11:18:31.446791 6551 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0126 11:18:31.447312 6551 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0126 11:18:31.447355 6551 factory.go:656] Stopping watch factory\\\\nI0126 11:18:31.447369 6551 ovnkube.go:599] Stopped ovnkube\\\\nI0126 1\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T11:18:30Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5d232dca466c33576da8850254342ae7f79c841e0f08fb49fedf655f3efe3d3d\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T11:19:01Z\\\",\\\"message\\\":\\\"1.EgressIP event handler 8\\\\nI0126 11:19:01.199913 6990 factory.go:1336] Added *v1.EgressFirewall event handler 9\\\\nI0126 11:19:01.200005 6990 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI0126 11:19:01.200045 6990 handler.go:190] Sending *v1.Node 
event handler 2 for removal\\\\nI0126 11:19:01.200056 6990 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0126 11:19:01.200136 6990 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0126 11:19:01.200153 6990 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0126 11:19:01.200167 6990 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0126 11:19:01.200295 6990 factory.go:656] Stopping watch factory\\\\nI0126 11:19:01.200317 6990 ovnkube.go:599] Stopped ovnkube\\\\nI0126 11:19:01.200361 6990 handler.go:208] Removed *v1.Node event handler 2\\\\nI0126 11:19:01.200375 6990 handler.go:208] Removed *v1.Node event handler 7\\\\nI0126 11:19:01.200388 6990 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0126 11:19:01.200396 6990 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0126 11:19:01.200404 6990 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0126 11:19:01.200417 6990 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0126 11:19:01.200481 6990 
ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T11:19:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\
\\":\\\"kube-api-access-cn6f7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b58f62893fb0fc0bd355ef42fb5f15830912cfd3a03b5266cb3c3171dbb8bcb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cn6f7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://388ea9d9185ed0fed9cbb0da08ab9aa6fcf1d81192fe2646cd51b54245c4e34b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://388ea9d9185ed0fed9cbb0da08ab9aa6fcf1d81192fe2646cd51b542
45c4e34b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T11:17:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T11:17:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cn6f7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T11:17:52Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-p8ngn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:19:02Z is after 2025-08-24T17:21:41Z" Jan 26 11:19:02 crc kubenswrapper[4867]: I0126 11:19:02.249052 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:48Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:19:02Z is after 2025-08-24T17:21:41Z" Jan 26 11:19:02 crc kubenswrapper[4867]: I0126 11:19:02.262691 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4b812c7c-622a-4255-ae3d-e48a62132126\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8ca8a196d11248401898d6c6591931638d1ebf8675414d0e588454dbc1da626\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c958441373fca0b105ec5f119f7a2aca7557f58d49ccba356f824708b3602e3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c958441373fca0b105ec5f119f7a2aca7557f58d49ccba356f824708b3602e3c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T11:17:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T11:17:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T11:17:30Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:19:02Z is after 2025-08-24T17:21:41Z" Jan 26 11:19:02 crc kubenswrapper[4867]: I0126 11:19:02.278981 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e36e94ce-bdbb-4b65-b38a-d591d99ec132\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:18:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:18:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://60d611d38d0a8a4fafcb8c6a4598878531015a3c9a31bee179693c997be0f5ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e7960050fb97fd034b58d207bdcc7aa794be098102ebc9b50f3b011c2079a292\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8e9df6d5bbbaa36f6ffe9e36239da17b2a7d5d85edebc9f108ede9bf7b7c0a2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://08f8529b91df2d7dcbf1fac6018657124b3922dfd4ad68bd7cf315594d6e7f81\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4d1617eeb2f19df288b984f761116ae902263d7ccf8406998d0e323fb7d52390\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-26T11:17:50Z\\\"
,\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0126 11:17:44.107498 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0126 11:17:44.109064 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1825304579/tls.crt::/tmp/serving-cert-1825304579/tls.key\\\\\\\"\\\\nI0126 11:17:50.303460 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0126 11:17:50.308764 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0126 11:17:50.308805 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0126 11:17:50.308906 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0126 11:17:50.308924 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0126 11:17:50.322907 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0126 11:17:50.322946 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 11:17:50.322953 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 11:17:50.322957 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0126 11:17:50.322960 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0126 11:17:50.322963 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0126 11:17:50.322966 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0126 11:17:50.323261 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all 
endpoints registered and discovery information is complete\\\\nF0126 11:17:50.330430 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T11:17:33Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d41fab03911c1d3b84f4c34da0d1c8e82c8f2691450b5f9663e31ce329bf04b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:33Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b69d7e45a6059b1e01f80cc4efb536069c91cae05c8912d18fd48f25c6ebf51e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b69d7e45a6059b1e01f80cc4efb536069
c91cae05c8912d18fd48f25c6ebf51e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T11:17:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T11:17:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T11:17:30Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:19:02Z is after 2025-08-24T17:21:41Z" Jan 26 11:19:02 crc kubenswrapper[4867]: I0126 11:19:02.294979 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0d7b1047-65d1-4695-b5a5-95ea6a8c3b54\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://237a29b411629c3650250f62fdec7d2c5412763d6cabb2ee0fe8e5e19e320e15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ef0db1149c70e41b7f44d00d43f883d14fcf635d6f34462dd37c8a550a15a4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://45d3dc673a8d20531ae5077a1196a368cac64f6afd15c4eb8f3910ee9bf51317\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc2ddda5781094e59bb54ddf724fa4f948a047b17b01f0185ef651d90a66f36f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-26T11:17:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T11:17:30Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:19:02Z is after 2025-08-24T17:21:41Z" Jan 26 11:19:02 crc kubenswrapper[4867]: I0126 11:19:02.301287 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:19:02 crc kubenswrapper[4867]: I0126 11:19:02.301360 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:19:02 crc kubenswrapper[4867]: I0126 11:19:02.301371 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:19:02 crc kubenswrapper[4867]: I0126 11:19:02.301411 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:19:02 crc kubenswrapper[4867]: I0126 11:19:02.301433 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:19:02Z","lastTransitionTime":"2026-01-26T11:19:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns 
error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 11:19:02 crc kubenswrapper[4867]: I0126 11:19:02.314321 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c572224f36b5a0fc8a86b3e61aa97b96f5bd189d245b31804a11aaa75d04774\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursive
ReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:19:02Z is after 2025-08-24T17:21:41Z" Jan 26 11:19:02 crc kubenswrapper[4867]: I0126 11:19:02.330511 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://613d61cfb62d18a99623d3e4f89763c04f15484b764f8bb0c7a5e4457a3505d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\
\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa8d05ed772e4273e8962df2230bc8d45db5cfed967165716438d58c54b9e24b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:19:02Z is after 2025-08-24T17:21:41Z" Jan 26 11:19:02 crc kubenswrapper[4867]: I0126 11:19:02.346855 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-hn8xr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dc37e5d1-ba44-4a54-ac36-ab7cdef17212\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:18:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:18:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0560f844140132daa11aa58fcba689fc945d9e6bc67a4fc5598dfaf566749866\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://519d5416896aca5923540078a8bd13f39a190ad4acda3d0bfb3375a5dbfe6b80\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T11:18:39Z\\\",\\\"message\\\":\\\"2026-01-26T11:17:54+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_26c1a4ca-9538-4b99-995f-bb0e07bf0d9b\\\\n2026-01-26T11:17:54+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_26c1a4ca-9538-4b99-995f-bb0e07bf0d9b to /host/opt/cni/bin/\\\\n2026-01-26T11:17:54Z [verbose] multus-daemon started\\\\n2026-01-26T11:17:54Z [verbose] 
Readiness Indicator file check\\\\n2026-01-26T11:18:39Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T11:17:53Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:18:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xrjnj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T11:17:52Z\\\"}}\" for pod \"openshift-multus\"/\"multus-hn8xr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:19:02Z is after 2025-08-24T17:21:41Z" Jan 26 11:19:02 crc kubenswrapper[4867]: I0126 11:19:02.364545 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-nbvlt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf285485-1027-4bdc-bdfa-934ef32e7f5e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:18:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:18:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:18:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:18:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://764c348147bb67a611bc5252c49dfe8f586e6a1a6d6a9e9c6674aabcc3028804\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:18:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kzhnr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e1bb2e7344b3822e63a68d366f4821de6e13
1a4cca163ad67cf44e2b83f9ccd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:18:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kzhnr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T11:18:05Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-nbvlt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:19:02Z is after 2025-08-24T17:21:41Z" Jan 26 11:19:02 crc kubenswrapper[4867]: I0126 11:19:02.383724 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a4b0f2b9-fac8-442e-89a1-43ebff8d4268\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:18:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:18:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2140f6b328ef3b937ef0009c1ce35265a18b51c6efb7e8785870affecd68dace\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://21a6c8852b9648bd5bee43aee9c3fae16363aeaf1ad05dcaa41a04775784b108\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ef4b66f160065f550844a298bf29dcd3f12879c1312554968eba3c1b2268303\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0afa38d4a8ffa664649d154240a8d74ee09bc074127e5edea85ec1de553723fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://0afa38d4a8ffa664649d154240a8d74ee09bc074127e5edea85ec1de553723fe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T11:17:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T11:17:31Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T11:17:30Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:19:02Z is after 2025-08-24T17:21:41Z" Jan 26 11:19:02 crc kubenswrapper[4867]: I0126 11:19:02.401386 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:54Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:54Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8b0139861b48347f9916ca780499486eda856ef7c08cfe6c57201daf085bf61\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-26T11:19:02Z is after 2025-08-24T17:21:41Z" Jan 26 11:19:02 crc kubenswrapper[4867]: I0126 11:19:02.403986 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:19:02 crc kubenswrapper[4867]: I0126 11:19:02.404052 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:19:02 crc kubenswrapper[4867]: I0126 11:19:02.404071 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:19:02 crc kubenswrapper[4867]: I0126 11:19:02.404095 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:19:02 crc kubenswrapper[4867]: I0126 11:19:02.404114 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:19:02Z","lastTransitionTime":"2026-01-26T11:19:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:19:02 crc kubenswrapper[4867]: I0126 11:19:02.420527 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-9fjlf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d0cb57c7-fd32-41c2-b873-a3f017b9f1b1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:18:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:18:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:18:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c7012fb0651d46334c26887a02a5c44a8fc67c2ad3539e5321e16b57071b9a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:18:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhrdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://49541a51fced16e579b4cd10875fe7e660d7674508aba3ea44011642241e6556\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://49541a51fced16e579b4cd10875fe7e660d7674508aba3ea44011642241e6556\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T11:17:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T11:17:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhrdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://de0898b43ff7857dd40569f0552c8802c9bb5ecdf3cbd23f552dfb402a02044c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"
containerID\\\":\\\"cri-o://de0898b43ff7857dd40569f0552c8802c9bb5ecdf3cbd23f552dfb402a02044c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T11:17:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T11:17:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhrdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f299de50c4b1a26c4ed82f6f7b17c063e00b8a9de478fa0adafb4abb50195c5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f299de50c4b1a26c4ed82f6f7b17c063e00b8a9de478fa0adafb4abb50195c5f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T11:17:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T11:17:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveRead
Only\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhrdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5bf54cdee66861e0419af5e4cabd6e8410b771df717f2e958f5eda8d14c47862\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5bf54cdee66861e0419af5e4cabd6e8410b771df717f2e958f5eda8d14c47862\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T11:17:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T11:17:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhrdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d802b7e0dce47eace09ec80bf6299fbe57b588891ebab7f34de835f6a7314ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\"
:0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3d802b7e0dce47eace09ec80bf6299fbe57b588891ebab7f34de835f6a7314ac\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T11:17:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T11:17:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhrdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e77fa1081a2d8d26fe07f92196b85dd5b8eb28583ce5489e2f5cbcfc6fff8780\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e77fa1081a2d8d26fe07f92196b85dd5b8eb28583ce5489e2f5cbcfc6fff8780\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T11:18:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T11:18:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhrdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\"
,\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T11:17:52Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-9fjlf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:19:02Z is after 2025-08-24T17:21:41Z" Jan 26 11:19:02 crc kubenswrapper[4867]: I0126 11:19:02.434414 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-nxkwj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"53ae46dc-30ef-4dfd-b80e-bacd7542634f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a6b647bab472836bbf6aebd01d20d186c5a3fb95f20cc9f44ec837d93c7df617\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev
@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b5h2x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T11:17:54Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-nxkwj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:19:02Z is after 2025-08-24T17:21:41Z" Jan 26 11:19:02 crc kubenswrapper[4867]: I0126 11:19:02.449780 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-nmdmx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed024510-edc6-4306-b54b-63facba64419\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:18:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:18:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:18:06Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:18:06Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lvcgj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lvcgj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T11:18:06Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-nmdmx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:19:02Z is after 2025-08-24T17:21:41Z" Jan 26 11:19:02 crc 
kubenswrapper[4867]: I0126 11:19:02.471398 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:48Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:19:02Z is after 2025-08-24T17:21:41Z" Jan 26 11:19:02 crc kubenswrapper[4867]: I0126 11:19:02.490307 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-g6cth" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"115cad9f-057f-4e63-b408-8fa7a358a191\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ff5997c5ecb7b6a4257e45de25c7b8f3cbe39cc5efe49cf2e2baff5447a947b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4wqzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a97a794991491a7b7ac8c0d20d0fa2f4f36c8ba8
82962b5c203933431d324105\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4wqzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T11:17:52Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-g6cth\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:19:02Z is after 2025-08-24T17:21:41Z" Jan 26 11:19:02 crc kubenswrapper[4867]: I0126 11:19:02.507410 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:19:02 crc kubenswrapper[4867]: I0126 11:19:02.507465 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:19:02 crc kubenswrapper[4867]: I0126 11:19:02.507483 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:19:02 crc 
kubenswrapper[4867]: I0126 11:19:02.507507 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:19:02 crc kubenswrapper[4867]: I0126 11:19:02.507526 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:19:02Z","lastTransitionTime":"2026-01-26T11:19:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 11:19:02 crc kubenswrapper[4867]: I0126 11:19:02.526083 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4fbb2033-c5c9-48d1-971f-252b8982e64c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b260f2517962ffaecebf86c2b72c91486b825ca898f907378e8f8cea16d1db92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6843bd212ce4f17d2ae4d53441d56f7be28f8f49c8994458d611159101e193a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2aba77c7c6a2f5450fa60356c3eccf816726f0074d65a1d5805c4278d31157c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true
,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e941db67a8021f5eb4eb6e395c2369d052a4cde25d67eb1885768a064185d8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc9e90f1b0e6c09e4595a7e450325550012dc52c6a6fc4a95d14b37c0f939ce7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/
\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e22739a3c06bddfda493f25aca23cf47d71c8bde27a6b4bb505592b42350752\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0e22739a3c06bddfda493f25aca23cf47d71c8bde27a6b4bb505592b42350752\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T11:17:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T11:17:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://80b44caec3547caebca126742ca337ad1851edc3e9474127528a9e5184b5ec23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://80b44caec3547caebca126742ca337ad1851edc3e9474127528a9e5184b5ec23\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T11:17:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T11:17:32Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://83f7cf77acd26e7af095c4a5432efde85eef6dd5e434141cb5d6abd12770968e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://83f7cf77acd26e7af095c4a5432efde85eef6dd5e434141cb5d6abd12770968e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T11:17:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T11:17:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T11:17:30Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:19:02Z is after 2025-08-24T17:21:41Z" Jan 26 11:19:02 crc kubenswrapper[4867]: I0126 11:19:02.563557 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 11:19:02 crc kubenswrapper[4867]: I0126 11:19:02.563633 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 11:19:02 crc kubenswrapper[4867]: I0126 11:19:02.563728 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 11:19:02 crc kubenswrapper[4867]: E0126 11:19:02.563803 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 11:19:02 crc kubenswrapper[4867]: I0126 11:19:02.563588 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-nmdmx" Jan 26 11:19:02 crc kubenswrapper[4867]: E0126 11:19:02.564162 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 11:19:02 crc kubenswrapper[4867]: E0126 11:19:02.564411 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-nmdmx" podUID="ed024510-edc6-4306-b54b-63facba64419" Jan 26 11:19:02 crc kubenswrapper[4867]: E0126 11:19:02.564603 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 11:19:02 crc kubenswrapper[4867]: I0126 11:19:02.578433 4867 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-08 08:39:54.035408468 +0000 UTC Jan 26 11:19:02 crc kubenswrapper[4867]: I0126 11:19:02.610378 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:19:02 crc kubenswrapper[4867]: I0126 11:19:02.610424 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:19:02 crc kubenswrapper[4867]: I0126 11:19:02.610441 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:19:02 crc kubenswrapper[4867]: I0126 11:19:02.610463 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:19:02 crc kubenswrapper[4867]: I0126 11:19:02.610481 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:19:02Z","lastTransitionTime":"2026-01-26T11:19:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:19:02 crc kubenswrapper[4867]: I0126 11:19:02.716542 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:19:02 crc kubenswrapper[4867]: I0126 11:19:02.716625 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:19:02 crc kubenswrapper[4867]: I0126 11:19:02.716644 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:19:02 crc kubenswrapper[4867]: I0126 11:19:02.716675 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:19:02 crc kubenswrapper[4867]: I0126 11:19:02.716700 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:19:02Z","lastTransitionTime":"2026-01-26T11:19:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:19:02 crc kubenswrapper[4867]: I0126 11:19:02.820551 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:19:02 crc kubenswrapper[4867]: I0126 11:19:02.820621 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:19:02 crc kubenswrapper[4867]: I0126 11:19:02.820639 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:19:02 crc kubenswrapper[4867]: I0126 11:19:02.820687 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:19:02 crc kubenswrapper[4867]: I0126 11:19:02.820706 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:19:02Z","lastTransitionTime":"2026-01-26T11:19:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:19:02 crc kubenswrapper[4867]: I0126 11:19:02.923968 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:19:02 crc kubenswrapper[4867]: I0126 11:19:02.924034 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:19:02 crc kubenswrapper[4867]: I0126 11:19:02.924052 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:19:02 crc kubenswrapper[4867]: I0126 11:19:02.924078 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:19:02 crc kubenswrapper[4867]: I0126 11:19:02.924095 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:19:02Z","lastTransitionTime":"2026-01-26T11:19:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:19:03 crc kubenswrapper[4867]: I0126 11:19:03.027504 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:19:03 crc kubenswrapper[4867]: I0126 11:19:03.027569 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:19:03 crc kubenswrapper[4867]: I0126 11:19:03.027588 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:19:03 crc kubenswrapper[4867]: I0126 11:19:03.027614 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:19:03 crc kubenswrapper[4867]: I0126 11:19:03.027632 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:19:03Z","lastTransitionTime":"2026-01-26T11:19:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:19:03 crc kubenswrapper[4867]: I0126 11:19:03.131113 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:19:03 crc kubenswrapper[4867]: I0126 11:19:03.131188 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:19:03 crc kubenswrapper[4867]: I0126 11:19:03.131210 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:19:03 crc kubenswrapper[4867]: I0126 11:19:03.131280 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:19:03 crc kubenswrapper[4867]: I0126 11:19:03.131306 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:19:03Z","lastTransitionTime":"2026-01-26T11:19:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:19:03 crc kubenswrapper[4867]: I0126 11:19:03.168401 4867 scope.go:117] "RemoveContainer" containerID="5d232dca466c33576da8850254342ae7f79c841e0f08fb49fedf655f3efe3d3d" Jan 26 11:19:03 crc kubenswrapper[4867]: E0126 11:19:03.168709 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-p8ngn_openshift-ovn-kubernetes(4a3be637-cf04-4c55-bf72-67fdad83cc44)\"" pod="openshift-ovn-kubernetes/ovnkube-node-p8ngn" podUID="4a3be637-cf04-4c55-bf72-67fdad83cc44" Jan 26 11:19:03 crc kubenswrapper[4867]: I0126 11:19:03.187604 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-g6cth" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"115cad9f-057f-4e63-b408-8fa7a358a191\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ff5997c5ecb7b6a4257e45de25c7b8f3cbe39cc5efe49cf2e2baff5447a947b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-de
v/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4wqzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a97a794991491a7b7ac8c0d20d0fa2f4f36c8ba882962b5c203933431d324105\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4wqzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T11:17:52Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-g6cth\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:19:03Z is after 2025-08-24T17:21:41Z" Jan 26 11:19:03 crc kubenswrapper[4867]: I0126 11:19:03.211635 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4fbb2033-c5c9-48d1-971f-252b8982e64c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b260f2517962ffaecebf86c2b72c91486b825ca898f907378e8f8cea16d1db92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\
":\\\"2026-01-26T11:17:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6843bd212ce4f17d2ae4d53441d56f7be28f8f49c8994458d611159101e193a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2aba77c7c6a2f5450fa60356c3eccf816726f0074d65a1d5805c4278d31157c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/s
tatic-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e941db67a8021f5eb4eb6e395c2369d052a4cde25d67eb1885768a064185d8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc9e90f1b0e6c09e4595a7e450325550012dc52c6a6fc4a95d14b37c0f939ce7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e22739a3c06bddfda493f25aca23cf47d71c8bde27a6b4bb505592b42350752\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0e22739a3c06bddfda493f25aca23cf47d71c8bde27a6b4bb505592b42350752\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T11:17:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T11:17:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://80b44caec3547caebca126742ca337ad1851edc3e9474127528a9e5184b5ec23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://80b44caec3547caebca126742ca337ad1851edc3e9474127528a9e5184b5ec23\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T11:17:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T11:17:32Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://83f7cf77acd26e7af095c4a5432efde85eef6dd5e434141cb5d6abd12770968e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\
\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://83f7cf77acd26e7af095c4a5432efde85eef6dd5e434141cb5d6abd12770968e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T11:17:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T11:17:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T11:17:30Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:19:03Z is after 2025-08-24T17:21:41Z" Jan 26 11:19:03 crc kubenswrapper[4867]: I0126 11:19:03.228973 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:48Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:19:03Z is after 2025-08-24T17:21:41Z" Jan 26 11:19:03 crc kubenswrapper[4867]: I0126 11:19:03.235114 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:19:03 crc kubenswrapper[4867]: I0126 11:19:03.235179 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:19:03 crc 
kubenswrapper[4867]: I0126 11:19:03.235201 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:19:03 crc kubenswrapper[4867]: I0126 11:19:03.235282 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:19:03 crc kubenswrapper[4867]: I0126 11:19:03.235310 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:19:03Z","lastTransitionTime":"2026-01-26T11:19:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 11:19:03 crc kubenswrapper[4867]: I0126 11:19:03.269406 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-wmdmh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6ad862fa-4af9-49f7-a629-ebf54a83ca45\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://726513fd6aefd76b58a4c8cf08fa4588aadcdb02e8dbe3bb4f23004f11eb29dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6hvh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T11:17:51Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-wmdmh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:19:03Z is after 2025-08-24T17:21:41Z" Jan 26 11:19:03 crc kubenswrapper[4867]: I0126 11:19:03.304382 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-p8ngn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4a3be637-cf04-4c55-bf72-67fdad83cc44\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:52Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:52Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c0a7bea27cb1d32235548b7f7e457dbecaf0ac88bdd9866302c758680990d4af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cn6f7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9025121ad416b7b6ff2f2eb7ecb3961eccbbae7b5ce4b394161ad99c5901199\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cn6f7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://adb68b135bd42f78556598af17aa21dc7daff1246e821ca51cb683e46974a85a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cn6f7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://30fe31f1d314dace6e6ee0bc31b2127ce6e6db198975c78505b0f6f07aa6f30c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:55Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cn6f7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://99df8b0b5fa7178489716f40f806caff969c5045ab573ae343b222ed80bf0799\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cn6f7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://192f11212d63cdabfda1649589946ca65e5c4a38805c85b94e4277f2e8c7bf57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cn6f7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d232dca466c33576da8850254342ae7f79c841e0f08fb49fedf655f3efe3d3d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5d232dca466c33576da8850254342ae7f79c841e0f08fb49fedf655f3efe3d3d\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T11:19:01Z\\\",\\\"message\\\":\\\"1.EgressIP event handler 8\\\\nI0126 11:19:01.199913 6990 factory.go:1336] Added *v1.EgressFirewall event handler 9\\\\nI0126 11:19:01.200005 6990 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI0126 11:19:01.200045 6990 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0126 11:19:01.200056 6990 
handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0126 11:19:01.200136 6990 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0126 11:19:01.200153 6990 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0126 11:19:01.200167 6990 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0126 11:19:01.200295 6990 factory.go:656] Stopping watch factory\\\\nI0126 11:19:01.200317 6990 ovnkube.go:599] Stopped ovnkube\\\\nI0126 11:19:01.200361 6990 handler.go:208] Removed *v1.Node event handler 2\\\\nI0126 11:19:01.200375 6990 handler.go:208] Removed *v1.Node event handler 7\\\\nI0126 11:19:01.200388 6990 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0126 11:19:01.200396 6990 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0126 11:19:01.200404 6990 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0126 11:19:01.200417 6990 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0126 11:19:01.200481 6990 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T11:19:00Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-p8ngn_openshift-ovn-kubernetes(4a3be637-cf04-4c55-bf72-67fdad83cc44)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cn6f7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b58f62893fb0fc0bd355ef42fb5f15830912cfd3a03b5266cb3c3171dbb8bcb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cn6f7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://388ea9d9185ed0fed9cbb0da08ab9aa6fcf1d81192fe2646cd51b54245c4e34b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://388ea9d9185ed0fed9
cbb0da08ab9aa6fcf1d81192fe2646cd51b54245c4e34b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T11:17:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T11:17:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cn6f7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T11:17:52Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-p8ngn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:19:03Z is after 2025-08-24T17:21:41Z" Jan 26 11:19:03 crc kubenswrapper[4867]: I0126 11:19:03.329982 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:48Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:19:03Z is after 2025-08-24T17:21:41Z" Jan 26 11:19:03 crc kubenswrapper[4867]: I0126 11:19:03.339672 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:19:03 crc kubenswrapper[4867]: I0126 11:19:03.339722 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:19:03 crc kubenswrapper[4867]: I0126 11:19:03.339733 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:19:03 crc 
kubenswrapper[4867]: I0126 11:19:03.339753 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:19:03 crc kubenswrapper[4867]: I0126 11:19:03.339768 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:19:03Z","lastTransitionTime":"2026-01-26T11:19:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 11:19:03 crc kubenswrapper[4867]: I0126 11:19:03.351989 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4b812c7c-622a-4255-ae3d-e48a62132126\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8ca8a196d11248401898d6c6591931638d1ebf8675414d0e588454dbc1da626\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"im
ageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c958441373fca0b105ec5f119f7a2aca7557f58d49ccba356f824708b3602e3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c958441373fca0b105ec5f119f7a2aca7557f58d49ccba356f824708b3602e3c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T11:17:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T11:17:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T11:17:30Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-26T11:19:03Z is after 2025-08-24T17:21:41Z" Jan 26 11:19:03 crc kubenswrapper[4867]: I0126 11:19:03.376504 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e36e94ce-bdbb-4b65-b38a-d591d99ec132\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:18:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:18:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://60d611d38d0a8a4fafcb8c6a4598878531015a3c9a31bee179693c997be0f5ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\"
:\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e7960050fb97fd034b58d207bdcc7aa794be098102ebc9b50f3b011c2079a292\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8e9df6d5bbbaa36f6ffe9e36239da17b2a7d5d85edebc9f108ede9bf7b7c0a2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://08f8529b91df2d7dcbf1fac6018657124b3922dfd4ad68bd7cf315594d6e7f81\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiser
ver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4d1617eeb2f19df288b984f761116ae902263d7ccf8406998d0e323fb7d52390\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-26T11:17:50Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0126 11:17:44.107498 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0126 11:17:44.109064 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1825304579/tls.crt::/tmp/serving-cert-1825304579/tls.key\\\\\\\"\\\\nI0126 11:17:50.303460 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0126 11:17:50.308764 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0126 11:17:50.308805 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0126 11:17:50.308906 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0126 11:17:50.308924 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0126 11:17:50.322907 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0126 11:17:50.322946 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 11:17:50.322953 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 11:17:50.322957 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0126 11:17:50.322960 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0126 
11:17:50.322963 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0126 11:17:50.322966 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0126 11:17:50.323261 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0126 11:17:50.330430 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T11:17:33Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d41fab03911c1d3b84f4c34da0d1c8e82c8f2691450b5f9663e31ce329bf04b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:33Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b69d7e45a6059b1e01f80cc4efb536069c91cae05c8912d18fd48f25c6ebf51e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/oc
p-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b69d7e45a6059b1e01f80cc4efb536069c91cae05c8912d18fd48f25c6ebf51e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T11:17:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T11:17:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T11:17:30Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:19:03Z is after 2025-08-24T17:21:41Z" Jan 26 11:19:03 crc kubenswrapper[4867]: I0126 11:19:03.397297 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0d7b1047-65d1-4695-b5a5-95ea6a8c3b54\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://237a29b411629c3650250f62fdec7d2c5412763d6cabb2ee0fe8e5e19e320e15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ef0db1149c70e41b7f44d00d43f883d14fcf635d6f34462dd37c8a550a15a4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://45d3dc673a8d20531ae5077a1196a368cac64f6afd15c4eb8f3910ee9bf51317\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc2ddda5781094e59bb54ddf724fa4f948a047b17b01f0185ef651d90a66f36f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-26T11:17:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T11:17:30Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:19:03Z is after 2025-08-24T17:21:41Z" Jan 26 11:19:03 crc kubenswrapper[4867]: I0126 11:19:03.417842 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c572224f36b5a0fc8a86b3e61aa97b96f5bd189d245b31804a11aaa75d04774\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-26T11:19:03Z is after 2025-08-24T17:21:41Z" Jan 26 11:19:03 crc kubenswrapper[4867]: I0126 11:19:03.438722 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://613d61cfb62d18a99623d3e4f89763c04f15484b764f8bb0c7a5e4457a3505d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://fa8d05ed772e4273e8962df2230bc8d45db5cfed967165716438d58c54b9e24b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:19:03Z is after 2025-08-24T17:21:41Z" Jan 26 11:19:03 crc kubenswrapper[4867]: I0126 11:19:03.443611 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:19:03 crc kubenswrapper[4867]: I0126 11:19:03.443666 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:19:03 crc kubenswrapper[4867]: I0126 11:19:03.443684 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:19:03 crc kubenswrapper[4867]: I0126 11:19:03.443710 4867 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:19:03 crc kubenswrapper[4867]: I0126 11:19:03.443733 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:19:03Z","lastTransitionTime":"2026-01-26T11:19:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 11:19:03 crc kubenswrapper[4867]: I0126 11:19:03.458401 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-hn8xr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dc37e5d1-ba44-4a54-ac36-ab7cdef17212\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:18:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:18:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0560f844140132daa11aa58fcba689fc945d9e6bc67a4fc5598dfaf566749866\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"
imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://519d5416896aca5923540078a8bd13f39a190ad4acda3d0bfb3375a5dbfe6b80\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T11:18:39Z\\\",\\\"message\\\":\\\"2026-01-26T11:17:54+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_26c1a4ca-9538-4b99-995f-bb0e07bf0d9b\\\\n2026-01-26T11:17:54+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_26c1a4ca-9538-4b99-995f-bb0e07bf0d9b to /host/opt/cni/bin/\\\\n2026-01-26T11:17:54Z [verbose] multus-daemon started\\\\n2026-01-26T11:17:54Z [verbose] Readiness Indicator file check\\\\n2026-01-26T11:18:39Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the 
condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T11:17:53Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:18:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xrjnj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"p
hase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T11:17:52Z\\\"}}\" for pod \"openshift-multus\"/\"multus-hn8xr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:19:03Z is after 2025-08-24T17:21:41Z" Jan 26 11:19:03 crc kubenswrapper[4867]: I0126 11:19:03.477681 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-nbvlt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf285485-1027-4bdc-bdfa-934ef32e7f5e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:18:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:18:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:18:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:18:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://764c348147bb67a611bc5252c49dfe8f586e6a1a6d6a9e9c6674aabcc3028804\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-re
lease-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:18:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kzhnr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e1bb2e7344b3822e63a68d366f4821de6e131a4cca163ad67cf44e2b83f9ccd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:18:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kzhnr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T11:18:05Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-nbvlt\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:19:03Z is after 2025-08-24T17:21:41Z" Jan 26 11:19:03 crc kubenswrapper[4867]: I0126 11:19:03.507960 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a4b0f2b9-fac8-442e-89a1-43ebff8d4268\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:18:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:18:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2140f6b328ef3b937ef0009c1ce35265a18b51c6efb7e8785870affecd68dace\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:32Z\\\"}},\\\"volumeMou
nts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://21a6c8852b9648bd5bee43aee9c3fae16363aeaf1ad05dcaa41a04775784b108\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ef4b66f160065f550844a298bf29dcd3f12879c1312554968eba3c1b2268303\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0afa38d4
a8ffa664649d154240a8d74ee09bc074127e5edea85ec1de553723fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0afa38d4a8ffa664649d154240a8d74ee09bc074127e5edea85ec1de553723fe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T11:17:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T11:17:31Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T11:17:30Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:19:03Z is after 2025-08-24T17:21:41Z" Jan 26 11:19:03 crc kubenswrapper[4867]: I0126 11:19:03.528038 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:54Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:54Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8b0139861b48347f9916ca780499486eda856ef7c08cfe6c57201daf085bf61\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-26T11:19:03Z is after 2025-08-24T17:21:41Z" Jan 26 11:19:03 crc kubenswrapper[4867]: I0126 11:19:03.546431 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:19:03 crc kubenswrapper[4867]: I0126 11:19:03.546493 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:19:03 crc kubenswrapper[4867]: I0126 11:19:03.546514 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:19:03 crc kubenswrapper[4867]: I0126 11:19:03.546545 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:19:03 crc kubenswrapper[4867]: I0126 11:19:03.546569 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:19:03Z","lastTransitionTime":"2026-01-26T11:19:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:19:03 crc kubenswrapper[4867]: I0126 11:19:03.560620 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-9fjlf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d0cb57c7-fd32-41c2-b873-a3f017b9f1b1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:18:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:18:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:18:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c7012fb0651d46334c26887a02a5c44a8fc67c2ad3539e5321e16b57071b9a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:18:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhrdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://49541a51fced16e579b4cd10875fe7e660d7674508aba3ea44011642241e6556\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://49541a51fced16e579b4cd10875fe7e660d7674508aba3ea44011642241e6556\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T11:17:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T11:17:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhrdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://de0898b43ff7857dd40569f0552c8802c9bb5ecdf3cbd23f552dfb402a02044c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"
containerID\\\":\\\"cri-o://de0898b43ff7857dd40569f0552c8802c9bb5ecdf3cbd23f552dfb402a02044c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T11:17:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T11:17:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhrdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f299de50c4b1a26c4ed82f6f7b17c063e00b8a9de478fa0adafb4abb50195c5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f299de50c4b1a26c4ed82f6f7b17c063e00b8a9de478fa0adafb4abb50195c5f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T11:17:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T11:17:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveRead
Only\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhrdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5bf54cdee66861e0419af5e4cabd6e8410b771df717f2e958f5eda8d14c47862\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5bf54cdee66861e0419af5e4cabd6e8410b771df717f2e958f5eda8d14c47862\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T11:17:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T11:17:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhrdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d802b7e0dce47eace09ec80bf6299fbe57b588891ebab7f34de835f6a7314ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\"
:0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3d802b7e0dce47eace09ec80bf6299fbe57b588891ebab7f34de835f6a7314ac\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T11:17:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T11:17:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhrdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e77fa1081a2d8d26fe07f92196b85dd5b8eb28583ce5489e2f5cbcfc6fff8780\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e77fa1081a2d8d26fe07f92196b85dd5b8eb28583ce5489e2f5cbcfc6fff8780\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T11:18:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T11:18:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhrdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\"
,\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T11:17:52Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-9fjlf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:19:03Z is after 2025-08-24T17:21:41Z" Jan 26 11:19:03 crc kubenswrapper[4867]: I0126 11:19:03.578761 4867 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-19 18:52:52.229005091 +0000 UTC Jan 26 11:19:03 crc kubenswrapper[4867]: I0126 11:19:03.579461 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-nxkwj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"53ae46dc-30ef-4dfd-b80e-bacd7542634f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a6b647bab472836bbf6aebd01d20d18
6c5a3fb95f20cc9f44ec837d93c7df617\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b5h2x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T11:17:54Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-nxkwj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:19:03Z is after 2025-08-24T17:21:41Z" Jan 26 11:19:03 crc kubenswrapper[4867]: I0126 11:19:03.597149 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-nmdmx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed024510-edc6-4306-b54b-63facba64419\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:18:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:18:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:18:06Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:18:06Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lvcgj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lvcgj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T11:18:06Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-nmdmx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:19:03Z is after 2025-08-24T17:21:41Z" Jan 26 11:19:03 crc 
kubenswrapper[4867]: I0126 11:19:03.619995 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:48Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:19:03Z is after 2025-08-24T17:21:41Z" Jan 26 11:19:03 crc kubenswrapper[4867]: I0126 11:19:03.649256 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:19:03 crc kubenswrapper[4867]: I0126 11:19:03.649312 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:19:03 crc kubenswrapper[4867]: I0126 11:19:03.649334 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:19:03 crc kubenswrapper[4867]: I0126 11:19:03.649361 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:19:03 crc kubenswrapper[4867]: I0126 11:19:03.649377 4867 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:19:03Z","lastTransitionTime":"2026-01-26T11:19:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 11:19:03 crc kubenswrapper[4867]: I0126 11:19:03.753017 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:19:03 crc kubenswrapper[4867]: I0126 11:19:03.753135 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:19:03 crc kubenswrapper[4867]: I0126 11:19:03.753156 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:19:03 crc kubenswrapper[4867]: I0126 11:19:03.753184 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:19:03 crc kubenswrapper[4867]: I0126 11:19:03.753205 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:19:03Z","lastTransitionTime":"2026-01-26T11:19:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:19:03 crc kubenswrapper[4867]: I0126 11:19:03.856139 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:19:03 crc kubenswrapper[4867]: I0126 11:19:03.856193 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:19:03 crc kubenswrapper[4867]: I0126 11:19:03.856206 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:19:03 crc kubenswrapper[4867]: I0126 11:19:03.856244 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:19:03 crc kubenswrapper[4867]: I0126 11:19:03.856255 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:19:03Z","lastTransitionTime":"2026-01-26T11:19:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:19:03 crc kubenswrapper[4867]: I0126 11:19:03.959585 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:19:03 crc kubenswrapper[4867]: I0126 11:19:03.959642 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:19:03 crc kubenswrapper[4867]: I0126 11:19:03.959655 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:19:03 crc kubenswrapper[4867]: I0126 11:19:03.959674 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:19:03 crc kubenswrapper[4867]: I0126 11:19:03.959687 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:19:03Z","lastTransitionTime":"2026-01-26T11:19:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:19:04 crc kubenswrapper[4867]: I0126 11:19:04.063123 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:19:04 crc kubenswrapper[4867]: I0126 11:19:04.063205 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:19:04 crc kubenswrapper[4867]: I0126 11:19:04.063257 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:19:04 crc kubenswrapper[4867]: I0126 11:19:04.063292 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:19:04 crc kubenswrapper[4867]: I0126 11:19:04.063314 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:19:04Z","lastTransitionTime":"2026-01-26T11:19:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:19:04 crc kubenswrapper[4867]: I0126 11:19:04.166650 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:19:04 crc kubenswrapper[4867]: I0126 11:19:04.166733 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:19:04 crc kubenswrapper[4867]: I0126 11:19:04.166761 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:19:04 crc kubenswrapper[4867]: I0126 11:19:04.166804 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:19:04 crc kubenswrapper[4867]: I0126 11:19:04.166834 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:19:04Z","lastTransitionTime":"2026-01-26T11:19:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:19:04 crc kubenswrapper[4867]: I0126 11:19:04.269830 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:19:04 crc kubenswrapper[4867]: I0126 11:19:04.269887 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:19:04 crc kubenswrapper[4867]: I0126 11:19:04.269900 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:19:04 crc kubenswrapper[4867]: I0126 11:19:04.269923 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:19:04 crc kubenswrapper[4867]: I0126 11:19:04.269939 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:19:04Z","lastTransitionTime":"2026-01-26T11:19:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:19:04 crc kubenswrapper[4867]: I0126 11:19:04.373145 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:19:04 crc kubenswrapper[4867]: I0126 11:19:04.373191 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:19:04 crc kubenswrapper[4867]: I0126 11:19:04.373202 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:19:04 crc kubenswrapper[4867]: I0126 11:19:04.373255 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:19:04 crc kubenswrapper[4867]: I0126 11:19:04.373273 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:19:04Z","lastTransitionTime":"2026-01-26T11:19:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:19:04 crc kubenswrapper[4867]: I0126 11:19:04.476099 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:19:04 crc kubenswrapper[4867]: I0126 11:19:04.476143 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:19:04 crc kubenswrapper[4867]: I0126 11:19:04.476153 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:19:04 crc kubenswrapper[4867]: I0126 11:19:04.476171 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:19:04 crc kubenswrapper[4867]: I0126 11:19:04.476181 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:19:04Z","lastTransitionTime":"2026-01-26T11:19:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 11:19:04 crc kubenswrapper[4867]: I0126 11:19:04.563271 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 11:19:04 crc kubenswrapper[4867]: I0126 11:19:04.563292 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 11:19:04 crc kubenswrapper[4867]: I0126 11:19:04.563345 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 11:19:04 crc kubenswrapper[4867]: I0126 11:19:04.563404 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-nmdmx" Jan 26 11:19:04 crc kubenswrapper[4867]: E0126 11:19:04.563535 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 11:19:04 crc kubenswrapper[4867]: E0126 11:19:04.563619 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 11:19:04 crc kubenswrapper[4867]: E0126 11:19:04.563775 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-nmdmx" podUID="ed024510-edc6-4306-b54b-63facba64419" Jan 26 11:19:04 crc kubenswrapper[4867]: E0126 11:19:04.563865 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 11:19:04 crc kubenswrapper[4867]: I0126 11:19:04.578348 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:19:04 crc kubenswrapper[4867]: I0126 11:19:04.578394 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:19:04 crc kubenswrapper[4867]: I0126 11:19:04.578405 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:19:04 crc kubenswrapper[4867]: I0126 11:19:04.578421 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:19:04 crc kubenswrapper[4867]: I0126 11:19:04.578432 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:19:04Z","lastTransitionTime":"2026-01-26T11:19:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:19:04 crc kubenswrapper[4867]: I0126 11:19:04.579053 4867 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-12 02:11:47.957293036 +0000 UTC Jan 26 11:19:04 crc kubenswrapper[4867]: I0126 11:19:04.680563 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:19:04 crc kubenswrapper[4867]: I0126 11:19:04.680635 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:19:04 crc kubenswrapper[4867]: I0126 11:19:04.680657 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:19:04 crc kubenswrapper[4867]: I0126 11:19:04.680686 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:19:04 crc kubenswrapper[4867]: I0126 11:19:04.680710 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:19:04Z","lastTransitionTime":"2026-01-26T11:19:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:19:04 crc kubenswrapper[4867]: I0126 11:19:04.783509 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:19:04 crc kubenswrapper[4867]: I0126 11:19:04.783544 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:19:04 crc kubenswrapper[4867]: I0126 11:19:04.783555 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:19:04 crc kubenswrapper[4867]: I0126 11:19:04.783571 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:19:04 crc kubenswrapper[4867]: I0126 11:19:04.783583 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:19:04Z","lastTransitionTime":"2026-01-26T11:19:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:19:04 crc kubenswrapper[4867]: I0126 11:19:04.887620 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:19:04 crc kubenswrapper[4867]: I0126 11:19:04.887689 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:19:04 crc kubenswrapper[4867]: I0126 11:19:04.887714 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:19:04 crc kubenswrapper[4867]: I0126 11:19:04.887739 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:19:04 crc kubenswrapper[4867]: I0126 11:19:04.887754 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:19:04Z","lastTransitionTime":"2026-01-26T11:19:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:19:04 crc kubenswrapper[4867]: I0126 11:19:04.990438 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:19:04 crc kubenswrapper[4867]: I0126 11:19:04.990500 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:19:04 crc kubenswrapper[4867]: I0126 11:19:04.990519 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:19:04 crc kubenswrapper[4867]: I0126 11:19:04.990546 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:19:04 crc kubenswrapper[4867]: I0126 11:19:04.990564 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:19:04Z","lastTransitionTime":"2026-01-26T11:19:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:19:05 crc kubenswrapper[4867]: I0126 11:19:05.093457 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:19:05 crc kubenswrapper[4867]: I0126 11:19:05.093514 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:19:05 crc kubenswrapper[4867]: I0126 11:19:05.093531 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:19:05 crc kubenswrapper[4867]: I0126 11:19:05.093557 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:19:05 crc kubenswrapper[4867]: I0126 11:19:05.093575 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:19:05Z","lastTransitionTime":"2026-01-26T11:19:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:19:05 crc kubenswrapper[4867]: I0126 11:19:05.151420 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:19:05 crc kubenswrapper[4867]: I0126 11:19:05.151500 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:19:05 crc kubenswrapper[4867]: I0126 11:19:05.151515 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:19:05 crc kubenswrapper[4867]: I0126 11:19:05.151535 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:19:05 crc kubenswrapper[4867]: I0126 11:19:05.151550 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:19:05Z","lastTransitionTime":"2026-01-26T11:19:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:19:05 crc kubenswrapper[4867]: E0126 11:19:05.172164 4867 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404548Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865348Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T11:19:05Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T11:19:05Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T11:19:05Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T11:19:05Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T11:19:05Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T11:19:05Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T11:19:05Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T11:19:05Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b5a0e2ac-5e06-462a-99e0-d57b8e5cb754\\\",\\\"systemUUID\\\":\\\"db81a289-a49c-46ba-99b1-fd2eecfd5410\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:19:05Z is after 2025-08-24T17:21:41Z" Jan 26 11:19:05 crc kubenswrapper[4867]: I0126 11:19:05.178195 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:19:05 crc kubenswrapper[4867]: I0126 11:19:05.178479 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:19:05 crc kubenswrapper[4867]: I0126 11:19:05.178555 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:19:05 crc kubenswrapper[4867]: I0126 11:19:05.178592 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:19:05 crc kubenswrapper[4867]: I0126 11:19:05.178617 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:19:05Z","lastTransitionTime":"2026-01-26T11:19:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:19:05 crc kubenswrapper[4867]: I0126 11:19:05.178887 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-p8ngn_4a3be637-cf04-4c55-bf72-67fdad83cc44/ovnkube-controller/3.log" Jan 26 11:19:05 crc kubenswrapper[4867]: E0126 11:19:05.195468 4867 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404548Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865348Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T11:19:05Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T11:19:05Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T11:19:05Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T11:19:05Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T11:19:05Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T11:19:05Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T11:19:05Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T11:19:05Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin 
returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae66
9\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift
-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\
\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":
485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b5a0e2ac-5e06-462a-99e0-d57b8e5cb754\\\",\\\"sys
temUUID\\\":\\\"db81a289-a49c-46ba-99b1-fd2eecfd5410\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:19:05Z is after 2025-08-24T17:21:41Z" Jan 26 11:19:05 crc kubenswrapper[4867]: I0126 11:19:05.199691 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:19:05 crc kubenswrapper[4867]: I0126 11:19:05.199757 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:19:05 crc kubenswrapper[4867]: I0126 11:19:05.199773 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:19:05 crc kubenswrapper[4867]: I0126 11:19:05.199802 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:19:05 crc kubenswrapper[4867]: I0126 11:19:05.199818 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:19:05Z","lastTransitionTime":"2026-01-26T11:19:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:19:05 crc kubenswrapper[4867]: E0126 11:19:05.213095 4867 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404548Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865348Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T11:19:05Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T11:19:05Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T11:19:05Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T11:19:05Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T11:19:05Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T11:19:05Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T11:19:05Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T11:19:05Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b5a0e2ac-5e06-462a-99e0-d57b8e5cb754\\\",\\\"systemUUID\\\":\\\"db81a289-a49c-46ba-99b1-fd2eecfd5410\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:19:05Z is after 2025-08-24T17:21:41Z" Jan 26 11:19:05 crc kubenswrapper[4867]: I0126 11:19:05.216553 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:19:05 crc kubenswrapper[4867]: I0126 11:19:05.216584 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:19:05 crc kubenswrapper[4867]: I0126 11:19:05.216594 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:19:05 crc kubenswrapper[4867]: I0126 11:19:05.216612 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:19:05 crc kubenswrapper[4867]: I0126 11:19:05.216622 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:19:05Z","lastTransitionTime":"2026-01-26T11:19:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:19:05 crc kubenswrapper[4867]: E0126 11:19:05.229362 4867 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404548Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865348Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T11:19:05Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T11:19:05Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T11:19:05Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T11:19:05Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T11:19:05Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T11:19:05Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T11:19:05Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T11:19:05Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b5a0e2ac-5e06-462a-99e0-d57b8e5cb754\\\",\\\"systemUUID\\\":\\\"db81a289-a49c-46ba-99b1-fd2eecfd5410\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:19:05Z is after 2025-08-24T17:21:41Z" Jan 26 11:19:05 crc kubenswrapper[4867]: I0126 11:19:05.232976 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:19:05 crc kubenswrapper[4867]: I0126 11:19:05.233017 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:19:05 crc kubenswrapper[4867]: I0126 11:19:05.233026 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:19:05 crc kubenswrapper[4867]: I0126 11:19:05.233041 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:19:05 crc kubenswrapper[4867]: I0126 11:19:05.233050 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:19:05Z","lastTransitionTime":"2026-01-26T11:19:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:19:05 crc kubenswrapper[4867]: E0126 11:19:05.246340 4867 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404548Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865348Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T11:19:05Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T11:19:05Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T11:19:05Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T11:19:05Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T11:19:05Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T11:19:05Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T11:19:05Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T11:19:05Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b5a0e2ac-5e06-462a-99e0-d57b8e5cb754\\\",\\\"systemUUID\\\":\\\"db81a289-a49c-46ba-99b1-fd2eecfd5410\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:19:05Z is after 2025-08-24T17:21:41Z" Jan 26 11:19:05 crc kubenswrapper[4867]: E0126 11:19:05.246471 4867 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 26 11:19:05 crc kubenswrapper[4867]: I0126 11:19:05.247749 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:19:05 crc kubenswrapper[4867]: I0126 11:19:05.247810 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:19:05 crc kubenswrapper[4867]: I0126 11:19:05.247823 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:19:05 crc kubenswrapper[4867]: I0126 11:19:05.247842 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:19:05 crc kubenswrapper[4867]: I0126 11:19:05.247855 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:19:05Z","lastTransitionTime":"2026-01-26T11:19:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in 
/etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 11:19:05 crc kubenswrapper[4867]: I0126 11:19:05.351087 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:19:05 crc kubenswrapper[4867]: I0126 11:19:05.351140 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:19:05 crc kubenswrapper[4867]: I0126 11:19:05.351155 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:19:05 crc kubenswrapper[4867]: I0126 11:19:05.351177 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:19:05 crc kubenswrapper[4867]: I0126 11:19:05.351193 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:19:05Z","lastTransitionTime":"2026-01-26T11:19:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:19:05 crc kubenswrapper[4867]: I0126 11:19:05.453933 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:19:05 crc kubenswrapper[4867]: I0126 11:19:05.453974 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:19:05 crc kubenswrapper[4867]: I0126 11:19:05.453987 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:19:05 crc kubenswrapper[4867]: I0126 11:19:05.454003 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:19:05 crc kubenswrapper[4867]: I0126 11:19:05.454013 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:19:05Z","lastTransitionTime":"2026-01-26T11:19:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:19:05 crc kubenswrapper[4867]: I0126 11:19:05.557085 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:19:05 crc kubenswrapper[4867]: I0126 11:19:05.557153 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:19:05 crc kubenswrapper[4867]: I0126 11:19:05.557175 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:19:05 crc kubenswrapper[4867]: I0126 11:19:05.557203 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:19:05 crc kubenswrapper[4867]: I0126 11:19:05.557282 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:19:05Z","lastTransitionTime":"2026-01-26T11:19:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:19:05 crc kubenswrapper[4867]: I0126 11:19:05.579498 4867 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-10 12:41:27.923849623 +0000 UTC Jan 26 11:19:05 crc kubenswrapper[4867]: I0126 11:19:05.660000 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:19:05 crc kubenswrapper[4867]: I0126 11:19:05.660045 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:19:05 crc kubenswrapper[4867]: I0126 11:19:05.660060 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:19:05 crc kubenswrapper[4867]: I0126 11:19:05.660084 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:19:05 crc kubenswrapper[4867]: I0126 11:19:05.660102 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:19:05Z","lastTransitionTime":"2026-01-26T11:19:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:19:05 crc kubenswrapper[4867]: I0126 11:19:05.762118 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:19:05 crc kubenswrapper[4867]: I0126 11:19:05.762197 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:19:05 crc kubenswrapper[4867]: I0126 11:19:05.762250 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:19:05 crc kubenswrapper[4867]: I0126 11:19:05.762283 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:19:05 crc kubenswrapper[4867]: I0126 11:19:05.762310 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:19:05Z","lastTransitionTime":"2026-01-26T11:19:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:19:05 crc kubenswrapper[4867]: I0126 11:19:05.865097 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:19:05 crc kubenswrapper[4867]: I0126 11:19:05.865127 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:19:05 crc kubenswrapper[4867]: I0126 11:19:05.865135 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:19:05 crc kubenswrapper[4867]: I0126 11:19:05.865149 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:19:05 crc kubenswrapper[4867]: I0126 11:19:05.865158 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:19:05Z","lastTransitionTime":"2026-01-26T11:19:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:19:05 crc kubenswrapper[4867]: I0126 11:19:05.968600 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:19:05 crc kubenswrapper[4867]: I0126 11:19:05.968660 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:19:05 crc kubenswrapper[4867]: I0126 11:19:05.968673 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:19:05 crc kubenswrapper[4867]: I0126 11:19:05.968695 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:19:05 crc kubenswrapper[4867]: I0126 11:19:05.968710 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:19:05Z","lastTransitionTime":"2026-01-26T11:19:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:19:06 crc kubenswrapper[4867]: I0126 11:19:06.072124 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:19:06 crc kubenswrapper[4867]: I0126 11:19:06.072193 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:19:06 crc kubenswrapper[4867]: I0126 11:19:06.072209 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:19:06 crc kubenswrapper[4867]: I0126 11:19:06.072248 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:19:06 crc kubenswrapper[4867]: I0126 11:19:06.072262 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:19:06Z","lastTransitionTime":"2026-01-26T11:19:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:19:06 crc kubenswrapper[4867]: I0126 11:19:06.175949 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:19:06 crc kubenswrapper[4867]: I0126 11:19:06.176026 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:19:06 crc kubenswrapper[4867]: I0126 11:19:06.176041 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:19:06 crc kubenswrapper[4867]: I0126 11:19:06.176064 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:19:06 crc kubenswrapper[4867]: I0126 11:19:06.176079 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:19:06Z","lastTransitionTime":"2026-01-26T11:19:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:19:06 crc kubenswrapper[4867]: I0126 11:19:06.283333 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:19:06 crc kubenswrapper[4867]: I0126 11:19:06.283411 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:19:06 crc kubenswrapper[4867]: I0126 11:19:06.283421 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:19:06 crc kubenswrapper[4867]: I0126 11:19:06.283440 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:19:06 crc kubenswrapper[4867]: I0126 11:19:06.283452 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:19:06Z","lastTransitionTime":"2026-01-26T11:19:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:19:06 crc kubenswrapper[4867]: I0126 11:19:06.386342 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:19:06 crc kubenswrapper[4867]: I0126 11:19:06.386417 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:19:06 crc kubenswrapper[4867]: I0126 11:19:06.386441 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:19:06 crc kubenswrapper[4867]: I0126 11:19:06.386468 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:19:06 crc kubenswrapper[4867]: I0126 11:19:06.386483 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:19:06Z","lastTransitionTime":"2026-01-26T11:19:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:19:06 crc kubenswrapper[4867]: I0126 11:19:06.490391 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:19:06 crc kubenswrapper[4867]: I0126 11:19:06.490470 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:19:06 crc kubenswrapper[4867]: I0126 11:19:06.490495 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:19:06 crc kubenswrapper[4867]: I0126 11:19:06.490525 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:19:06 crc kubenswrapper[4867]: I0126 11:19:06.490549 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:19:06Z","lastTransitionTime":"2026-01-26T11:19:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 11:19:06 crc kubenswrapper[4867]: I0126 11:19:06.563340 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 11:19:06 crc kubenswrapper[4867]: E0126 11:19:06.563487 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 11:19:06 crc kubenswrapper[4867]: I0126 11:19:06.563672 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 11:19:06 crc kubenswrapper[4867]: I0126 11:19:06.563734 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-nmdmx" Jan 26 11:19:06 crc kubenswrapper[4867]: I0126 11:19:06.563896 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 11:19:06 crc kubenswrapper[4867]: E0126 11:19:06.564059 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 11:19:06 crc kubenswrapper[4867]: E0126 11:19:06.564211 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-nmdmx" podUID="ed024510-edc6-4306-b54b-63facba64419" Jan 26 11:19:06 crc kubenswrapper[4867]: E0126 11:19:06.564431 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 11:19:06 crc kubenswrapper[4867]: I0126 11:19:06.580632 4867 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-22 14:57:56.807568422 +0000 UTC Jan 26 11:19:06 crc kubenswrapper[4867]: I0126 11:19:06.592895 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:19:06 crc kubenswrapper[4867]: I0126 11:19:06.593452 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:19:06 crc kubenswrapper[4867]: I0126 11:19:06.593565 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:19:06 crc kubenswrapper[4867]: I0126 11:19:06.593648 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:19:06 crc kubenswrapper[4867]: I0126 11:19:06.593715 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:19:06Z","lastTransitionTime":"2026-01-26T11:19:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:19:06 crc kubenswrapper[4867]: I0126 11:19:06.696247 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:19:06 crc kubenswrapper[4867]: I0126 11:19:06.696299 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:19:06 crc kubenswrapper[4867]: I0126 11:19:06.696311 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:19:06 crc kubenswrapper[4867]: I0126 11:19:06.696333 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:19:06 crc kubenswrapper[4867]: I0126 11:19:06.696345 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:19:06Z","lastTransitionTime":"2026-01-26T11:19:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:19:06 crc kubenswrapper[4867]: I0126 11:19:06.799338 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:19:06 crc kubenswrapper[4867]: I0126 11:19:06.799412 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:19:06 crc kubenswrapper[4867]: I0126 11:19:06.799448 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:19:06 crc kubenswrapper[4867]: I0126 11:19:06.799478 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:19:06 crc kubenswrapper[4867]: I0126 11:19:06.799502 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:19:06Z","lastTransitionTime":"2026-01-26T11:19:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:19:06 crc kubenswrapper[4867]: I0126 11:19:06.902595 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:19:06 crc kubenswrapper[4867]: I0126 11:19:06.902915 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:19:06 crc kubenswrapper[4867]: I0126 11:19:06.903035 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:19:06 crc kubenswrapper[4867]: I0126 11:19:06.903159 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:19:06 crc kubenswrapper[4867]: I0126 11:19:06.903305 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:19:06Z","lastTransitionTime":"2026-01-26T11:19:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:19:07 crc kubenswrapper[4867]: I0126 11:19:07.006961 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:19:07 crc kubenswrapper[4867]: I0126 11:19:07.007007 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:19:07 crc kubenswrapper[4867]: I0126 11:19:07.007018 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:19:07 crc kubenswrapper[4867]: I0126 11:19:07.007034 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:19:07 crc kubenswrapper[4867]: I0126 11:19:07.007047 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:19:07Z","lastTransitionTime":"2026-01-26T11:19:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:19:07 crc kubenswrapper[4867]: I0126 11:19:07.108952 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:19:07 crc kubenswrapper[4867]: I0126 11:19:07.109017 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:19:07 crc kubenswrapper[4867]: I0126 11:19:07.109038 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:19:07 crc kubenswrapper[4867]: I0126 11:19:07.109065 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:19:07 crc kubenswrapper[4867]: I0126 11:19:07.109084 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:19:07Z","lastTransitionTime":"2026-01-26T11:19:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:19:07 crc kubenswrapper[4867]: I0126 11:19:07.211678 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:19:07 crc kubenswrapper[4867]: I0126 11:19:07.211722 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:19:07 crc kubenswrapper[4867]: I0126 11:19:07.211732 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:19:07 crc kubenswrapper[4867]: I0126 11:19:07.211747 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:19:07 crc kubenswrapper[4867]: I0126 11:19:07.211758 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:19:07Z","lastTransitionTime":"2026-01-26T11:19:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:19:07 crc kubenswrapper[4867]: I0126 11:19:07.315587 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:19:07 crc kubenswrapper[4867]: I0126 11:19:07.315673 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:19:07 crc kubenswrapper[4867]: I0126 11:19:07.315689 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:19:07 crc kubenswrapper[4867]: I0126 11:19:07.315712 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:19:07 crc kubenswrapper[4867]: I0126 11:19:07.315727 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:19:07Z","lastTransitionTime":"2026-01-26T11:19:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:19:07 crc kubenswrapper[4867]: I0126 11:19:07.419196 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:19:07 crc kubenswrapper[4867]: I0126 11:19:07.419302 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:19:07 crc kubenswrapper[4867]: I0126 11:19:07.419319 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:19:07 crc kubenswrapper[4867]: I0126 11:19:07.419343 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:19:07 crc kubenswrapper[4867]: I0126 11:19:07.419360 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:19:07Z","lastTransitionTime":"2026-01-26T11:19:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:19:07 crc kubenswrapper[4867]: I0126 11:19:07.522973 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:19:07 crc kubenswrapper[4867]: I0126 11:19:07.523028 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:19:07 crc kubenswrapper[4867]: I0126 11:19:07.523042 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:19:07 crc kubenswrapper[4867]: I0126 11:19:07.523063 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:19:07 crc kubenswrapper[4867]: I0126 11:19:07.523077 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:19:07Z","lastTransitionTime":"2026-01-26T11:19:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:19:07 crc kubenswrapper[4867]: I0126 11:19:07.581729 4867 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-07 22:47:25.434735355 +0000 UTC Jan 26 11:19:07 crc kubenswrapper[4867]: I0126 11:19:07.625611 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:19:07 crc kubenswrapper[4867]: I0126 11:19:07.625645 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:19:07 crc kubenswrapper[4867]: I0126 11:19:07.625653 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:19:07 crc kubenswrapper[4867]: I0126 11:19:07.625667 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:19:07 crc kubenswrapper[4867]: I0126 11:19:07.625702 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:19:07Z","lastTransitionTime":"2026-01-26T11:19:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:19:07 crc kubenswrapper[4867]: I0126 11:19:07.728876 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:19:07 crc kubenswrapper[4867]: I0126 11:19:07.728946 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:19:07 crc kubenswrapper[4867]: I0126 11:19:07.728969 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:19:07 crc kubenswrapper[4867]: I0126 11:19:07.728998 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:19:07 crc kubenswrapper[4867]: I0126 11:19:07.729020 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:19:07Z","lastTransitionTime":"2026-01-26T11:19:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:19:07 crc kubenswrapper[4867]: I0126 11:19:07.832636 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:19:07 crc kubenswrapper[4867]: I0126 11:19:07.832700 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:19:07 crc kubenswrapper[4867]: I0126 11:19:07.832709 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:19:07 crc kubenswrapper[4867]: I0126 11:19:07.832723 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:19:07 crc kubenswrapper[4867]: I0126 11:19:07.832734 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:19:07Z","lastTransitionTime":"2026-01-26T11:19:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:19:07 crc kubenswrapper[4867]: I0126 11:19:07.935872 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:19:07 crc kubenswrapper[4867]: I0126 11:19:07.935933 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:19:07 crc kubenswrapper[4867]: I0126 11:19:07.935954 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:19:07 crc kubenswrapper[4867]: I0126 11:19:07.935978 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:19:07 crc kubenswrapper[4867]: I0126 11:19:07.935999 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:19:07Z","lastTransitionTime":"2026-01-26T11:19:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:19:08 crc kubenswrapper[4867]: I0126 11:19:08.039637 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:19:08 crc kubenswrapper[4867]: I0126 11:19:08.039705 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:19:08 crc kubenswrapper[4867]: I0126 11:19:08.039729 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:19:08 crc kubenswrapper[4867]: I0126 11:19:08.039760 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:19:08 crc kubenswrapper[4867]: I0126 11:19:08.039782 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:19:08Z","lastTransitionTime":"2026-01-26T11:19:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:19:08 crc kubenswrapper[4867]: I0126 11:19:08.142622 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:19:08 crc kubenswrapper[4867]: I0126 11:19:08.142713 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:19:08 crc kubenswrapper[4867]: I0126 11:19:08.142730 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:19:08 crc kubenswrapper[4867]: I0126 11:19:08.142757 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:19:08 crc kubenswrapper[4867]: I0126 11:19:08.142775 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:19:08Z","lastTransitionTime":"2026-01-26T11:19:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:19:08 crc kubenswrapper[4867]: I0126 11:19:08.245572 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:19:08 crc kubenswrapper[4867]: I0126 11:19:08.245646 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:19:08 crc kubenswrapper[4867]: I0126 11:19:08.245668 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:19:08 crc kubenswrapper[4867]: I0126 11:19:08.245699 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:19:08 crc kubenswrapper[4867]: I0126 11:19:08.245725 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:19:08Z","lastTransitionTime":"2026-01-26T11:19:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:19:08 crc kubenswrapper[4867]: I0126 11:19:08.350004 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:19:08 crc kubenswrapper[4867]: I0126 11:19:08.350070 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:19:08 crc kubenswrapper[4867]: I0126 11:19:08.350092 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:19:08 crc kubenswrapper[4867]: I0126 11:19:08.350125 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:19:08 crc kubenswrapper[4867]: I0126 11:19:08.350144 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:19:08Z","lastTransitionTime":"2026-01-26T11:19:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:19:08 crc kubenswrapper[4867]: I0126 11:19:08.453412 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:19:08 crc kubenswrapper[4867]: I0126 11:19:08.453495 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:19:08 crc kubenswrapper[4867]: I0126 11:19:08.453521 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:19:08 crc kubenswrapper[4867]: I0126 11:19:08.453552 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:19:08 crc kubenswrapper[4867]: I0126 11:19:08.453575 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:19:08Z","lastTransitionTime":"2026-01-26T11:19:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:19:08 crc kubenswrapper[4867]: I0126 11:19:08.556844 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:19:08 crc kubenswrapper[4867]: I0126 11:19:08.556895 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:19:08 crc kubenswrapper[4867]: I0126 11:19:08.556906 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:19:08 crc kubenswrapper[4867]: I0126 11:19:08.556930 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:19:08 crc kubenswrapper[4867]: I0126 11:19:08.556943 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:19:08Z","lastTransitionTime":"2026-01-26T11:19:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 11:19:08 crc kubenswrapper[4867]: I0126 11:19:08.563524 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 11:19:08 crc kubenswrapper[4867]: I0126 11:19:08.563547 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-nmdmx" Jan 26 11:19:08 crc kubenswrapper[4867]: E0126 11:19:08.563652 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 11:19:08 crc kubenswrapper[4867]: I0126 11:19:08.563521 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 11:19:08 crc kubenswrapper[4867]: I0126 11:19:08.563706 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 11:19:08 crc kubenswrapper[4867]: E0126 11:19:08.563776 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 11:19:08 crc kubenswrapper[4867]: E0126 11:19:08.563910 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 11:19:08 crc kubenswrapper[4867]: E0126 11:19:08.563995 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-nmdmx" podUID="ed024510-edc6-4306-b54b-63facba64419" Jan 26 11:19:08 crc kubenswrapper[4867]: I0126 11:19:08.582525 4867 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-17 17:52:54.64833764 +0000 UTC Jan 26 11:19:08 crc kubenswrapper[4867]: I0126 11:19:08.662774 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:19:08 crc kubenswrapper[4867]: I0126 11:19:08.662822 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:19:08 crc kubenswrapper[4867]: I0126 11:19:08.662836 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:19:08 crc kubenswrapper[4867]: I0126 11:19:08.662855 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:19:08 crc kubenswrapper[4867]: I0126 11:19:08.662869 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:19:08Z","lastTransitionTime":"2026-01-26T11:19:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:19:08 crc kubenswrapper[4867]: I0126 11:19:08.766008 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:19:08 crc kubenswrapper[4867]: I0126 11:19:08.766049 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:19:08 crc kubenswrapper[4867]: I0126 11:19:08.766058 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:19:08 crc kubenswrapper[4867]: I0126 11:19:08.766071 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:19:08 crc kubenswrapper[4867]: I0126 11:19:08.766080 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:19:08Z","lastTransitionTime":"2026-01-26T11:19:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:19:08 crc kubenswrapper[4867]: I0126 11:19:08.868162 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:19:08 crc kubenswrapper[4867]: I0126 11:19:08.868198 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:19:08 crc kubenswrapper[4867]: I0126 11:19:08.868207 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:19:08 crc kubenswrapper[4867]: I0126 11:19:08.868239 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:19:08 crc kubenswrapper[4867]: I0126 11:19:08.868248 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:19:08Z","lastTransitionTime":"2026-01-26T11:19:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:19:08 crc kubenswrapper[4867]: I0126 11:19:08.971276 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:19:08 crc kubenswrapper[4867]: I0126 11:19:08.971336 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:19:08 crc kubenswrapper[4867]: I0126 11:19:08.971354 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:19:08 crc kubenswrapper[4867]: I0126 11:19:08.971379 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:19:08 crc kubenswrapper[4867]: I0126 11:19:08.971396 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:19:08Z","lastTransitionTime":"2026-01-26T11:19:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:19:09 crc kubenswrapper[4867]: I0126 11:19:09.074263 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:19:09 crc kubenswrapper[4867]: I0126 11:19:09.074318 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:19:09 crc kubenswrapper[4867]: I0126 11:19:09.074333 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:19:09 crc kubenswrapper[4867]: I0126 11:19:09.074354 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:19:09 crc kubenswrapper[4867]: I0126 11:19:09.074369 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:19:09Z","lastTransitionTime":"2026-01-26T11:19:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:19:09 crc kubenswrapper[4867]: I0126 11:19:09.177034 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:19:09 crc kubenswrapper[4867]: I0126 11:19:09.177093 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:19:09 crc kubenswrapper[4867]: I0126 11:19:09.177106 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:19:09 crc kubenswrapper[4867]: I0126 11:19:09.177125 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:19:09 crc kubenswrapper[4867]: I0126 11:19:09.177139 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:19:09Z","lastTransitionTime":"2026-01-26T11:19:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:19:09 crc kubenswrapper[4867]: I0126 11:19:09.279442 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:19:09 crc kubenswrapper[4867]: I0126 11:19:09.279483 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:19:09 crc kubenswrapper[4867]: I0126 11:19:09.279495 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:19:09 crc kubenswrapper[4867]: I0126 11:19:09.279512 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:19:09 crc kubenswrapper[4867]: I0126 11:19:09.279524 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:19:09Z","lastTransitionTime":"2026-01-26T11:19:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:19:09 crc kubenswrapper[4867]: I0126 11:19:09.382184 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:19:09 crc kubenswrapper[4867]: I0126 11:19:09.382242 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:19:09 crc kubenswrapper[4867]: I0126 11:19:09.382251 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:19:09 crc kubenswrapper[4867]: I0126 11:19:09.382263 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:19:09 crc kubenswrapper[4867]: I0126 11:19:09.382272 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:19:09Z","lastTransitionTime":"2026-01-26T11:19:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:19:09 crc kubenswrapper[4867]: I0126 11:19:09.485520 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:19:09 crc kubenswrapper[4867]: I0126 11:19:09.485580 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:19:09 crc kubenswrapper[4867]: I0126 11:19:09.485591 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:19:09 crc kubenswrapper[4867]: I0126 11:19:09.485611 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:19:09 crc kubenswrapper[4867]: I0126 11:19:09.485647 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:19:09Z","lastTransitionTime":"2026-01-26T11:19:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:19:09 crc kubenswrapper[4867]: I0126 11:19:09.583413 4867 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-07 13:11:38.358323558 +0000 UTC Jan 26 11:19:09 crc kubenswrapper[4867]: I0126 11:19:09.589512 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:19:09 crc kubenswrapper[4867]: I0126 11:19:09.589557 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:19:09 crc kubenswrapper[4867]: I0126 11:19:09.589596 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:19:09 crc kubenswrapper[4867]: I0126 11:19:09.589615 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:19:09 crc kubenswrapper[4867]: I0126 11:19:09.589628 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:19:09Z","lastTransitionTime":"2026-01-26T11:19:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:19:09 crc kubenswrapper[4867]: I0126 11:19:09.691732 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:19:09 crc kubenswrapper[4867]: I0126 11:19:09.691770 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:19:09 crc kubenswrapper[4867]: I0126 11:19:09.691781 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:19:09 crc kubenswrapper[4867]: I0126 11:19:09.691798 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:19:09 crc kubenswrapper[4867]: I0126 11:19:09.691809 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:19:09Z","lastTransitionTime":"2026-01-26T11:19:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:19:09 crc kubenswrapper[4867]: I0126 11:19:09.794323 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:19:09 crc kubenswrapper[4867]: I0126 11:19:09.794591 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:19:09 crc kubenswrapper[4867]: I0126 11:19:09.794682 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:19:09 crc kubenswrapper[4867]: I0126 11:19:09.794758 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:19:09 crc kubenswrapper[4867]: I0126 11:19:09.794843 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:19:09Z","lastTransitionTime":"2026-01-26T11:19:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:19:09 crc kubenswrapper[4867]: I0126 11:19:09.897385 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:19:09 crc kubenswrapper[4867]: I0126 11:19:09.897527 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:19:09 crc kubenswrapper[4867]: I0126 11:19:09.897541 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:19:09 crc kubenswrapper[4867]: I0126 11:19:09.897556 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:19:09 crc kubenswrapper[4867]: I0126 11:19:09.897567 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:19:09Z","lastTransitionTime":"2026-01-26T11:19:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:19:10 crc kubenswrapper[4867]: I0126 11:19:10.000942 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:19:10 crc kubenswrapper[4867]: I0126 11:19:10.001325 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:19:10 crc kubenswrapper[4867]: I0126 11:19:10.001438 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:19:10 crc kubenswrapper[4867]: I0126 11:19:10.001539 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:19:10 crc kubenswrapper[4867]: I0126 11:19:10.001623 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:19:10Z","lastTransitionTime":"2026-01-26T11:19:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:19:10 crc kubenswrapper[4867]: I0126 11:19:10.104978 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:19:10 crc kubenswrapper[4867]: I0126 11:19:10.105039 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:19:10 crc kubenswrapper[4867]: I0126 11:19:10.105054 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:19:10 crc kubenswrapper[4867]: I0126 11:19:10.105078 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:19:10 crc kubenswrapper[4867]: I0126 11:19:10.105096 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:19:10Z","lastTransitionTime":"2026-01-26T11:19:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:19:10 crc kubenswrapper[4867]: I0126 11:19:10.207936 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:19:10 crc kubenswrapper[4867]: I0126 11:19:10.208558 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:19:10 crc kubenswrapper[4867]: I0126 11:19:10.208708 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:19:10 crc kubenswrapper[4867]: I0126 11:19:10.208862 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:19:10 crc kubenswrapper[4867]: I0126 11:19:10.209030 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:19:10Z","lastTransitionTime":"2026-01-26T11:19:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:19:10 crc kubenswrapper[4867]: I0126 11:19:10.311517 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:19:10 crc kubenswrapper[4867]: I0126 11:19:10.311563 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:19:10 crc kubenswrapper[4867]: I0126 11:19:10.311573 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:19:10 crc kubenswrapper[4867]: I0126 11:19:10.311609 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:19:10 crc kubenswrapper[4867]: I0126 11:19:10.311620 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:19:10Z","lastTransitionTime":"2026-01-26T11:19:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:19:10 crc kubenswrapper[4867]: I0126 11:19:10.414744 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:19:10 crc kubenswrapper[4867]: I0126 11:19:10.415096 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:19:10 crc kubenswrapper[4867]: I0126 11:19:10.415284 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:19:10 crc kubenswrapper[4867]: I0126 11:19:10.415395 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:19:10 crc kubenswrapper[4867]: I0126 11:19:10.415690 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:19:10Z","lastTransitionTime":"2026-01-26T11:19:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:19:10 crc kubenswrapper[4867]: I0126 11:19:10.518762 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:19:10 crc kubenswrapper[4867]: I0126 11:19:10.518809 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:19:10 crc kubenswrapper[4867]: I0126 11:19:10.518821 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:19:10 crc kubenswrapper[4867]: I0126 11:19:10.518838 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:19:10 crc kubenswrapper[4867]: I0126 11:19:10.518851 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:19:10Z","lastTransitionTime":"2026-01-26T11:19:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 11:19:10 crc kubenswrapper[4867]: I0126 11:19:10.563333 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 11:19:10 crc kubenswrapper[4867]: I0126 11:19:10.563462 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 11:19:10 crc kubenswrapper[4867]: I0126 11:19:10.563508 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-nmdmx" Jan 26 11:19:10 crc kubenswrapper[4867]: E0126 11:19:10.563569 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 11:19:10 crc kubenswrapper[4867]: I0126 11:19:10.563645 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 11:19:10 crc kubenswrapper[4867]: E0126 11:19:10.563826 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 11:19:10 crc kubenswrapper[4867]: E0126 11:19:10.563889 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-nmdmx" podUID="ed024510-edc6-4306-b54b-63facba64419" Jan 26 11:19:10 crc kubenswrapper[4867]: E0126 11:19:10.563981 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 11:19:10 crc kubenswrapper[4867]: I0126 11:19:10.578540 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-nbvlt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf285485-1027-4bdc-bdfa-934ef32e7f5e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:18:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:18:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:18:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:18:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://764c348147bb67a611bc5252c49dfe8f586e6a1a6d6a9e9c6674aabcc3028804\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",
\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:18:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kzhnr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e1bb2e7344b3822e63a68d366f4821de6e131a4cca163ad67cf44e2b83f9ccd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:18:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kzhnr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T11:18:05Z\\\"}}\" for pod 
\"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-nbvlt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:19:10Z is after 2025-08-24T17:21:41Z" Jan 26 11:19:10 crc kubenswrapper[4867]: I0126 11:19:10.584348 4867 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-30 12:15:38.994970162 +0000 UTC Jan 26 11:19:10 crc kubenswrapper[4867]: I0126 11:19:10.595442 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a4b0f2b9-fac8-442e-89a1-43ebff8d4268\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:18:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:18:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2140f6b328ef3b937ef0009c1ce35265a18b51c6efb7e8785870affecd68dace\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay
.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://21a6c8852b9648bd5bee43aee9c3fae16363aeaf1ad05dcaa41a04775784b108\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ef4b66f160065f550844a298bf29dcd3f12879c1312554968eba3c1b2268303\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/
etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0afa38d4a8ffa664649d154240a8d74ee09bc074127e5edea85ec1de553723fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0afa38d4a8ffa664649d154240a8d74ee09bc074127e5edea85ec1de553723fe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T11:17:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T11:17:31Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T11:17:30Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:19:10Z is after 2025-08-24T17:21:41Z" Jan 26 11:19:10 crc kubenswrapper[4867]: I0126 11:19:10.610474 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4b812c7c-622a-4255-ae3d-e48a62132126\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8ca8a196d11248401898d6c6591931638d1ebf8675414d0e588454dbc1da626\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c958441373fca0b105ec5f119f7a2aca7557f58d49ccba356f824708b3602e3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c958441373fca0b105ec5f119f7a2aca7557f58d49ccba356f824708b3602e3c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T11:17:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T11:17:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T11:17:30Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:19:10Z is after 2025-08-24T17:21:41Z" Jan 26 11:19:10 crc kubenswrapper[4867]: I0126 11:19:10.621714 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:19:10 crc kubenswrapper[4867]: I0126 11:19:10.621764 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:19:10 crc kubenswrapper[4867]: I0126 11:19:10.621777 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:19:10 crc kubenswrapper[4867]: I0126 11:19:10.621797 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:19:10 crc kubenswrapper[4867]: I0126 11:19:10.621813 4867 setters.go:603] "Node 
became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:19:10Z","lastTransitionTime":"2026-01-26T11:19:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 11:19:10 crc kubenswrapper[4867]: I0126 11:19:10.628727 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e36e94ce-bdbb-4b65-b38a-d591d99ec132\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:18:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:18:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://60d611d38d0a8a4fafcb8c6a4598878531015a3c9a31bee179693c997be0f5ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\
\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e7960050fb97fd034b58d207bdcc7aa794be098102ebc9b50f3b011c2079a292\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8e9df6d5bbbaa36f6ffe9e36239da17b2a7d5d85edebc9f108ede9bf7b7c0a2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"
containerID\\\":\\\"cri-o://08f8529b91df2d7dcbf1fac6018657124b3922dfd4ad68bd7cf315594d6e7f81\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4d1617eeb2f19df288b984f761116ae902263d7ccf8406998d0e323fb7d52390\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-26T11:17:50Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0126 11:17:44.107498 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0126 11:17:44.109064 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1825304579/tls.crt::/tmp/serving-cert-1825304579/tls.key\\\\\\\"\\\\nI0126 11:17:50.303460 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0126 11:17:50.308764 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0126 11:17:50.308805 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0126 11:17:50.308906 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0126 11:17:50.308924 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0126 11:17:50.322907 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0126 11:17:50.322946 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 11:17:50.322953 1 secure_serving.go:69] 
Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 11:17:50.322957 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0126 11:17:50.322960 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0126 11:17:50.322963 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0126 11:17:50.322966 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0126 11:17:50.323261 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0126 11:17:50.330430 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T11:17:33Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d41fab03911c1d3b84f4c34da0d1c8e82c8f2691450b5f9663e31ce329bf04b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:33Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initC
ontainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b69d7e45a6059b1e01f80cc4efb536069c91cae05c8912d18fd48f25c6ebf51e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b69d7e45a6059b1e01f80cc4efb536069c91cae05c8912d18fd48f25c6ebf51e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T11:17:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T11:17:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T11:17:30Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:19:10Z is after 2025-08-24T17:21:41Z" Jan 26 11:19:10 crc kubenswrapper[4867]: I0126 11:19:10.646134 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0d7b1047-65d1-4695-b5a5-95ea6a8c3b54\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://237a29b411629c3650250f62fdec7d2c5412763d6cabb2ee0fe8e5e19e320e15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ef0db1149c70e41b7f44d00d43f883d14fcf635d6f34462dd37c8a550a15a4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://45d3dc673a8d20531ae5077a1196a368cac64f6afd15c4eb8f3910ee9bf51317\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc2ddda5781094e59bb54ddf724fa4f948a047b17b01f0185ef651d90a66f36f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-26T11:17:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T11:17:30Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:19:10Z is after 2025-08-24T17:21:41Z" Jan 26 11:19:10 crc kubenswrapper[4867]: I0126 11:19:10.661894 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c572224f36b5a0fc8a86b3e61aa97b96f5bd189d245b31804a11aaa75d04774\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-26T11:19:10Z is after 2025-08-24T17:21:41Z" Jan 26 11:19:10 crc kubenswrapper[4867]: I0126 11:19:10.678124 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://613d61cfb62d18a99623d3e4f89763c04f15484b764f8bb0c7a5e4457a3505d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://fa8d05ed772e4273e8962df2230bc8d45db5cfed967165716438d58c54b9e24b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:19:10Z is after 2025-08-24T17:21:41Z" Jan 26 11:19:10 crc kubenswrapper[4867]: I0126 11:19:10.695434 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-hn8xr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dc37e5d1-ba44-4a54-ac36-ab7cdef17212\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:18:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:18:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0560f844140132daa11aa58fcba689fc945d9e6bc67a4fc5598dfaf566749866\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://519d5416896aca5923540078a8bd13f39a190ad4acda3d0bfb3375a5dbfe6b80\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T11:18:39Z\\\",\\\"message\\\":\\\"2026-01-26T11:17:54+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_26c1a4ca-9538-4b99-995f-bb0e07bf0d9b\\\\n2026-01-26T11:17:54+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_26c1a4ca-9538-4b99-995f-bb0e07bf0d9b to /host/opt/cni/bin/\\\\n2026-01-26T11:17:54Z [verbose] multus-daemon started\\\\n2026-01-26T11:17:54Z [verbose] 
Readiness Indicator file check\\\\n2026-01-26T11:18:39Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T11:17:53Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:18:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xrjnj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T11:17:52Z\\\"}}\" for pod \"openshift-multus\"/\"multus-hn8xr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:19:10Z is after 2025-08-24T17:21:41Z" Jan 26 11:19:10 crc kubenswrapper[4867]: I0126 11:19:10.713316 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:48Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:19:10Z is after 2025-08-24T17:21:41Z" Jan 26 11:19:10 crc kubenswrapper[4867]: I0126 11:19:10.724834 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:19:10 crc kubenswrapper[4867]: I0126 11:19:10.724896 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:19:10 crc kubenswrapper[4867]: I0126 11:19:10.724907 4867 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:19:10 crc kubenswrapper[4867]: I0126 11:19:10.724926 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:19:10 crc kubenswrapper[4867]: I0126 11:19:10.724938 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:19:10Z","lastTransitionTime":"2026-01-26T11:19:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 11:19:10 crc kubenswrapper[4867]: I0126 11:19:10.727243 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:54Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:54Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8b0139861b48347f9916ca780499486eda856ef7c08cfe6c57201daf085bf61\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f
799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:19:10Z is after 2025-08-24T17:21:41Z" Jan 26 11:19:10 crc kubenswrapper[4867]: I0126 11:19:10.745605 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-9fjlf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d0cb57c7-fd32-41c2-b873-a3f017b9f1b1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:18:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:18:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:18:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c7012fb0651d46334c26887a02a5c44a8fc67c2ad3539e5321e16b57071b9a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:18:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhrdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://49541a51fced16e579b4cd10875fe7e660d7674508aba3ea44011642241e6556\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://49541a51fced16e579b4cd10875fe7e660d7674508aba3ea44011642241e6556\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T11:17:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T11:17:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhrdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://de0898b43ff7857dd40569f0552c8802c9bb5ecdf3cbd23f552dfb402a02044c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://de0898b43ff7857dd40569f0552c8802c9bb5ecdf3cbd23f552dfb402a02044c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T11:17:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T11:17:55Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhrdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f299de50c4b1a26c4ed82f6f7b17c063e00b8a9de478fa0adafb4abb50195c5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f299de50c4b1a26c4ed82f6f7b17c063e00b8a9de478fa0adafb4abb50195c5f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T11:17:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T11:17:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhrdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5bf54
cdee66861e0419af5e4cabd6e8410b771df717f2e958f5eda8d14c47862\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5bf54cdee66861e0419af5e4cabd6e8410b771df717f2e958f5eda8d14c47862\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T11:17:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T11:17:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhrdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d802b7e0dce47eace09ec80bf6299fbe57b588891ebab7f34de835f6a7314ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3d802b7e0dce47eace09ec80bf6299fbe57b588891ebab7f34de835f6a7314ac\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T11:17:58Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-26T11:17:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhrdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e77fa1081a2d8d26fe07f92196b85dd5b8eb28583ce5489e2f5cbcfc6fff8780\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e77fa1081a2d8d26fe07f92196b85dd5b8eb28583ce5489e2f5cbcfc6fff8780\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T11:18:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T11:18:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhrdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T11:17:52Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-9fjlf\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:19:10Z is after 2025-08-24T17:21:41Z" Jan 26 11:19:10 crc kubenswrapper[4867]: I0126 11:19:10.758121 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-nxkwj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"53ae46dc-30ef-4dfd-b80e-bacd7542634f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a6b647bab472836bbf6aebd01d20d186c5a3fb95f20cc9f44ec837d93c7df617\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"
2026-01-26T11:17:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b5h2x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T11:17:54Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-nxkwj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:19:10Z is after 2025-08-24T17:21:41Z" Jan 26 11:19:10 crc kubenswrapper[4867]: I0126 11:19:10.770844 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-nmdmx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed024510-edc6-4306-b54b-63facba64419\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:18:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:18:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:18:06Z\\\",\\\"message\\\":\\\"containers with unready 
status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:18:06Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lvcgj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lvcgj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T11:18:06Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-nmdmx\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:19:10Z is after 2025-08-24T17:21:41Z" Jan 26 11:19:10 crc kubenswrapper[4867]: I0126 11:19:10.773582 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ed024510-edc6-4306-b54b-63facba64419-metrics-certs\") pod \"network-metrics-daemon-nmdmx\" (UID: \"ed024510-edc6-4306-b54b-63facba64419\") " pod="openshift-multus/network-metrics-daemon-nmdmx" Jan 26 11:19:10 crc kubenswrapper[4867]: E0126 11:19:10.773764 4867 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 26 11:19:10 crc kubenswrapper[4867]: E0126 11:19:10.774293 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ed024510-edc6-4306-b54b-63facba64419-metrics-certs podName:ed024510-edc6-4306-b54b-63facba64419 nodeName:}" failed. No retries permitted until 2026-01-26 11:20:14.774252859 +0000 UTC m=+164.472827769 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/ed024510-edc6-4306-b54b-63facba64419-metrics-certs") pod "network-metrics-daemon-nmdmx" (UID: "ed024510-edc6-4306-b54b-63facba64419") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 26 11:19:10 crc kubenswrapper[4867]: I0126 11:19:10.796048 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4fbb2033-c5c9-48d1-971f-252b8982e64c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b260f2517962ffaecebf86c2b72c91486b825ca898f907378e8f8cea16d1db92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernete
s/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6843bd212ce4f17d2ae4d53441d56f7be28f8f49c8994458d611159101e193a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2aba77c7c6a2f5450fa60356c3eccf816726f0074d65a1d5805c4278d31157c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e941d
b67a8021f5eb4eb6e395c2369d052a4cde25d67eb1885768a064185d8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc9e90f1b0e6c09e4595a7e450325550012dc52c6a6fc4a95d14b37c0f939ce7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e22739a3c06bddfda493f25aca23cf47d71c8bde27a6b4bb505592b42350752\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6
a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0e22739a3c06bddfda493f25aca23cf47d71c8bde27a6b4bb505592b42350752\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T11:17:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T11:17:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://80b44caec3547caebca126742ca337ad1851edc3e9474127528a9e5184b5ec23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://80b44caec3547caebca126742ca337ad1851edc3e9474127528a9e5184b5ec23\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T11:17:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T11:17:32Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://83f7cf77acd26e7af095c4a5432efde85eef6dd5e434141cb5d6abd12770968e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\
\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://83f7cf77acd26e7af095c4a5432efde85eef6dd5e434141cb5d6abd12770968e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T11:17:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T11:17:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T11:17:30Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:19:10Z is after 2025-08-24T17:21:41Z" Jan 26 11:19:10 crc kubenswrapper[4867]: I0126 11:19:10.808551 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-g6cth" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"115cad9f-057f-4e63-b408-8fa7a358a191\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ff5997c5ecb7b6a4257e45de25c7b8f3cbe39cc5efe49cf2e2baff5447a947b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4wqzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a97a794991491a7b7ac8c0d20d0fa2f4f36c8ba8
82962b5c203933431d324105\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4wqzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T11:17:52Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-g6cth\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:19:10Z is after 2025-08-24T17:21:41Z" Jan 26 11:19:10 crc kubenswrapper[4867]: I0126 11:19:10.822520 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:48Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:19:10Z is after 2025-08-24T17:21:41Z" Jan 26 11:19:10 crc kubenswrapper[4867]: I0126 11:19:10.827781 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:19:10 crc kubenswrapper[4867]: I0126 11:19:10.827862 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:19:10 crc kubenswrapper[4867]: I0126 11:19:10.827882 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:19:10 crc kubenswrapper[4867]: I0126 11:19:10.827908 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:19:10 crc kubenswrapper[4867]: I0126 11:19:10.827928 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:19:10Z","lastTransitionTime":"2026-01-26T11:19:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 11:19:10 crc kubenswrapper[4867]: I0126 11:19:10.840709 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:48Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:19:10Z is after 2025-08-24T17:21:41Z" Jan 26 11:19:10 crc kubenswrapper[4867]: I0126 11:19:10.853982 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-wmdmh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6ad862fa-4af9-49f7-a629-ebf54a83ca45\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://726513fd6aefd76b58a4c8cf08fa4588aadcdb02e8dbe3bb4f23004f11eb29dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6hvh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T11:17:51Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-wmdmh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:19:10Z is after 2025-08-24T17:21:41Z" Jan 26 11:19:10 crc kubenswrapper[4867]: I0126 11:19:10.875786 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-p8ngn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4a3be637-cf04-4c55-bf72-67fdad83cc44\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:52Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T11:17:52Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c0a7bea27cb1d32235548b7f7e457dbecaf0ac88bdd9866302c758680990d4af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cn6f7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9025121ad416b7b6ff2f2eb7ecb3961eccbbae7b5ce4b394161ad99c5901199\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cn6f7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://adb68b135bd42f78556598af17aa21dc7daff1246e821ca51cb683e46974a85a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cn6f7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://30fe31f1d314dace6e6ee0bc31b2127ce6e6db198975c78505b0f6f07aa6f30c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:55Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cn6f7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://99df8b0b5fa7178489716f40f806caff969c5045ab573ae343b222ed80bf0799\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cn6f7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://192f11212d63cdabfda1649589946ca65e5c4a38805c85b94e4277f2e8c7bf57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cn6f7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d232dca466c33576da8850254342ae7f79c841e0f08fb49fedf655f3efe3d3d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5d232dca466c33576da8850254342ae7f79c841e0f08fb49fedf655f3efe3d3d\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T11:19:01Z\\\",\\\"message\\\":\\\"1.EgressIP event handler 8\\\\nI0126 11:19:01.199913 6990 factory.go:1336] Added *v1.EgressFirewall event handler 9\\\\nI0126 11:19:01.200005 6990 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI0126 11:19:01.200045 6990 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0126 11:19:01.200056 6990 
handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0126 11:19:01.200136 6990 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0126 11:19:01.200153 6990 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0126 11:19:01.200167 6990 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0126 11:19:01.200295 6990 factory.go:656] Stopping watch factory\\\\nI0126 11:19:01.200317 6990 ovnkube.go:599] Stopped ovnkube\\\\nI0126 11:19:01.200361 6990 handler.go:208] Removed *v1.Node event handler 2\\\\nI0126 11:19:01.200375 6990 handler.go:208] Removed *v1.Node event handler 7\\\\nI0126 11:19:01.200388 6990 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0126 11:19:01.200396 6990 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0126 11:19:01.200404 6990 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0126 11:19:01.200417 6990 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0126 11:19:01.200481 6990 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T11:19:00Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-p8ngn_openshift-ovn-kubernetes(4a3be637-cf04-4c55-bf72-67fdad83cc44)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cn6f7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b58f62893fb0fc0bd355ef42fb5f15830912cfd3a03b5266cb3c3171dbb8bcb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T11:17:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cn6f7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://388ea9d9185ed0fed9cbb0da08ab9aa6fcf1d81192fe2646cd51b54245c4e34b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://388ea9d9185ed0fed9
cbb0da08ab9aa6fcf1d81192fe2646cd51b54245c4e34b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T11:17:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T11:17:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cn6f7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T11:17:52Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-p8ngn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T11:19:10Z is after 2025-08-24T17:21:41Z" Jan 26 11:19:10 crc kubenswrapper[4867]: I0126 11:19:10.931462 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:19:10 crc kubenswrapper[4867]: I0126 11:19:10.931538 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:19:10 crc kubenswrapper[4867]: I0126 11:19:10.931556 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:19:10 crc kubenswrapper[4867]: I0126 11:19:10.931582 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:19:10 crc kubenswrapper[4867]: I0126 11:19:10.931597 4867 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:19:10Z","lastTransitionTime":"2026-01-26T11:19:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 11:19:11 crc kubenswrapper[4867]: I0126 11:19:11.034378 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:19:11 crc kubenswrapper[4867]: I0126 11:19:11.034465 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:19:11 crc kubenswrapper[4867]: I0126 11:19:11.034478 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:19:11 crc kubenswrapper[4867]: I0126 11:19:11.034499 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:19:11 crc kubenswrapper[4867]: I0126 11:19:11.034513 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:19:11Z","lastTransitionTime":"2026-01-26T11:19:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:19:11 crc kubenswrapper[4867]: I0126 11:19:11.138298 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:19:11 crc kubenswrapper[4867]: I0126 11:19:11.138345 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:19:11 crc kubenswrapper[4867]: I0126 11:19:11.138354 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:19:11 crc kubenswrapper[4867]: I0126 11:19:11.138374 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:19:11 crc kubenswrapper[4867]: I0126 11:19:11.138385 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:19:11Z","lastTransitionTime":"2026-01-26T11:19:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:19:11 crc kubenswrapper[4867]: I0126 11:19:11.241195 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:19:11 crc kubenswrapper[4867]: I0126 11:19:11.241270 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:19:11 crc kubenswrapper[4867]: I0126 11:19:11.241292 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:19:11 crc kubenswrapper[4867]: I0126 11:19:11.241311 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:19:11 crc kubenswrapper[4867]: I0126 11:19:11.241323 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:19:11Z","lastTransitionTime":"2026-01-26T11:19:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:19:11 crc kubenswrapper[4867]: I0126 11:19:11.344024 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:19:11 crc kubenswrapper[4867]: I0126 11:19:11.344605 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:19:11 crc kubenswrapper[4867]: I0126 11:19:11.344779 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:19:11 crc kubenswrapper[4867]: I0126 11:19:11.344931 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:19:11 crc kubenswrapper[4867]: I0126 11:19:11.345077 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:19:11Z","lastTransitionTime":"2026-01-26T11:19:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:19:11 crc kubenswrapper[4867]: I0126 11:19:11.449020 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:19:11 crc kubenswrapper[4867]: I0126 11:19:11.449060 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:19:11 crc kubenswrapper[4867]: I0126 11:19:11.449074 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:19:11 crc kubenswrapper[4867]: I0126 11:19:11.449093 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:19:11 crc kubenswrapper[4867]: I0126 11:19:11.449104 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:19:11Z","lastTransitionTime":"2026-01-26T11:19:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:19:11 crc kubenswrapper[4867]: I0126 11:19:11.551158 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:19:11 crc kubenswrapper[4867]: I0126 11:19:11.551198 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:19:11 crc kubenswrapper[4867]: I0126 11:19:11.551209 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:19:11 crc kubenswrapper[4867]: I0126 11:19:11.551244 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:19:11 crc kubenswrapper[4867]: I0126 11:19:11.551255 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:19:11Z","lastTransitionTime":"2026-01-26T11:19:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:19:11 crc kubenswrapper[4867]: I0126 11:19:11.585376 4867 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-16 17:31:10.30849767 +0000 UTC Jan 26 11:19:11 crc kubenswrapper[4867]: I0126 11:19:11.654245 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:19:11 crc kubenswrapper[4867]: I0126 11:19:11.654317 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:19:11 crc kubenswrapper[4867]: I0126 11:19:11.654331 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:19:11 crc kubenswrapper[4867]: I0126 11:19:11.654352 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:19:11 crc kubenswrapper[4867]: I0126 11:19:11.654364 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:19:11Z","lastTransitionTime":"2026-01-26T11:19:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:19:11 crc kubenswrapper[4867]: I0126 11:19:11.758001 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:19:11 crc kubenswrapper[4867]: I0126 11:19:11.758051 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:19:11 crc kubenswrapper[4867]: I0126 11:19:11.758064 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:19:11 crc kubenswrapper[4867]: I0126 11:19:11.758108 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:19:11 crc kubenswrapper[4867]: I0126 11:19:11.758120 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:19:11Z","lastTransitionTime":"2026-01-26T11:19:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:19:11 crc kubenswrapper[4867]: I0126 11:19:11.861012 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:19:11 crc kubenswrapper[4867]: I0126 11:19:11.861076 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:19:11 crc kubenswrapper[4867]: I0126 11:19:11.861092 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:19:11 crc kubenswrapper[4867]: I0126 11:19:11.861111 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:19:11 crc kubenswrapper[4867]: I0126 11:19:11.861125 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:19:11Z","lastTransitionTime":"2026-01-26T11:19:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:19:11 crc kubenswrapper[4867]: I0126 11:19:11.963607 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:19:11 crc kubenswrapper[4867]: I0126 11:19:11.963665 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:19:11 crc kubenswrapper[4867]: I0126 11:19:11.963681 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:19:11 crc kubenswrapper[4867]: I0126 11:19:11.963701 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:19:11 crc kubenswrapper[4867]: I0126 11:19:11.963711 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:19:11Z","lastTransitionTime":"2026-01-26T11:19:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:19:12 crc kubenswrapper[4867]: I0126 11:19:12.067357 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:19:12 crc kubenswrapper[4867]: I0126 11:19:12.067419 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:19:12 crc kubenswrapper[4867]: I0126 11:19:12.067432 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:19:12 crc kubenswrapper[4867]: I0126 11:19:12.067451 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:19:12 crc kubenswrapper[4867]: I0126 11:19:12.067464 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:19:12Z","lastTransitionTime":"2026-01-26T11:19:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:19:12 crc kubenswrapper[4867]: I0126 11:19:12.170076 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:19:12 crc kubenswrapper[4867]: I0126 11:19:12.170153 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:19:12 crc kubenswrapper[4867]: I0126 11:19:12.170166 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:19:12 crc kubenswrapper[4867]: I0126 11:19:12.170182 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:19:12 crc kubenswrapper[4867]: I0126 11:19:12.170191 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:19:12Z","lastTransitionTime":"2026-01-26T11:19:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:19:12 crc kubenswrapper[4867]: I0126 11:19:12.273344 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:19:12 crc kubenswrapper[4867]: I0126 11:19:12.273429 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:19:12 crc kubenswrapper[4867]: I0126 11:19:12.273449 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:19:12 crc kubenswrapper[4867]: I0126 11:19:12.273474 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:19:12 crc kubenswrapper[4867]: I0126 11:19:12.273488 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:19:12Z","lastTransitionTime":"2026-01-26T11:19:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:19:12 crc kubenswrapper[4867]: I0126 11:19:12.376702 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:19:12 crc kubenswrapper[4867]: I0126 11:19:12.376767 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:19:12 crc kubenswrapper[4867]: I0126 11:19:12.376785 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:19:12 crc kubenswrapper[4867]: I0126 11:19:12.376809 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:19:12 crc kubenswrapper[4867]: I0126 11:19:12.376828 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:19:12Z","lastTransitionTime":"2026-01-26T11:19:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:19:12 crc kubenswrapper[4867]: I0126 11:19:12.480487 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:19:12 crc kubenswrapper[4867]: I0126 11:19:12.480554 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:19:12 crc kubenswrapper[4867]: I0126 11:19:12.480576 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:19:12 crc kubenswrapper[4867]: I0126 11:19:12.480606 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:19:12 crc kubenswrapper[4867]: I0126 11:19:12.480631 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:19:12Z","lastTransitionTime":"2026-01-26T11:19:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 11:19:12 crc kubenswrapper[4867]: I0126 11:19:12.563801 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 11:19:12 crc kubenswrapper[4867]: I0126 11:19:12.563828 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 11:19:12 crc kubenswrapper[4867]: I0126 11:19:12.563914 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-nmdmx" Jan 26 11:19:12 crc kubenswrapper[4867]: E0126 11:19:12.564179 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 11:19:12 crc kubenswrapper[4867]: I0126 11:19:12.564303 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 11:19:12 crc kubenswrapper[4867]: E0126 11:19:12.564534 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-nmdmx" podUID="ed024510-edc6-4306-b54b-63facba64419" Jan 26 11:19:12 crc kubenswrapper[4867]: E0126 11:19:12.564648 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 11:19:12 crc kubenswrapper[4867]: E0126 11:19:12.564775 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 11:19:12 crc kubenswrapper[4867]: I0126 11:19:12.583256 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:19:12 crc kubenswrapper[4867]: I0126 11:19:12.583321 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:19:12 crc kubenswrapper[4867]: I0126 11:19:12.583331 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:19:12 crc kubenswrapper[4867]: I0126 11:19:12.583354 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:19:12 crc kubenswrapper[4867]: I0126 11:19:12.583368 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:19:12Z","lastTransitionTime":"2026-01-26T11:19:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:19:12 crc kubenswrapper[4867]: I0126 11:19:12.586878 4867 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-15 10:31:56.80121518 +0000 UTC Jan 26 11:19:12 crc kubenswrapper[4867]: I0126 11:19:12.685638 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:19:12 crc kubenswrapper[4867]: I0126 11:19:12.685725 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:19:12 crc kubenswrapper[4867]: I0126 11:19:12.685742 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:19:12 crc kubenswrapper[4867]: I0126 11:19:12.685766 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:19:12 crc kubenswrapper[4867]: I0126 11:19:12.685818 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:19:12Z","lastTransitionTime":"2026-01-26T11:19:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:19:12 crc kubenswrapper[4867]: I0126 11:19:12.789010 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:19:12 crc kubenswrapper[4867]: I0126 11:19:12.789050 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:19:12 crc kubenswrapper[4867]: I0126 11:19:12.789060 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:19:12 crc kubenswrapper[4867]: I0126 11:19:12.789078 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:19:12 crc kubenswrapper[4867]: I0126 11:19:12.789090 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:19:12Z","lastTransitionTime":"2026-01-26T11:19:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:19:12 crc kubenswrapper[4867]: I0126 11:19:12.891492 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:19:12 crc kubenswrapper[4867]: I0126 11:19:12.891533 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:19:12 crc kubenswrapper[4867]: I0126 11:19:12.891544 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:19:12 crc kubenswrapper[4867]: I0126 11:19:12.891561 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:19:12 crc kubenswrapper[4867]: I0126 11:19:12.891573 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:19:12Z","lastTransitionTime":"2026-01-26T11:19:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:19:12 crc kubenswrapper[4867]: I0126 11:19:12.994563 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:19:12 crc kubenswrapper[4867]: I0126 11:19:12.994602 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:19:12 crc kubenswrapper[4867]: I0126 11:19:12.994610 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:19:12 crc kubenswrapper[4867]: I0126 11:19:12.994627 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:19:12 crc kubenswrapper[4867]: I0126 11:19:12.994639 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:19:12Z","lastTransitionTime":"2026-01-26T11:19:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:19:13 crc kubenswrapper[4867]: I0126 11:19:13.097934 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:19:13 crc kubenswrapper[4867]: I0126 11:19:13.097995 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:19:13 crc kubenswrapper[4867]: I0126 11:19:13.098010 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:19:13 crc kubenswrapper[4867]: I0126 11:19:13.098035 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:19:13 crc kubenswrapper[4867]: I0126 11:19:13.098052 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:19:13Z","lastTransitionTime":"2026-01-26T11:19:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:19:13 crc kubenswrapper[4867]: I0126 11:19:13.201195 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:19:13 crc kubenswrapper[4867]: I0126 11:19:13.201244 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:19:13 crc kubenswrapper[4867]: I0126 11:19:13.201252 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:19:13 crc kubenswrapper[4867]: I0126 11:19:13.201269 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:19:13 crc kubenswrapper[4867]: I0126 11:19:13.201279 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:19:13Z","lastTransitionTime":"2026-01-26T11:19:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:19:13 crc kubenswrapper[4867]: I0126 11:19:13.303948 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:19:13 crc kubenswrapper[4867]: I0126 11:19:13.304375 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:19:13 crc kubenswrapper[4867]: I0126 11:19:13.304491 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:19:13 crc kubenswrapper[4867]: I0126 11:19:13.304605 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:19:13 crc kubenswrapper[4867]: I0126 11:19:13.304702 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:19:13Z","lastTransitionTime":"2026-01-26T11:19:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:19:13 crc kubenswrapper[4867]: I0126 11:19:13.408013 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:19:13 crc kubenswrapper[4867]: I0126 11:19:13.408366 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:19:13 crc kubenswrapper[4867]: I0126 11:19:13.408449 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:19:13 crc kubenswrapper[4867]: I0126 11:19:13.408517 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:19:13 crc kubenswrapper[4867]: I0126 11:19:13.408590 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:19:13Z","lastTransitionTime":"2026-01-26T11:19:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:19:13 crc kubenswrapper[4867]: I0126 11:19:13.511409 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:19:13 crc kubenswrapper[4867]: I0126 11:19:13.511487 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:19:13 crc kubenswrapper[4867]: I0126 11:19:13.511499 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:19:13 crc kubenswrapper[4867]: I0126 11:19:13.511518 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:19:13 crc kubenswrapper[4867]: I0126 11:19:13.511530 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:19:13Z","lastTransitionTime":"2026-01-26T11:19:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:19:13 crc kubenswrapper[4867]: I0126 11:19:13.587451 4867 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-22 05:42:35.967779784 +0000 UTC Jan 26 11:19:13 crc kubenswrapper[4867]: I0126 11:19:13.613918 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:19:13 crc kubenswrapper[4867]: I0126 11:19:13.613996 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:19:13 crc kubenswrapper[4867]: I0126 11:19:13.614009 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:19:13 crc kubenswrapper[4867]: I0126 11:19:13.614033 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:19:13 crc kubenswrapper[4867]: I0126 11:19:13.614048 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:19:13Z","lastTransitionTime":"2026-01-26T11:19:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:19:13 crc kubenswrapper[4867]: I0126 11:19:13.717382 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:19:13 crc kubenswrapper[4867]: I0126 11:19:13.717450 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:19:13 crc kubenswrapper[4867]: I0126 11:19:13.717460 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:19:13 crc kubenswrapper[4867]: I0126 11:19:13.717481 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:19:13 crc kubenswrapper[4867]: I0126 11:19:13.717496 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:19:13Z","lastTransitionTime":"2026-01-26T11:19:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:19:13 crc kubenswrapper[4867]: I0126 11:19:13.820433 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:19:13 crc kubenswrapper[4867]: I0126 11:19:13.820480 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:19:13 crc kubenswrapper[4867]: I0126 11:19:13.820491 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:19:13 crc kubenswrapper[4867]: I0126 11:19:13.820506 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:19:13 crc kubenswrapper[4867]: I0126 11:19:13.820518 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:19:13Z","lastTransitionTime":"2026-01-26T11:19:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:19:13 crc kubenswrapper[4867]: I0126 11:19:13.923664 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:19:13 crc kubenswrapper[4867]: I0126 11:19:13.923722 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:19:13 crc kubenswrapper[4867]: I0126 11:19:13.923732 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:19:13 crc kubenswrapper[4867]: I0126 11:19:13.923751 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:19:13 crc kubenswrapper[4867]: I0126 11:19:13.923764 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:19:13Z","lastTransitionTime":"2026-01-26T11:19:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:19:14 crc kubenswrapper[4867]: I0126 11:19:14.026096 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:19:14 crc kubenswrapper[4867]: I0126 11:19:14.026141 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:19:14 crc kubenswrapper[4867]: I0126 11:19:14.026151 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:19:14 crc kubenswrapper[4867]: I0126 11:19:14.026166 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:19:14 crc kubenswrapper[4867]: I0126 11:19:14.026178 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:19:14Z","lastTransitionTime":"2026-01-26T11:19:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:19:14 crc kubenswrapper[4867]: I0126 11:19:14.129066 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:19:14 crc kubenswrapper[4867]: I0126 11:19:14.129104 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:19:14 crc kubenswrapper[4867]: I0126 11:19:14.129113 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:19:14 crc kubenswrapper[4867]: I0126 11:19:14.129129 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:19:14 crc kubenswrapper[4867]: I0126 11:19:14.129141 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:19:14Z","lastTransitionTime":"2026-01-26T11:19:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:19:14 crc kubenswrapper[4867]: I0126 11:19:14.232265 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:19:14 crc kubenswrapper[4867]: I0126 11:19:14.232323 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:19:14 crc kubenswrapper[4867]: I0126 11:19:14.232336 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:19:14 crc kubenswrapper[4867]: I0126 11:19:14.232355 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:19:14 crc kubenswrapper[4867]: I0126 11:19:14.232368 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:19:14Z","lastTransitionTime":"2026-01-26T11:19:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:19:14 crc kubenswrapper[4867]: I0126 11:19:14.334959 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:19:14 crc kubenswrapper[4867]: I0126 11:19:14.335024 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:19:14 crc kubenswrapper[4867]: I0126 11:19:14.335058 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:19:14 crc kubenswrapper[4867]: I0126 11:19:14.335086 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:19:14 crc kubenswrapper[4867]: I0126 11:19:14.335107 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:19:14Z","lastTransitionTime":"2026-01-26T11:19:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:19:14 crc kubenswrapper[4867]: I0126 11:19:14.437609 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:19:14 crc kubenswrapper[4867]: I0126 11:19:14.437683 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:19:14 crc kubenswrapper[4867]: I0126 11:19:14.437705 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:19:14 crc kubenswrapper[4867]: I0126 11:19:14.437734 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:19:14 crc kubenswrapper[4867]: I0126 11:19:14.437756 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:19:14Z","lastTransitionTime":"2026-01-26T11:19:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:19:14 crc kubenswrapper[4867]: I0126 11:19:14.541348 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:19:14 crc kubenswrapper[4867]: I0126 11:19:14.541700 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:19:14 crc kubenswrapper[4867]: I0126 11:19:14.541789 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:19:14 crc kubenswrapper[4867]: I0126 11:19:14.541881 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:19:14 crc kubenswrapper[4867]: I0126 11:19:14.541963 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:19:14Z","lastTransitionTime":"2026-01-26T11:19:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 11:19:14 crc kubenswrapper[4867]: I0126 11:19:14.563068 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-nmdmx" Jan 26 11:19:14 crc kubenswrapper[4867]: I0126 11:19:14.563189 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 11:19:14 crc kubenswrapper[4867]: E0126 11:19:14.563254 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-nmdmx" podUID="ed024510-edc6-4306-b54b-63facba64419" Jan 26 11:19:14 crc kubenswrapper[4867]: E0126 11:19:14.563345 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 11:19:14 crc kubenswrapper[4867]: I0126 11:19:14.563065 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 11:19:14 crc kubenswrapper[4867]: I0126 11:19:14.563425 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 11:19:14 crc kubenswrapper[4867]: E0126 11:19:14.563451 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 11:19:14 crc kubenswrapper[4867]: E0126 11:19:14.563633 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 11:19:14 crc kubenswrapper[4867]: I0126 11:19:14.587914 4867 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-20 01:57:40.221319158 +0000 UTC Jan 26 11:19:14 crc kubenswrapper[4867]: I0126 11:19:14.644979 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:19:14 crc kubenswrapper[4867]: I0126 11:19:14.645039 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:19:14 crc kubenswrapper[4867]: I0126 11:19:14.645051 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:19:14 crc kubenswrapper[4867]: I0126 11:19:14.645073 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:19:14 crc kubenswrapper[4867]: I0126 11:19:14.645087 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:19:14Z","lastTransitionTime":"2026-01-26T11:19:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:19:14 crc kubenswrapper[4867]: I0126 11:19:14.748741 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:19:14 crc kubenswrapper[4867]: I0126 11:19:14.748778 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:19:14 crc kubenswrapper[4867]: I0126 11:19:14.748789 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:19:14 crc kubenswrapper[4867]: I0126 11:19:14.748804 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:19:14 crc kubenswrapper[4867]: I0126 11:19:14.748815 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:19:14Z","lastTransitionTime":"2026-01-26T11:19:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:19:14 crc kubenswrapper[4867]: I0126 11:19:14.851669 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:19:14 crc kubenswrapper[4867]: I0126 11:19:14.851721 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:19:14 crc kubenswrapper[4867]: I0126 11:19:14.851735 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:19:14 crc kubenswrapper[4867]: I0126 11:19:14.851755 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:19:14 crc kubenswrapper[4867]: I0126 11:19:14.851770 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:19:14Z","lastTransitionTime":"2026-01-26T11:19:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:19:14 crc kubenswrapper[4867]: I0126 11:19:14.954723 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:19:14 crc kubenswrapper[4867]: I0126 11:19:14.954817 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:19:14 crc kubenswrapper[4867]: I0126 11:19:14.954836 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:19:14 crc kubenswrapper[4867]: I0126 11:19:14.954861 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:19:14 crc kubenswrapper[4867]: I0126 11:19:14.954909 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:19:14Z","lastTransitionTime":"2026-01-26T11:19:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:19:15 crc kubenswrapper[4867]: I0126 11:19:15.058146 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:19:15 crc kubenswrapper[4867]: I0126 11:19:15.058294 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:19:15 crc kubenswrapper[4867]: I0126 11:19:15.058320 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:19:15 crc kubenswrapper[4867]: I0126 11:19:15.058395 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:19:15 crc kubenswrapper[4867]: I0126 11:19:15.058422 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:19:15Z","lastTransitionTime":"2026-01-26T11:19:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:19:15 crc kubenswrapper[4867]: I0126 11:19:15.161105 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:19:15 crc kubenswrapper[4867]: I0126 11:19:15.161152 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:19:15 crc kubenswrapper[4867]: I0126 11:19:15.161164 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:19:15 crc kubenswrapper[4867]: I0126 11:19:15.161179 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:19:15 crc kubenswrapper[4867]: I0126 11:19:15.161189 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:19:15Z","lastTransitionTime":"2026-01-26T11:19:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:19:15 crc kubenswrapper[4867]: I0126 11:19:15.264001 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:19:15 crc kubenswrapper[4867]: I0126 11:19:15.264044 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:19:15 crc kubenswrapper[4867]: I0126 11:19:15.264057 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:19:15 crc kubenswrapper[4867]: I0126 11:19:15.264072 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:19:15 crc kubenswrapper[4867]: I0126 11:19:15.264082 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:19:15Z","lastTransitionTime":"2026-01-26T11:19:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:19:15 crc kubenswrapper[4867]: I0126 11:19:15.367334 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:19:15 crc kubenswrapper[4867]: I0126 11:19:15.367385 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:19:15 crc kubenswrapper[4867]: I0126 11:19:15.367397 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:19:15 crc kubenswrapper[4867]: I0126 11:19:15.367417 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:19:15 crc kubenswrapper[4867]: I0126 11:19:15.367453 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:19:15Z","lastTransitionTime":"2026-01-26T11:19:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:19:15 crc kubenswrapper[4867]: I0126 11:19:15.469761 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:19:15 crc kubenswrapper[4867]: I0126 11:19:15.469804 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:19:15 crc kubenswrapper[4867]: I0126 11:19:15.469812 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:19:15 crc kubenswrapper[4867]: I0126 11:19:15.469826 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:19:15 crc kubenswrapper[4867]: I0126 11:19:15.469837 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:19:15Z","lastTransitionTime":"2026-01-26T11:19:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:19:15 crc kubenswrapper[4867]: I0126 11:19:15.556240 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:19:15 crc kubenswrapper[4867]: I0126 11:19:15.556277 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:19:15 crc kubenswrapper[4867]: I0126 11:19:15.556287 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:19:15 crc kubenswrapper[4867]: I0126 11:19:15.556302 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:19:15 crc kubenswrapper[4867]: I0126 11:19:15.556312 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:19:15Z","lastTransitionTime":"2026-01-26T11:19:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:19:15 crc kubenswrapper[4867]: I0126 11:19:15.578951 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 11:19:15 crc kubenswrapper[4867]: I0126 11:19:15.579375 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 11:19:15 crc kubenswrapper[4867]: I0126 11:19:15.579527 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 11:19:15 crc kubenswrapper[4867]: I0126 11:19:15.579618 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 11:19:15 crc kubenswrapper[4867]: I0126 11:19:15.579731 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T11:19:15Z","lastTransitionTime":"2026-01-26T11:19:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 11:19:15 crc kubenswrapper[4867]: I0126 11:19:15.588938 4867 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-20 18:35:54.343857132 +0000 UTC Jan 26 11:19:15 crc kubenswrapper[4867]: I0126 11:19:15.589625 4867 certificate_manager.go:356] kubernetes.io/kubelet-serving: Rotating certificates Jan 26 11:19:15 crc kubenswrapper[4867]: I0126 11:19:15.605492 4867 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146 Jan 26 11:19:15 crc kubenswrapper[4867]: I0126 11:19:15.607966 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-version/cluster-version-operator-5c965bbfc6-6fvjn"] Jan 26 11:19:15 crc kubenswrapper[4867]: I0126 11:19:15.608648 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-6fvjn" Jan 26 11:19:15 crc kubenswrapper[4867]: I0126 11:19:15.611281 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Jan 26 11:19:15 crc kubenswrapper[4867]: I0126 11:19:15.612061 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Jan 26 11:19:15 crc kubenswrapper[4867]: I0126 11:19:15.614906 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Jan 26 11:19:15 crc kubenswrapper[4867]: I0126 11:19:15.615448 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Jan 26 11:19:15 crc kubenswrapper[4867]: I0126 11:19:15.663685 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/etcd-crc" podStartSLOduration=84.663665383 
podStartE2EDuration="1m24.663665383s" podCreationTimestamp="2026-01-26 11:17:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 11:19:15.646539424 +0000 UTC m=+105.345114334" watchObservedRunningTime="2026-01-26 11:19:15.663665383 +0000 UTC m=+105.362240293" Jan 26 11:19:15 crc kubenswrapper[4867]: I0126 11:19:15.663902 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-daemon-g6cth" podStartSLOduration=84.66389771 podStartE2EDuration="1m24.66389771s" podCreationTimestamp="2026-01-26 11:17:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 11:19:15.663602761 +0000 UTC m=+105.362177671" watchObservedRunningTime="2026-01-26 11:19:15.66389771 +0000 UTC m=+105.362472620" Jan 26 11:19:15 crc kubenswrapper[4867]: I0126 11:19:15.731892 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cd2d9354-657f-4dcf-a4d7-8d9ffdf24c0a-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-6fvjn\" (UID: \"cd2d9354-657f-4dcf-a4d7-8d9ffdf24c0a\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-6fvjn" Jan 26 11:19:15 crc kubenswrapper[4867]: I0126 11:19:15.731966 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/cd2d9354-657f-4dcf-a4d7-8d9ffdf24c0a-service-ca\") pod \"cluster-version-operator-5c965bbfc6-6fvjn\" (UID: \"cd2d9354-657f-4dcf-a4d7-8d9ffdf24c0a\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-6fvjn" Jan 26 11:19:15 crc kubenswrapper[4867]: I0126 11:19:15.732050 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access\" (UniqueName: \"kubernetes.io/projected/cd2d9354-657f-4dcf-a4d7-8d9ffdf24c0a-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-6fvjn\" (UID: \"cd2d9354-657f-4dcf-a4d7-8d9ffdf24c0a\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-6fvjn" Jan 26 11:19:15 crc kubenswrapper[4867]: I0126 11:19:15.732287 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/cd2d9354-657f-4dcf-a4d7-8d9ffdf24c0a-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-6fvjn\" (UID: \"cd2d9354-657f-4dcf-a4d7-8d9ffdf24c0a\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-6fvjn" Jan 26 11:19:15 crc kubenswrapper[4867]: I0126 11:19:15.732357 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/cd2d9354-657f-4dcf-a4d7-8d9ffdf24c0a-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-6fvjn\" (UID: \"cd2d9354-657f-4dcf-a4d7-8d9ffdf24c0a\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-6fvjn" Jan 26 11:19:15 crc kubenswrapper[4867]: I0126 11:19:15.742109 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/node-resolver-wmdmh" podStartSLOduration=84.742072364 podStartE2EDuration="1m24.742072364s" podCreationTimestamp="2026-01-26 11:17:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 11:19:15.741319335 +0000 UTC m=+105.439894275" watchObservedRunningTime="2026-01-26 11:19:15.742072364 +0000 UTC m=+105.440647294" Jan 26 11:19:15 crc kubenswrapper[4867]: I0126 11:19:15.773660 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/kube-controller-manager-crc" 
podStartSLOduration=85.773639495 podStartE2EDuration="1m25.773639495s" podCreationTimestamp="2026-01-26 11:17:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 11:19:15.758768292 +0000 UTC m=+105.457343202" watchObservedRunningTime="2026-01-26 11:19:15.773639495 +0000 UTC m=+105.472214405" Jan 26 11:19:15 crc kubenswrapper[4867]: I0126 11:19:15.810740 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-hn8xr" podStartSLOduration=84.810708315 podStartE2EDuration="1m24.810708315s" podCreationTimestamp="2026-01-26 11:17:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 11:19:15.810052289 +0000 UTC m=+105.508627199" watchObservedRunningTime="2026-01-26 11:19:15.810708315 +0000 UTC m=+105.509283225" Jan 26 11:19:15 crc kubenswrapper[4867]: I0126 11:19:15.826024 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-nbvlt" podStartSLOduration=83.826002867 podStartE2EDuration="1m23.826002867s" podCreationTimestamp="2026-01-26 11:17:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 11:19:15.824078478 +0000 UTC m=+105.522653388" watchObservedRunningTime="2026-01-26 11:19:15.826002867 +0000 UTC m=+105.524577777" Jan 26 11:19:15 crc kubenswrapper[4867]: I0126 11:19:15.833908 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/cd2d9354-657f-4dcf-a4d7-8d9ffdf24c0a-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-6fvjn\" (UID: \"cd2d9354-657f-4dcf-a4d7-8d9ffdf24c0a\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-6fvjn" Jan 26 11:19:15 crc 
kubenswrapper[4867]: I0126 11:19:15.833966 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/cd2d9354-657f-4dcf-a4d7-8d9ffdf24c0a-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-6fvjn\" (UID: \"cd2d9354-657f-4dcf-a4d7-8d9ffdf24c0a\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-6fvjn" Jan 26 11:19:15 crc kubenswrapper[4867]: I0126 11:19:15.833990 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cd2d9354-657f-4dcf-a4d7-8d9ffdf24c0a-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-6fvjn\" (UID: \"cd2d9354-657f-4dcf-a4d7-8d9ffdf24c0a\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-6fvjn" Jan 26 11:19:15 crc kubenswrapper[4867]: I0126 11:19:15.834024 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/cd2d9354-657f-4dcf-a4d7-8d9ffdf24c0a-service-ca\") pod \"cluster-version-operator-5c965bbfc6-6fvjn\" (UID: \"cd2d9354-657f-4dcf-a4d7-8d9ffdf24c0a\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-6fvjn" Jan 26 11:19:15 crc kubenswrapper[4867]: I0126 11:19:15.834056 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/cd2d9354-657f-4dcf-a4d7-8d9ffdf24c0a-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-6fvjn\" (UID: \"cd2d9354-657f-4dcf-a4d7-8d9ffdf24c0a\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-6fvjn" Jan 26 11:19:15 crc kubenswrapper[4867]: I0126 11:19:15.834125 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/cd2d9354-657f-4dcf-a4d7-8d9ffdf24c0a-etc-cvo-updatepayloads\") pod 
\"cluster-version-operator-5c965bbfc6-6fvjn\" (UID: \"cd2d9354-657f-4dcf-a4d7-8d9ffdf24c0a\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-6fvjn" Jan 26 11:19:15 crc kubenswrapper[4867]: I0126 11:19:15.834123 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/cd2d9354-657f-4dcf-a4d7-8d9ffdf24c0a-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-6fvjn\" (UID: \"cd2d9354-657f-4dcf-a4d7-8d9ffdf24c0a\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-6fvjn" Jan 26 11:19:15 crc kubenswrapper[4867]: I0126 11:19:15.835234 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/cd2d9354-657f-4dcf-a4d7-8d9ffdf24c0a-service-ca\") pod \"cluster-version-operator-5c965bbfc6-6fvjn\" (UID: \"cd2d9354-657f-4dcf-a4d7-8d9ffdf24c0a\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-6fvjn" Jan 26 11:19:15 crc kubenswrapper[4867]: I0126 11:19:15.844980 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cd2d9354-657f-4dcf-a4d7-8d9ffdf24c0a-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-6fvjn\" (UID: \"cd2d9354-657f-4dcf-a4d7-8d9ffdf24c0a\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-6fvjn" Jan 26 11:19:15 crc kubenswrapper[4867]: I0126 11:19:15.859026 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/cd2d9354-657f-4dcf-a4d7-8d9ffdf24c0a-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-6fvjn\" (UID: \"cd2d9354-657f-4dcf-a4d7-8d9ffdf24c0a\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-6fvjn" Jan 26 11:19:15 crc kubenswrapper[4867]: I0126 11:19:15.866069 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" podStartSLOduration=31.866048115 podStartE2EDuration="31.866048115s" podCreationTimestamp="2026-01-26 11:18:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 11:19:15.865808109 +0000 UTC m=+105.564383019" watchObservedRunningTime="2026-01-26 11:19:15.866048115 +0000 UTC m=+105.564623025" Jan 26 11:19:15 crc kubenswrapper[4867]: I0126 11:19:15.866671 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podStartSLOduration=56.866665581 podStartE2EDuration="56.866665581s" podCreationTimestamp="2026-01-26 11:18:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 11:19:15.850378603 +0000 UTC m=+105.548953533" watchObservedRunningTime="2026-01-26 11:19:15.866665581 +0000 UTC m=+105.565240491" Jan 26 11:19:15 crc kubenswrapper[4867]: I0126 11:19:15.896651 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/node-ca-nxkwj" podStartSLOduration=84.896627759 podStartE2EDuration="1m24.896627759s" podCreationTimestamp="2026-01-26 11:17:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 11:19:15.896253449 +0000 UTC m=+105.594828379" watchObservedRunningTime="2026-01-26 11:19:15.896627759 +0000 UTC m=+105.595202669" Jan 26 11:19:15 crc kubenswrapper[4867]: I0126 11:19:15.896854 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=85.896848075 podStartE2EDuration="1m25.896848075s" podCreationTimestamp="2026-01-26 11:17:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 11:19:15.883915533 +0000 UTC m=+105.582490443" watchObservedRunningTime="2026-01-26 11:19:15.896848075 +0000 UTC m=+105.595422985" Jan 26 11:19:15 crc kubenswrapper[4867]: I0126 11:19:15.923310 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-6fvjn" Jan 26 11:19:16 crc kubenswrapper[4867]: I0126 11:19:16.223574 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-6fvjn" event={"ID":"cd2d9354-657f-4dcf-a4d7-8d9ffdf24c0a","Type":"ContainerStarted","Data":"a32b0dec693ab70403dfeaadee3839475261b4f438090fddd65cfdd48e444b4b"} Jan 26 11:19:16 crc kubenswrapper[4867]: I0126 11:19:16.223633 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-6fvjn" event={"ID":"cd2d9354-657f-4dcf-a4d7-8d9ffdf24c0a","Type":"ContainerStarted","Data":"007bf7932e54a49ef5d8835873b9cfd1e4bd4b2b2f5c6fd77a666c6d363426a6"} Jan 26 11:19:16 crc kubenswrapper[4867]: I0126 11:19:16.241458 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-additional-cni-plugins-9fjlf" podStartSLOduration=85.241439764 podStartE2EDuration="1m25.241439764s" podCreationTimestamp="2026-01-26 11:17:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 11:19:15.978593102 +0000 UTC m=+105.677168012" watchObservedRunningTime="2026-01-26 11:19:16.241439764 +0000 UTC m=+105.940014674" Jan 26 11:19:16 crc kubenswrapper[4867]: I0126 11:19:16.563869 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 11:19:16 crc kubenswrapper[4867]: I0126 11:19:16.564013 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 11:19:16 crc kubenswrapper[4867]: I0126 11:19:16.564047 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-nmdmx" Jan 26 11:19:16 crc kubenswrapper[4867]: I0126 11:19:16.564243 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 11:19:16 crc kubenswrapper[4867]: E0126 11:19:16.564208 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 11:19:16 crc kubenswrapper[4867]: E0126 11:19:16.564401 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 11:19:16 crc kubenswrapper[4867]: E0126 11:19:16.564444 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 11:19:16 crc kubenswrapper[4867]: E0126 11:19:16.564551 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-nmdmx" podUID="ed024510-edc6-4306-b54b-63facba64419" Jan 26 11:19:17 crc kubenswrapper[4867]: I0126 11:19:17.564784 4867 scope.go:117] "RemoveContainer" containerID="5d232dca466c33576da8850254342ae7f79c841e0f08fb49fedf655f3efe3d3d" Jan 26 11:19:17 crc kubenswrapper[4867]: E0126 11:19:17.565011 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-p8ngn_openshift-ovn-kubernetes(4a3be637-cf04-4c55-bf72-67fdad83cc44)\"" pod="openshift-ovn-kubernetes/ovnkube-node-p8ngn" podUID="4a3be637-cf04-4c55-bf72-67fdad83cc44" Jan 26 11:19:18 crc kubenswrapper[4867]: I0126 11:19:18.562826 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-nmdmx" Jan 26 11:19:18 crc kubenswrapper[4867]: I0126 11:19:18.562931 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 11:19:18 crc kubenswrapper[4867]: I0126 11:19:18.563078 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 11:19:18 crc kubenswrapper[4867]: E0126 11:19:18.563066 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-nmdmx" podUID="ed024510-edc6-4306-b54b-63facba64419" Jan 26 11:19:18 crc kubenswrapper[4867]: I0126 11:19:18.562863 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 11:19:18 crc kubenswrapper[4867]: E0126 11:19:18.563216 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 11:19:18 crc kubenswrapper[4867]: E0126 11:19:18.563353 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 11:19:18 crc kubenswrapper[4867]: E0126 11:19:18.563454 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 11:19:20 crc kubenswrapper[4867]: I0126 11:19:20.563508 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 11:19:20 crc kubenswrapper[4867]: I0126 11:19:20.563677 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 11:19:20 crc kubenswrapper[4867]: E0126 11:19:20.566698 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 11:19:20 crc kubenswrapper[4867]: I0126 11:19:20.567030 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 11:19:20 crc kubenswrapper[4867]: I0126 11:19:20.567094 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-nmdmx" Jan 26 11:19:20 crc kubenswrapper[4867]: E0126 11:19:20.567192 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 11:19:20 crc kubenswrapper[4867]: E0126 11:19:20.567491 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 11:19:20 crc kubenswrapper[4867]: E0126 11:19:20.567750 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-nmdmx" podUID="ed024510-edc6-4306-b54b-63facba64419" Jan 26 11:19:22 crc kubenswrapper[4867]: I0126 11:19:22.563415 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-nmdmx" Jan 26 11:19:22 crc kubenswrapper[4867]: I0126 11:19:22.563493 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 11:19:22 crc kubenswrapper[4867]: I0126 11:19:22.563616 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 11:19:22 crc kubenswrapper[4867]: E0126 11:19:22.563611 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-nmdmx" podUID="ed024510-edc6-4306-b54b-63facba64419" Jan 26 11:19:22 crc kubenswrapper[4867]: I0126 11:19:22.563453 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 11:19:22 crc kubenswrapper[4867]: E0126 11:19:22.563759 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 11:19:22 crc kubenswrapper[4867]: E0126 11:19:22.563867 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 11:19:22 crc kubenswrapper[4867]: E0126 11:19:22.563957 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 11:19:24 crc kubenswrapper[4867]: I0126 11:19:24.562926 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 11:19:24 crc kubenswrapper[4867]: I0126 11:19:24.563046 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 11:19:24 crc kubenswrapper[4867]: E0126 11:19:24.563098 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 11:19:24 crc kubenswrapper[4867]: I0126 11:19:24.563095 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 11:19:24 crc kubenswrapper[4867]: E0126 11:19:24.563125 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 11:19:24 crc kubenswrapper[4867]: E0126 11:19:24.563187 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 11:19:24 crc kubenswrapper[4867]: I0126 11:19:24.563272 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-nmdmx" Jan 26 11:19:24 crc kubenswrapper[4867]: E0126 11:19:24.563325 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-nmdmx" podUID="ed024510-edc6-4306-b54b-63facba64419" Jan 26 11:19:26 crc kubenswrapper[4867]: I0126 11:19:26.258603 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-hn8xr_dc37e5d1-ba44-4a54-ac36-ab7cdef17212/kube-multus/1.log" Jan 26 11:19:26 crc kubenswrapper[4867]: I0126 11:19:26.259212 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-hn8xr_dc37e5d1-ba44-4a54-ac36-ab7cdef17212/kube-multus/0.log" Jan 26 11:19:26 crc kubenswrapper[4867]: I0126 11:19:26.259282 4867 generic.go:334] "Generic (PLEG): container finished" podID="dc37e5d1-ba44-4a54-ac36-ab7cdef17212" containerID="0560f844140132daa11aa58fcba689fc945d9e6bc67a4fc5598dfaf566749866" exitCode=1 Jan 26 11:19:26 crc kubenswrapper[4867]: I0126 11:19:26.259326 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-hn8xr" event={"ID":"dc37e5d1-ba44-4a54-ac36-ab7cdef17212","Type":"ContainerDied","Data":"0560f844140132daa11aa58fcba689fc945d9e6bc67a4fc5598dfaf566749866"} Jan 26 11:19:26 crc kubenswrapper[4867]: I0126 11:19:26.259366 4867 scope.go:117] "RemoveContainer" containerID="519d5416896aca5923540078a8bd13f39a190ad4acda3d0bfb3375a5dbfe6b80" Jan 26 11:19:26 crc kubenswrapper[4867]: I0126 11:19:26.260078 4867 scope.go:117] "RemoveContainer" containerID="0560f844140132daa11aa58fcba689fc945d9e6bc67a4fc5598dfaf566749866" Jan 26 11:19:26 crc kubenswrapper[4867]: E0126 11:19:26.260422 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-multus pod=multus-hn8xr_openshift-multus(dc37e5d1-ba44-4a54-ac36-ab7cdef17212)\"" pod="openshift-multus/multus-hn8xr" podUID="dc37e5d1-ba44-4a54-ac36-ab7cdef17212" Jan 26 11:19:26 crc kubenswrapper[4867]: I0126 11:19:26.283040 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-6fvjn" podStartSLOduration=95.283022432 podStartE2EDuration="1m35.283022432s" podCreationTimestamp="2026-01-26 11:17:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 11:19:16.242734896 +0000 UTC m=+105.941309806" watchObservedRunningTime="2026-01-26 11:19:26.283022432 +0000 UTC m=+115.981597342" Jan 26 11:19:26 crc kubenswrapper[4867]: I0126 11:19:26.563191 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 11:19:26 crc kubenswrapper[4867]: I0126 11:19:26.563186 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 11:19:26 crc kubenswrapper[4867]: I0126 11:19:26.563346 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-nmdmx" Jan 26 11:19:26 crc kubenswrapper[4867]: I0126 11:19:26.563579 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 11:19:26 crc kubenswrapper[4867]: E0126 11:19:26.563556 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 11:19:26 crc kubenswrapper[4867]: E0126 11:19:26.563668 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-nmdmx" podUID="ed024510-edc6-4306-b54b-63facba64419" Jan 26 11:19:26 crc kubenswrapper[4867]: E0126 11:19:26.563748 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 11:19:26 crc kubenswrapper[4867]: E0126 11:19:26.563811 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 11:19:27 crc kubenswrapper[4867]: I0126 11:19:27.274732 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-hn8xr_dc37e5d1-ba44-4a54-ac36-ab7cdef17212/kube-multus/1.log" Jan 26 11:19:28 crc kubenswrapper[4867]: I0126 11:19:28.563849 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 11:19:28 crc kubenswrapper[4867]: I0126 11:19:28.564005 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-nmdmx" Jan 26 11:19:28 crc kubenswrapper[4867]: E0126 11:19:28.564090 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 11:19:28 crc kubenswrapper[4867]: I0126 11:19:28.563887 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 11:19:28 crc kubenswrapper[4867]: E0126 11:19:28.564277 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-nmdmx" podUID="ed024510-edc6-4306-b54b-63facba64419" Jan 26 11:19:28 crc kubenswrapper[4867]: E0126 11:19:28.564392 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 11:19:28 crc kubenswrapper[4867]: I0126 11:19:28.564048 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 11:19:28 crc kubenswrapper[4867]: E0126 11:19:28.564519 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 11:19:29 crc kubenswrapper[4867]: I0126 11:19:29.564275 4867 scope.go:117] "RemoveContainer" containerID="5d232dca466c33576da8850254342ae7f79c841e0f08fb49fedf655f3efe3d3d" Jan 26 11:19:29 crc kubenswrapper[4867]: E0126 11:19:29.564473 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-p8ngn_openshift-ovn-kubernetes(4a3be637-cf04-4c55-bf72-67fdad83cc44)\"" pod="openshift-ovn-kubernetes/ovnkube-node-p8ngn" podUID="4a3be637-cf04-4c55-bf72-67fdad83cc44" Jan 26 11:19:30 crc kubenswrapper[4867]: E0126 11:19:30.547325 4867 kubelet_node_status.go:497] "Node not becoming ready in time after startup" Jan 26 11:19:30 crc kubenswrapper[4867]: I0126 11:19:30.563670 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 11:19:30 crc kubenswrapper[4867]: I0126 11:19:30.563735 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 11:19:30 crc kubenswrapper[4867]: I0126 11:19:30.563720 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-nmdmx" Jan 26 11:19:30 crc kubenswrapper[4867]: I0126 11:19:30.563694 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 11:19:30 crc kubenswrapper[4867]: E0126 11:19:30.565873 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 11:19:30 crc kubenswrapper[4867]: E0126 11:19:30.566121 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 11:19:30 crc kubenswrapper[4867]: E0126 11:19:30.566482 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-nmdmx" podUID="ed024510-edc6-4306-b54b-63facba64419" Jan 26 11:19:30 crc kubenswrapper[4867]: E0126 11:19:30.566681 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 11:19:30 crc kubenswrapper[4867]: E0126 11:19:30.698351 4867 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 26 11:19:32 crc kubenswrapper[4867]: I0126 11:19:32.562735 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 11:19:32 crc kubenswrapper[4867]: E0126 11:19:32.562982 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 11:19:32 crc kubenswrapper[4867]: I0126 11:19:32.563111 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-nmdmx" Jan 26 11:19:32 crc kubenswrapper[4867]: I0126 11:19:32.563104 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 11:19:32 crc kubenswrapper[4867]: I0126 11:19:32.563371 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 11:19:32 crc kubenswrapper[4867]: E0126 11:19:32.563496 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-nmdmx" podUID="ed024510-edc6-4306-b54b-63facba64419" Jan 26 11:19:32 crc kubenswrapper[4867]: E0126 11:19:32.563597 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 11:19:32 crc kubenswrapper[4867]: E0126 11:19:32.563649 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 11:19:34 crc kubenswrapper[4867]: I0126 11:19:34.563669 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 11:19:34 crc kubenswrapper[4867]: I0126 11:19:34.563806 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 11:19:34 crc kubenswrapper[4867]: E0126 11:19:34.563837 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 11:19:34 crc kubenswrapper[4867]: I0126 11:19:34.563879 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-nmdmx" Jan 26 11:19:34 crc kubenswrapper[4867]: I0126 11:19:34.563900 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 11:19:34 crc kubenswrapper[4867]: E0126 11:19:34.564060 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 11:19:34 crc kubenswrapper[4867]: E0126 11:19:34.564147 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-nmdmx" podUID="ed024510-edc6-4306-b54b-63facba64419" Jan 26 11:19:34 crc kubenswrapper[4867]: E0126 11:19:34.564295 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 11:19:35 crc kubenswrapper[4867]: E0126 11:19:35.702162 4867 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 26 11:19:36 crc kubenswrapper[4867]: I0126 11:19:36.563344 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 11:19:36 crc kubenswrapper[4867]: I0126 11:19:36.563420 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 11:19:36 crc kubenswrapper[4867]: I0126 11:19:36.563352 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 11:19:36 crc kubenswrapper[4867]: I0126 11:19:36.563352 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-nmdmx" Jan 26 11:19:36 crc kubenswrapper[4867]: E0126 11:19:36.563595 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 11:19:36 crc kubenswrapper[4867]: E0126 11:19:36.563810 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 11:19:36 crc kubenswrapper[4867]: E0126 11:19:36.563899 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 11:19:36 crc kubenswrapper[4867]: E0126 11:19:36.563726 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-nmdmx" podUID="ed024510-edc6-4306-b54b-63facba64419" Jan 26 11:19:38 crc kubenswrapper[4867]: I0126 11:19:38.563367 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 11:19:38 crc kubenswrapper[4867]: I0126 11:19:38.563423 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 11:19:38 crc kubenswrapper[4867]: I0126 11:19:38.563541 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 11:19:38 crc kubenswrapper[4867]: E0126 11:19:38.563529 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 11:19:38 crc kubenswrapper[4867]: I0126 11:19:38.563587 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-nmdmx" Jan 26 11:19:38 crc kubenswrapper[4867]: E0126 11:19:38.563819 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 11:19:38 crc kubenswrapper[4867]: E0126 11:19:38.563928 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 11:19:38 crc kubenswrapper[4867]: E0126 11:19:38.564144 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-nmdmx" podUID="ed024510-edc6-4306-b54b-63facba64419" Jan 26 11:19:40 crc kubenswrapper[4867]: I0126 11:19:40.563248 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 11:19:40 crc kubenswrapper[4867]: I0126 11:19:40.563360 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-nmdmx" Jan 26 11:19:40 crc kubenswrapper[4867]: E0126 11:19:40.564566 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 11:19:40 crc kubenswrapper[4867]: I0126 11:19:40.564671 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 11:19:40 crc kubenswrapper[4867]: I0126 11:19:40.564721 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 11:19:40 crc kubenswrapper[4867]: E0126 11:19:40.565110 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 11:19:40 crc kubenswrapper[4867]: E0126 11:19:40.565256 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-nmdmx" podUID="ed024510-edc6-4306-b54b-63facba64419" Jan 26 11:19:40 crc kubenswrapper[4867]: E0126 11:19:40.565356 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 11:19:40 crc kubenswrapper[4867]: I0126 11:19:40.565790 4867 scope.go:117] "RemoveContainer" containerID="0560f844140132daa11aa58fcba689fc945d9e6bc67a4fc5598dfaf566749866" Jan 26 11:19:40 crc kubenswrapper[4867]: E0126 11:19:40.702760 4867 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 26 11:19:42 crc kubenswrapper[4867]: I0126 11:19:42.564123 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 11:19:42 crc kubenswrapper[4867]: I0126 11:19:42.564272 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 11:19:42 crc kubenswrapper[4867]: E0126 11:19:42.564304 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 11:19:42 crc kubenswrapper[4867]: E0126 11:19:42.564481 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 11:19:42 crc kubenswrapper[4867]: I0126 11:19:42.564552 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-nmdmx" Jan 26 11:19:42 crc kubenswrapper[4867]: E0126 11:19:42.564611 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-nmdmx" podUID="ed024510-edc6-4306-b54b-63facba64419" Jan 26 11:19:42 crc kubenswrapper[4867]: I0126 11:19:42.564736 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 11:19:42 crc kubenswrapper[4867]: E0126 11:19:42.564787 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 11:19:43 crc kubenswrapper[4867]: I0126 11:19:43.333978 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-hn8xr_dc37e5d1-ba44-4a54-ac36-ab7cdef17212/kube-multus/1.log" Jan 26 11:19:43 crc kubenswrapper[4867]: I0126 11:19:43.334040 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-hn8xr" event={"ID":"dc37e5d1-ba44-4a54-ac36-ab7cdef17212","Type":"ContainerStarted","Data":"7e93ce40d6288a12790286c1b7deac52b0558ebca01037040f2f116daaed2f03"} Jan 26 11:19:43 crc kubenswrapper[4867]: I0126 11:19:43.564691 4867 scope.go:117] "RemoveContainer" containerID="5d232dca466c33576da8850254342ae7f79c841e0f08fb49fedf655f3efe3d3d" Jan 26 11:19:44 crc kubenswrapper[4867]: I0126 11:19:44.563145 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 11:19:44 crc kubenswrapper[4867]: I0126 11:19:44.563355 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 11:19:44 crc kubenswrapper[4867]: E0126 11:19:44.563835 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 11:19:44 crc kubenswrapper[4867]: I0126 11:19:44.563906 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-nmdmx" Jan 26 11:19:44 crc kubenswrapper[4867]: I0126 11:19:44.563966 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 11:19:44 crc kubenswrapper[4867]: E0126 11:19:44.564143 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-nmdmx" podUID="ed024510-edc6-4306-b54b-63facba64419" Jan 26 11:19:44 crc kubenswrapper[4867]: E0126 11:19:44.564248 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 11:19:44 crc kubenswrapper[4867]: E0126 11:19:44.564367 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 11:19:45 crc kubenswrapper[4867]: I0126 11:19:45.343837 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-p8ngn_4a3be637-cf04-4c55-bf72-67fdad83cc44/ovnkube-controller/3.log" Jan 26 11:19:45 crc kubenswrapper[4867]: I0126 11:19:45.346929 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-p8ngn" event={"ID":"4a3be637-cf04-4c55-bf72-67fdad83cc44","Type":"ContainerStarted","Data":"1c6e65025a884869b644ff9fad0c22c39ffb69a73f335c11f33433b010a8e2dc"} Jan 26 11:19:45 crc kubenswrapper[4867]: I0126 11:19:45.347349 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-p8ngn" Jan 26 11:19:45 crc kubenswrapper[4867]: I0126 11:19:45.380288 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-p8ngn" podStartSLOduration=114.380211714 podStartE2EDuration="1m54.380211714s" podCreationTimestamp="2026-01-26 11:17:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 11:19:45.379259319 +0000 UTC m=+135.077834229" watchObservedRunningTime="2026-01-26 11:19:45.380211714 +0000 UTC m=+135.078786634" Jan 26 11:19:45 crc kubenswrapper[4867]: E0126 11:19:45.704105 4867 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
Jan 26 11:19:45 crc kubenswrapper[4867]: I0126 11:19:45.877547 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-nmdmx"] Jan 26 11:19:45 crc kubenswrapper[4867]: I0126 11:19:45.877716 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-nmdmx" Jan 26 11:19:45 crc kubenswrapper[4867]: E0126 11:19:45.877850 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-nmdmx" podUID="ed024510-edc6-4306-b54b-63facba64419" Jan 26 11:19:46 crc kubenswrapper[4867]: I0126 11:19:46.563823 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 11:19:46 crc kubenswrapper[4867]: I0126 11:19:46.563884 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 11:19:46 crc kubenswrapper[4867]: I0126 11:19:46.563957 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 11:19:46 crc kubenswrapper[4867]: E0126 11:19:46.564201 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 11:19:46 crc kubenswrapper[4867]: E0126 11:19:46.564261 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 11:19:46 crc kubenswrapper[4867]: E0126 11:19:46.564324 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 11:19:47 crc kubenswrapper[4867]: I0126 11:19:47.563586 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-nmdmx" Jan 26 11:19:47 crc kubenswrapper[4867]: E0126 11:19:47.563768 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-nmdmx" podUID="ed024510-edc6-4306-b54b-63facba64419" Jan 26 11:19:48 crc kubenswrapper[4867]: I0126 11:19:48.563418 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 11:19:48 crc kubenswrapper[4867]: I0126 11:19:48.563461 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 11:19:48 crc kubenswrapper[4867]: E0126 11:19:48.563959 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 11:19:48 crc kubenswrapper[4867]: I0126 11:19:48.563623 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 11:19:48 crc kubenswrapper[4867]: E0126 11:19:48.564113 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 11:19:48 crc kubenswrapper[4867]: E0126 11:19:48.564254 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 11:19:49 crc kubenswrapper[4867]: I0126 11:19:49.563286 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-nmdmx" Jan 26 11:19:49 crc kubenswrapper[4867]: E0126 11:19:49.563489 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-nmdmx" podUID="ed024510-edc6-4306-b54b-63facba64419" Jan 26 11:19:50 crc kubenswrapper[4867]: I0126 11:19:50.563404 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 11:19:50 crc kubenswrapper[4867]: I0126 11:19:50.563484 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 11:19:50 crc kubenswrapper[4867]: I0126 11:19:50.563451 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 11:19:50 crc kubenswrapper[4867]: E0126 11:19:50.564688 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 11:19:50 crc kubenswrapper[4867]: E0126 11:19:50.564866 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 11:19:50 crc kubenswrapper[4867]: E0126 11:19:50.565021 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 11:19:51 crc kubenswrapper[4867]: I0126 11:19:51.563382 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-nmdmx" Jan 26 11:19:51 crc kubenswrapper[4867]: I0126 11:19:51.566111 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Jan 26 11:19:51 crc kubenswrapper[4867]: I0126 11:19:51.568970 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Jan 26 11:19:52 crc kubenswrapper[4867]: I0126 11:19:52.563209 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 11:19:52 crc kubenswrapper[4867]: I0126 11:19:52.563282 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 11:19:52 crc kubenswrapper[4867]: I0126 11:19:52.563650 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 11:19:52 crc kubenswrapper[4867]: I0126 11:19:52.567384 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Jan 26 11:19:52 crc kubenswrapper[4867]: I0126 11:19:52.567432 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Jan 26 11:19:52 crc kubenswrapper[4867]: I0126 11:19:52.567547 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Jan 26 11:19:52 crc kubenswrapper[4867]: I0126 11:19:52.567432 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Jan 26 11:19:56 crc kubenswrapper[4867]: I0126 11:19:56.413022 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeReady" Jan 26 11:19:56 crc kubenswrapper[4867]: I0126 11:19:56.491158 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-9jc4l"] Jan 26 11:19:56 crc kubenswrapper[4867]: I0126 11:19:56.492282 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-9jc4l" Jan 26 11:19:56 crc kubenswrapper[4867]: I0126 11:19:56.492458 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-jvs97"] Jan 26 11:19:56 crc kubenswrapper[4867]: I0126 11:19:56.493321 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-jvs97" Jan 26 11:19:56 crc kubenswrapper[4867]: W0126 11:19:56.494895 4867 reflector.go:561] object-"openshift-controller-manager"/"serving-cert": failed to list *v1.Secret: secrets "serving-cert" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-controller-manager": no relationship found between node 'crc' and this object Jan 26 11:19:56 crc kubenswrapper[4867]: E0126 11:19:56.494952 4867 reflector.go:158] "Unhandled Error" err="object-\"openshift-controller-manager\"/\"serving-cert\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"serving-cert\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-controller-manager\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 26 11:19:56 crc kubenswrapper[4867]: W0126 11:19:56.495195 4867 reflector.go:561] object-"openshift-apiserver"/"etcd-client": failed to list *v1.Secret: secrets "etcd-client" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-apiserver": no relationship found between node 'crc' and this object Jan 26 11:19:56 crc kubenswrapper[4867]: E0126 11:19:56.495290 4867 reflector.go:158] "Unhandled Error" err="object-\"openshift-apiserver\"/\"etcd-client\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"etcd-client\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-apiserver\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 26 11:19:56 crc kubenswrapper[4867]: I0126 11:19:56.496381 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Jan 26 11:19:56 crc kubenswrapper[4867]: W0126 11:19:56.503104 4867 reflector.go:561] 
object-"openshift-apiserver"/"serving-cert": failed to list *v1.Secret: secrets "serving-cert" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-apiserver": no relationship found between node 'crc' and this object Jan 26 11:19:56 crc kubenswrapper[4867]: E0126 11:19:56.503196 4867 reflector.go:158] "Unhandled Error" err="object-\"openshift-apiserver\"/\"serving-cert\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"serving-cert\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-apiserver\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 26 11:19:56 crc kubenswrapper[4867]: I0126 11:19:56.503433 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Jan 26 11:19:56 crc kubenswrapper[4867]: I0126 11:19:56.503810 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Jan 26 11:19:56 crc kubenswrapper[4867]: I0126 11:19:56.504599 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Jan 26 11:19:56 crc kubenswrapper[4867]: I0126 11:19:56.504874 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Jan 26 11:19:56 crc kubenswrapper[4867]: I0126 11:19:56.505176 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Jan 26 11:19:56 crc kubenswrapper[4867]: I0126 11:19:56.505324 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Jan 26 11:19:56 crc kubenswrapper[4867]: W0126 11:19:56.505401 4867 reflector.go:561] object-"openshift-controller-manager"/"client-ca": failed to list *v1.ConfigMap: configmaps "client-ca" is 
forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-controller-manager": no relationship found between node 'crc' and this object Jan 26 11:19:56 crc kubenswrapper[4867]: E0126 11:19:56.505440 4867 reflector.go:158] "Unhandled Error" err="object-\"openshift-controller-manager\"/\"client-ca\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"client-ca\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-controller-manager\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 26 11:19:56 crc kubenswrapper[4867]: I0126 11:19:56.505482 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-pb5rg"] Jan 26 11:19:56 crc kubenswrapper[4867]: I0126 11:19:56.506063 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-nxdt6"] Jan 26 11:19:56 crc kubenswrapper[4867]: I0126 11:19:56.506153 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Jan 26 11:19:56 crc kubenswrapper[4867]: W0126 11:19:56.506440 4867 reflector.go:561] object-"openshift-apiserver"/"config": failed to list *v1.ConfigMap: configmaps "config" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-apiserver": no relationship found between node 'crc' and this object Jan 26 11:19:56 crc kubenswrapper[4867]: E0126 11:19:56.506498 4867 reflector.go:158] "Unhandled Error" err="object-\"openshift-apiserver\"/\"config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"config\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-apiserver\": no relationship found between node 'crc' and this object" 
logger="UnhandledError" Jan 26 11:19:56 crc kubenswrapper[4867]: I0126 11:19:56.506526 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-nxdt6" Jan 26 11:19:56 crc kubenswrapper[4867]: I0126 11:19:56.506831 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Jan 26 11:19:56 crc kubenswrapper[4867]: I0126 11:19:56.506967 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-pb5rg" Jan 26 11:19:56 crc kubenswrapper[4867]: I0126 11:19:56.507605 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Jan 26 11:19:56 crc kubenswrapper[4867]: I0126 11:19:56.507702 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Jan 26 11:19:56 crc kubenswrapper[4867]: I0126 11:19:56.511613 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-vtd7b"] Jan 26 11:19:56 crc kubenswrapper[4867]: I0126 11:19:56.512326 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-9r5x7"] Jan 26 11:19:56 crc kubenswrapper[4867]: I0126 11:19:56.512573 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-vtd7b" Jan 26 11:19:56 crc kubenswrapper[4867]: I0126 11:19:56.512626 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-m9tw6"] Jan 26 11:19:56 crc kubenswrapper[4867]: I0126 11:19:56.512728 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-9r5x7" Jan 26 11:19:56 crc kubenswrapper[4867]: I0126 11:19:56.513311 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-m9tw6" Jan 26 11:19:56 crc kubenswrapper[4867]: I0126 11:19:56.561789 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Jan 26 11:19:56 crc kubenswrapper[4867]: I0126 11:19:56.561800 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Jan 26 11:19:56 crc kubenswrapper[4867]: I0126 11:19:56.561897 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Jan 26 11:19:56 crc kubenswrapper[4867]: I0126 11:19:56.561912 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Jan 26 11:19:56 crc kubenswrapper[4867]: I0126 11:19:56.561805 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Jan 26 11:19:56 crc kubenswrapper[4867]: I0126 11:19:56.561820 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Jan 26 11:19:56 crc kubenswrapper[4867]: I0126 11:19:56.564353 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Jan 26 11:19:56 crc kubenswrapper[4867]: I0126 11:19:56.564568 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Jan 26 11:19:56 crc kubenswrapper[4867]: I0126 11:19:56.564715 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Jan 26 11:19:56 crc kubenswrapper[4867]: I0126 11:19:56.564998 4867 
reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Jan 26 11:19:56 crc kubenswrapper[4867]: I0126 11:19:56.565262 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Jan 26 11:19:56 crc kubenswrapper[4867]: I0126 11:19:56.565285 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Jan 26 11:19:56 crc kubenswrapper[4867]: I0126 11:19:56.565390 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Jan 26 11:19:56 crc kubenswrapper[4867]: I0126 11:19:56.565442 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Jan 26 11:19:56 crc kubenswrapper[4867]: I0126 11:19:56.565548 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Jan 26 11:19:56 crc kubenswrapper[4867]: I0126 11:19:56.565603 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Jan 26 11:19:56 crc kubenswrapper[4867]: I0126 11:19:56.565718 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Jan 26 11:19:56 crc kubenswrapper[4867]: I0126 11:19:56.565778 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Jan 26 11:19:56 crc kubenswrapper[4867]: I0126 11:19:56.565790 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Jan 26 11:19:56 crc kubenswrapper[4867]: I0126 11:19:56.565730 4867 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Jan 26 11:19:56 crc kubenswrapper[4867]: I0126 11:19:56.565922 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Jan 26 11:19:56 crc kubenswrapper[4867]: I0126 11:19:56.565992 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Jan 26 11:19:56 crc kubenswrapper[4867]: I0126 11:19:56.566030 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Jan 26 11:19:56 crc kubenswrapper[4867]: I0126 11:19:56.565951 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Jan 26 11:19:56 crc kubenswrapper[4867]: I0126 11:19:56.566187 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Jan 26 11:19:56 crc kubenswrapper[4867]: I0126 11:19:56.567934 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Jan 26 11:19:56 crc kubenswrapper[4867]: I0126 11:19:56.568062 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Jan 26 11:19:56 crc kubenswrapper[4867]: I0126 11:19:56.568262 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Jan 26 11:19:56 crc kubenswrapper[4867]: I0126 11:19:56.567944 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Jan 26 11:19:56 crc kubenswrapper[4867]: I0126 11:19:56.569019 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Jan 26 11:19:56 crc kubenswrapper[4867]: I0126 
11:19:56.587391 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-machine-approver/machine-approver-56656f9798-ptqs7"] Jan 26 11:19:56 crc kubenswrapper[4867]: I0126 11:19:56.606211 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-f9d7485db-dc94j"] Jan 26 11:19:56 crc kubenswrapper[4867]: I0126 11:19:56.606798 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-dc94j" Jan 26 11:19:56 crc kubenswrapper[4867]: I0126 11:19:56.607041 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Jan 26 11:19:56 crc kubenswrapper[4867]: I0126 11:19:56.607330 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-ptqs7" Jan 26 11:19:56 crc kubenswrapper[4867]: I0126 11:19:56.607628 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Jan 26 11:19:56 crc kubenswrapper[4867]: I0126 11:19:56.615607 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Jan 26 11:19:56 crc kubenswrapper[4867]: I0126 11:19:56.616293 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-n9jb5"] Jan 26 11:19:56 crc kubenswrapper[4867]: I0126 11:19:56.617047 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console-operator/console-operator-58897d9998-jczt5"] Jan 26 11:19:56 crc kubenswrapper[4867]: I0126 11:19:56.617585 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-jczt5" Jan 26 11:19:56 crc kubenswrapper[4867]: I0126 11:19:56.618683 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-n9jb5" Jan 26 11:19:56 crc kubenswrapper[4867]: I0126 11:19:56.620444 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Jan 26 11:19:56 crc kubenswrapper[4867]: I0126 11:19:56.620673 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Jan 26 11:19:56 crc kubenswrapper[4867]: I0126 11:19:56.620863 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Jan 26 11:19:56 crc kubenswrapper[4867]: I0126 11:19:56.621032 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Jan 26 11:19:56 crc kubenswrapper[4867]: I0126 11:19:56.621412 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Jan 26 11:19:56 crc kubenswrapper[4867]: I0126 11:19:56.621623 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Jan 26 11:19:56 crc kubenswrapper[4867]: I0126 11:19:56.622487 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Jan 26 11:19:56 crc kubenswrapper[4867]: I0126 11:19:56.625570 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Jan 26 11:19:56 crc kubenswrapper[4867]: I0126 11:19:56.627704 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-bmqm4"] Jan 26 11:19:56 crc kubenswrapper[4867]: I0126 11:19:56.628572 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-bmqm4" Jan 26 11:19:56 crc kubenswrapper[4867]: I0126 11:19:56.629128 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-xzrd8"] Jan 26 11:19:56 crc kubenswrapper[4867]: I0126 11:19:56.629650 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-xzrd8" Jan 26 11:19:56 crc kubenswrapper[4867]: I0126 11:19:56.639570 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Jan 26 11:19:56 crc kubenswrapper[4867]: I0126 11:19:56.645845 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Jan 26 11:19:56 crc kubenswrapper[4867]: I0126 11:19:56.648345 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/downloads-7954f5f757-ltvwb"] Jan 26 11:19:56 crc kubenswrapper[4867]: I0126 11:19:56.648866 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-g6hqf"] Jan 26 11:19:56 crc kubenswrapper[4867]: I0126 11:19:56.649209 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-g6hqf" Jan 26 11:19:56 crc kubenswrapper[4867]: I0126 11:19:56.649416 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/downloads-7954f5f757-ltvwb" Jan 26 11:19:56 crc kubenswrapper[4867]: I0126 11:19:56.649986 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-p7cdz"] Jan 26 11:19:56 crc kubenswrapper[4867]: I0126 11:19:56.654783 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Jan 26 11:19:56 crc kubenswrapper[4867]: I0126 11:19:56.654828 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Jan 26 11:19:56 crc kubenswrapper[4867]: I0126 11:19:56.655115 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Jan 26 11:19:56 crc kubenswrapper[4867]: I0126 11:19:56.655427 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Jan 26 11:19:56 crc kubenswrapper[4867]: I0126 11:19:56.659836 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Jan 26 11:19:56 crc kubenswrapper[4867]: I0126 11:19:56.664653 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Jan 26 11:19:56 crc kubenswrapper[4867]: I0126 11:19:56.664961 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Jan 26 11:19:56 crc kubenswrapper[4867]: I0126 11:19:56.665059 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Jan 26 11:19:56 crc kubenswrapper[4867]: I0126 11:19:56.664765 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-m22zg"] Jan 26 11:19:56 crc kubenswrapper[4867]: I0126 
11:19:56.664919 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-p7cdz" Jan 26 11:19:56 crc kubenswrapper[4867]: I0126 11:19:56.665388 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Jan 26 11:19:56 crc kubenswrapper[4867]: I0126 11:19:56.665412 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Jan 26 11:19:56 crc kubenswrapper[4867]: I0126 11:19:56.665516 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-bg5n8"] Jan 26 11:19:56 crc kubenswrapper[4867]: I0126 11:19:56.665890 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-pgkmf"] Jan 26 11:19:56 crc kubenswrapper[4867]: I0126 11:19:56.665905 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Jan 26 11:19:56 crc kubenswrapper[4867]: I0126 11:19:56.665975 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Jan 26 11:19:56 crc kubenswrapper[4867]: I0126 11:19:56.666131 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Jan 26 11:19:56 crc kubenswrapper[4867]: I0126 11:19:56.666289 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Jan 26 11:19:56 crc kubenswrapper[4867]: I0126 11:19:56.666401 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-4hjn2"] Jan 26 11:19:56 crc kubenswrapper[4867]: I0126 11:19:56.666459 4867 reflector.go:368] Caches populated for 
*v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Jan 26 11:19:56 crc kubenswrapper[4867]: I0126 11:19:56.666605 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Jan 26 11:19:56 crc kubenswrapper[4867]: I0126 11:19:56.666740 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Jan 26 11:19:56 crc kubenswrapper[4867]: I0126 11:19:56.666751 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Jan 26 11:19:56 crc kubenswrapper[4867]: I0126 11:19:56.666828 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Jan 26 11:19:56 crc kubenswrapper[4867]: I0126 11:19:56.666943 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Jan 26 11:19:56 crc kubenswrapper[4867]: I0126 11:19:56.666963 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-pgkmf" Jan 26 11:19:56 crc kubenswrapper[4867]: I0126 11:19:56.666985 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-m22zg" Jan 26 11:19:56 crc kubenswrapper[4867]: I0126 11:19:56.667056 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Jan 26 11:19:56 crc kubenswrapper[4867]: I0126 11:19:56.667065 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-gn8gp"] Jan 26 11:19:56 crc kubenswrapper[4867]: I0126 11:19:56.667169 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Jan 26 11:19:56 crc kubenswrapper[4867]: I0126 11:19:56.667311 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Jan 26 11:19:56 crc kubenswrapper[4867]: I0126 11:19:56.667596 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-skdxp"] Jan 26 11:19:56 crc kubenswrapper[4867]: I0126 11:19:56.667860 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress/router-default-5444994796-dmt7q"] Jan 26 11:19:56 crc kubenswrapper[4867]: I0126 11:19:56.667896 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-4hjn2" Jan 26 11:19:56 crc kubenswrapper[4867]: I0126 11:19:56.668351 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-gn8gp" Jan 26 11:19:56 crc kubenswrapper[4867]: I0126 11:19:56.668413 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-bg5n8" Jan 26 11:19:56 crc kubenswrapper[4867]: I0126 11:19:56.668703 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-skdxp" Jan 26 11:19:56 crc kubenswrapper[4867]: I0126 11:19:56.671414 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-fg7cn"] Jan 26 11:19:56 crc kubenswrapper[4867]: I0126 11:19:56.672077 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-fg7cn" Jan 26 11:19:56 crc kubenswrapper[4867]: I0126 11:19:56.673866 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-6bj4j"] Jan 26 11:19:56 crc kubenswrapper[4867]: I0126 11:19:56.674762 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-6bj4j" Jan 26 11:19:56 crc kubenswrapper[4867]: I0126 11:19:56.678363 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-9jc4l"] Jan 26 11:19:56 crc kubenswrapper[4867]: I0126 11:19:56.685338 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress/router-default-5444994796-dmt7q" Jan 26 11:19:56 crc kubenswrapper[4867]: I0126 11:19:56.686878 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Jan 26 11:19:56 crc kubenswrapper[4867]: I0126 11:19:56.687330 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Jan 26 11:19:56 crc kubenswrapper[4867]: I0126 11:19:56.687442 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Jan 26 11:19:56 crc kubenswrapper[4867]: I0126 11:19:56.687545 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Jan 26 11:19:56 crc kubenswrapper[4867]: I0126 11:19:56.688212 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Jan 26 11:19:56 crc kubenswrapper[4867]: I0126 11:19:56.688408 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Jan 26 11:19:56 crc kubenswrapper[4867]: I0126 11:19:56.688788 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Jan 26 11:19:56 crc kubenswrapper[4867]: I0126 11:19:56.689088 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Jan 26 11:19:56 crc kubenswrapper[4867]: I0126 11:19:56.689693 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Jan 26 11:19:56 crc kubenswrapper[4867]: I0126 11:19:56.689927 4867 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-console-operator"/"kube-root-ca.crt" Jan 26 11:19:56 crc kubenswrapper[4867]: I0126 11:19:56.689954 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Jan 26 11:19:56 crc kubenswrapper[4867]: I0126 11:19:56.690054 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Jan 26 11:19:56 crc kubenswrapper[4867]: I0126 11:19:56.690162 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Jan 26 11:19:56 crc kubenswrapper[4867]: I0126 11:19:56.690306 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Jan 26 11:19:56 crc kubenswrapper[4867]: I0126 11:19:56.690483 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Jan 26 11:19:56 crc kubenswrapper[4867]: I0126 11:19:56.709661 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Jan 26 11:19:56 crc kubenswrapper[4867]: I0126 11:19:56.717883 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Jan 26 11:19:56 crc kubenswrapper[4867]: I0126 11:19:56.718074 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Jan 26 11:19:56 crc kubenswrapper[4867]: I0126 11:19:56.735182 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Jan 26 11:19:56 crc kubenswrapper[4867]: I0126 11:19:56.735460 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Jan 26 11:19:56 crc 
kubenswrapper[4867]: I0126 11:19:56.735804 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-ksw9r"] Jan 26 11:19:56 crc kubenswrapper[4867]: I0126 11:19:56.737024 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/6203c5b2-2d8f-46c5-a31c-59190d111d7d-audit-dir\") pod \"apiserver-76f77b778f-jvs97\" (UID: \"6203c5b2-2d8f-46c5-a31c-59190d111d7d\") " pod="openshift-apiserver/apiserver-76f77b778f-jvs97" Jan 26 11:19:56 crc kubenswrapper[4867]: I0126 11:19:56.737071 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/0880ba0d-8774-4012-ae45-24997c78c5ca-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-xzrd8\" (UID: \"0880ba0d-8774-4012-ae45-24997c78c5ca\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-xzrd8" Jan 26 11:19:56 crc kubenswrapper[4867]: I0126 11:19:56.737101 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/a4874120-574e-4f70-a7d9-5c6c91e41f41-encryption-config\") pod \"apiserver-7bbb656c7d-nxdt6\" (UID: \"a4874120-574e-4f70-a7d9-5c6c91e41f41\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-nxdt6" Jan 26 11:19:56 crc kubenswrapper[4867]: I0126 11:19:56.737128 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/a721247b-3436-4bb4-bc5c-ab4e94db0b41-service-ca\") pod \"console-f9d7485db-dc94j\" (UID: \"a721247b-3436-4bb4-bc5c-ab4e94db0b41\") " pod="openshift-console/console-f9d7485db-dc94j" Jan 26 11:19:56 crc kubenswrapper[4867]: I0126 11:19:56.737152 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"config\" (UniqueName: \"kubernetes.io/configmap/ec332d74-71c9-4401-8dfa-8674dc431b82-config\") pod \"openshift-apiserver-operator-796bbdcf4f-vtd7b\" (UID: \"ec332d74-71c9-4401-8dfa-8674dc431b82\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-vtd7b" Jan 26 11:19:56 crc kubenswrapper[4867]: I0126 11:19:56.737176 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6203c5b2-2d8f-46c5-a31c-59190d111d7d-config\") pod \"apiserver-76f77b778f-jvs97\" (UID: \"6203c5b2-2d8f-46c5-a31c-59190d111d7d\") " pod="openshift-apiserver/apiserver-76f77b778f-jvs97" Jan 26 11:19:56 crc kubenswrapper[4867]: I0126 11:19:56.737199 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/6670fa93-70e2-4047-b449-1bf939336210-client-ca\") pod \"controller-manager-879f6c89f-9jc4l\" (UID: \"6670fa93-70e2-4047-b449-1bf939336210\") " pod="openshift-controller-manager/controller-manager-879f6c89f-9jc4l" Jan 26 11:19:56 crc kubenswrapper[4867]: I0126 11:19:56.737242 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/6203c5b2-2d8f-46c5-a31c-59190d111d7d-node-pullsecrets\") pod \"apiserver-76f77b778f-jvs97\" (UID: \"6203c5b2-2d8f-46c5-a31c-59190d111d7d\") " pod="openshift-apiserver/apiserver-76f77b778f-jvs97" Jan 26 11:19:56 crc kubenswrapper[4867]: I0126 11:19:56.737275 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/95f962fb-c0fe-4583-8d7f-cac4f22110e9-available-featuregates\") pod \"openshift-config-operator-7777fb866f-9r5x7\" (UID: \"95f962fb-c0fe-4583-8d7f-cac4f22110e9\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-9r5x7" Jan 
26 11:19:56 crc kubenswrapper[4867]: I0126 11:19:56.737300 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/a4874120-574e-4f70-a7d9-5c6c91e41f41-audit-dir\") pod \"apiserver-7bbb656c7d-nxdt6\" (UID: \"a4874120-574e-4f70-a7d9-5c6c91e41f41\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-nxdt6" Jan 26 11:19:56 crc kubenswrapper[4867]: I0126 11:19:56.737323 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/a721247b-3436-4bb4-bc5c-ab4e94db0b41-console-config\") pod \"console-f9d7485db-dc94j\" (UID: \"a721247b-3436-4bb4-bc5c-ab4e94db0b41\") " pod="openshift-console/console-f9d7485db-dc94j" Jan 26 11:19:56 crc kubenswrapper[4867]: I0126 11:19:56.737348 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/6203c5b2-2d8f-46c5-a31c-59190d111d7d-etcd-client\") pod \"apiserver-76f77b778f-jvs97\" (UID: \"6203c5b2-2d8f-46c5-a31c-59190d111d7d\") " pod="openshift-apiserver/apiserver-76f77b778f-jvs97" Jan 26 11:19:56 crc kubenswrapper[4867]: I0126 11:19:56.737377 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/6670fa93-70e2-4047-b449-1bf939336210-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-9jc4l\" (UID: \"6670fa93-70e2-4047-b449-1bf939336210\") " pod="openshift-controller-manager/controller-manager-879f6c89f-9jc4l" Jan 26 11:19:56 crc kubenswrapper[4867]: I0126 11:19:56.737398 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xmnl6\" (UniqueName: \"kubernetes.io/projected/95f962fb-c0fe-4583-8d7f-cac4f22110e9-kube-api-access-xmnl6\") pod 
\"openshift-config-operator-7777fb866f-9r5x7\" (UID: \"95f962fb-c0fe-4583-8d7f-cac4f22110e9\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-9r5x7" Jan 26 11:19:56 crc kubenswrapper[4867]: I0126 11:19:56.737421 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/6203c5b2-2d8f-46c5-a31c-59190d111d7d-audit\") pod \"apiserver-76f77b778f-jvs97\" (UID: \"6203c5b2-2d8f-46c5-a31c-59190d111d7d\") " pod="openshift-apiserver/apiserver-76f77b778f-jvs97" Jan 26 11:19:56 crc kubenswrapper[4867]: I0126 11:19:56.737443 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/a721247b-3436-4bb4-bc5c-ab4e94db0b41-console-oauth-config\") pod \"console-f9d7485db-dc94j\" (UID: \"a721247b-3436-4bb4-bc5c-ab4e94db0b41\") " pod="openshift-console/console-f9d7485db-dc94j" Jan 26 11:19:56 crc kubenswrapper[4867]: I0126 11:19:56.737467 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a721247b-3436-4bb4-bc5c-ab4e94db0b41-trusted-ca-bundle\") pod \"console-f9d7485db-dc94j\" (UID: \"a721247b-3436-4bb4-bc5c-ab4e94db0b41\") " pod="openshift-console/console-f9d7485db-dc94j" Jan 26 11:19:56 crc kubenswrapper[4867]: I0126 11:19:56.737556 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6670fa93-70e2-4047-b449-1bf939336210-serving-cert\") pod \"controller-manager-879f6c89f-9jc4l\" (UID: \"6670fa93-70e2-4047-b449-1bf939336210\") " pod="openshift-controller-manager/controller-manager-879f6c89f-9jc4l" Jan 26 11:19:56 crc kubenswrapper[4867]: I0126 11:19:56.737602 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/3b2fcd86-878c-4bce-a720-460a61585e50-auth-proxy-config\") pod \"machine-approver-56656f9798-ptqs7\" (UID: \"3b2fcd86-878c-4bce-a720-460a61585e50\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-ptqs7" Jan 26 11:19:56 crc kubenswrapper[4867]: I0126 11:19:56.737625 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/0880ba0d-8774-4012-ae45-24997c78c5ca-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-xzrd8\" (UID: \"0880ba0d-8774-4012-ae45-24997c78c5ca\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-xzrd8" Jan 26 11:19:56 crc kubenswrapper[4867]: I0126 11:19:56.737649 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/a4874120-574e-4f70-a7d9-5c6c91e41f41-etcd-client\") pod \"apiserver-7bbb656c7d-nxdt6\" (UID: \"a4874120-574e-4f70-a7d9-5c6c91e41f41\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-nxdt6" Jan 26 11:19:56 crc kubenswrapper[4867]: I0126 11:19:56.737673 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-94dvd\" (UniqueName: \"kubernetes.io/projected/ec332d74-71c9-4401-8dfa-8674dc431b82-kube-api-access-94dvd\") pod \"openshift-apiserver-operator-796bbdcf4f-vtd7b\" (UID: \"ec332d74-71c9-4401-8dfa-8674dc431b82\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-vtd7b" Jan 26 11:19:56 crc kubenswrapper[4867]: I0126 11:19:56.737696 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6670fa93-70e2-4047-b449-1bf939336210-config\") pod \"controller-manager-879f6c89f-9jc4l\" (UID: \"6670fa93-70e2-4047-b449-1bf939336210\") " 
pod="openshift-controller-manager/controller-manager-879f6c89f-9jc4l" Jan 26 11:19:56 crc kubenswrapper[4867]: I0126 11:19:56.737720 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b207fdfd-306c-4494-8c1f-560dd155cd7a-config\") pod \"machine-api-operator-5694c8668f-pb5rg\" (UID: \"b207fdfd-306c-4494-8c1f-560dd155cd7a\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-pb5rg" Jan 26 11:19:56 crc kubenswrapper[4867]: I0126 11:19:56.737745 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/6203c5b2-2d8f-46c5-a31c-59190d111d7d-image-import-ca\") pod \"apiserver-76f77b778f-jvs97\" (UID: \"6203c5b2-2d8f-46c5-a31c-59190d111d7d\") " pod="openshift-apiserver/apiserver-76f77b778f-jvs97" Jan 26 11:19:56 crc kubenswrapper[4867]: I0126 11:19:56.737768 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n6lpb\" (UniqueName: \"kubernetes.io/projected/a721247b-3436-4bb4-bc5c-ab4e94db0b41-kube-api-access-n6lpb\") pod \"console-f9d7485db-dc94j\" (UID: \"a721247b-3436-4bb4-bc5c-ab4e94db0b41\") " pod="openshift-console/console-f9d7485db-dc94j" Jan 26 11:19:56 crc kubenswrapper[4867]: I0126 11:19:56.737791 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kctg5\" (UniqueName: \"kubernetes.io/projected/0880ba0d-8774-4012-ae45-24997c78c5ca-kube-api-access-kctg5\") pod \"cluster-image-registry-operator-dc59b4c8b-xzrd8\" (UID: \"0880ba0d-8774-4012-ae45-24997c78c5ca\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-xzrd8" Jan 26 11:19:56 crc kubenswrapper[4867]: I0126 11:19:56.737916 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/3b2fcd86-878c-4bce-a720-460a61585e50-config\") pod \"machine-approver-56656f9798-ptqs7\" (UID: \"3b2fcd86-878c-4bce-a720-460a61585e50\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-ptqs7" Jan 26 11:19:56 crc kubenswrapper[4867]: I0126 11:19:56.738002 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/b207fdfd-306c-4494-8c1f-560dd155cd7a-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-pb5rg\" (UID: \"b207fdfd-306c-4494-8c1f-560dd155cd7a\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-pb5rg" Jan 26 11:19:56 crc kubenswrapper[4867]: I0126 11:19:56.738025 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6203c5b2-2d8f-46c5-a31c-59190d111d7d-trusted-ca-bundle\") pod \"apiserver-76f77b778f-jvs97\" (UID: \"6203c5b2-2d8f-46c5-a31c-59190d111d7d\") " pod="openshift-apiserver/apiserver-76f77b778f-jvs97" Jan 26 11:19:56 crc kubenswrapper[4867]: I0126 11:19:56.738219 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/6203c5b2-2d8f-46c5-a31c-59190d111d7d-etcd-serving-ca\") pod \"apiserver-76f77b778f-jvs97\" (UID: \"6203c5b2-2d8f-46c5-a31c-59190d111d7d\") " pod="openshift-apiserver/apiserver-76f77b778f-jvs97" Jan 26 11:19:56 crc kubenswrapper[4867]: I0126 11:19:56.738301 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8982t\" (UniqueName: \"kubernetes.io/projected/a4874120-574e-4f70-a7d9-5c6c91e41f41-kube-api-access-8982t\") pod \"apiserver-7bbb656c7d-nxdt6\" (UID: \"a4874120-574e-4f70-a7d9-5c6c91e41f41\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-nxdt6" Jan 26 11:19:56 crc 
kubenswrapper[4867]: I0126 11:19:56.738331 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/64cfae17-8e43-4fd9-8f7c-2f4996b6351c-serving-cert\") pod \"route-controller-manager-6576b87f9c-m9tw6\" (UID: \"64cfae17-8e43-4fd9-8f7c-2f4996b6351c\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-m9tw6" Jan 26 11:19:56 crc kubenswrapper[4867]: I0126 11:19:56.738354 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/6203c5b2-2d8f-46c5-a31c-59190d111d7d-encryption-config\") pod \"apiserver-76f77b778f-jvs97\" (UID: \"6203c5b2-2d8f-46c5-a31c-59190d111d7d\") " pod="openshift-apiserver/apiserver-76f77b778f-jvs97" Jan 26 11:19:56 crc kubenswrapper[4867]: I0126 11:19:56.738377 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/a721247b-3436-4bb4-bc5c-ab4e94db0b41-oauth-serving-cert\") pod \"console-f9d7485db-dc94j\" (UID: \"a721247b-3436-4bb4-bc5c-ab4e94db0b41\") " pod="openshift-console/console-f9d7485db-dc94j" Jan 26 11:19:56 crc kubenswrapper[4867]: I0126 11:19:56.738495 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d8pn5\" (UniqueName: \"kubernetes.io/projected/6670fa93-70e2-4047-b449-1bf939336210-kube-api-access-d8pn5\") pod \"controller-manager-879f6c89f-9jc4l\" (UID: \"6670fa93-70e2-4047-b449-1bf939336210\") " pod="openshift-controller-manager/controller-manager-879f6c89f-9jc4l" Jan 26 11:19:56 crc kubenswrapper[4867]: I0126 11:19:56.738526 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/64cfae17-8e43-4fd9-8f7c-2f4996b6351c-client-ca\") pod 
\"route-controller-manager-6576b87f9c-m9tw6\" (UID: \"64cfae17-8e43-4fd9-8f7c-2f4996b6351c\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-m9tw6" Jan 26 11:19:56 crc kubenswrapper[4867]: I0126 11:19:56.738558 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/0880ba0d-8774-4012-ae45-24997c78c5ca-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-xzrd8\" (UID: \"0880ba0d-8774-4012-ae45-24997c78c5ca\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-xzrd8" Jan 26 11:19:56 crc kubenswrapper[4867]: I0126 11:19:56.738622 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ec332d74-71c9-4401-8dfa-8674dc431b82-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-vtd7b\" (UID: \"ec332d74-71c9-4401-8dfa-8674dc431b82\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-vtd7b" Jan 26 11:19:56 crc kubenswrapper[4867]: I0126 11:19:56.738648 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a4874120-574e-4f70-a7d9-5c6c91e41f41-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-nxdt6\" (UID: \"a4874120-574e-4f70-a7d9-5c6c91e41f41\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-nxdt6" Jan 26 11:19:56 crc kubenswrapper[4867]: I0126 11:19:56.738677 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/a4874120-574e-4f70-a7d9-5c6c91e41f41-audit-policies\") pod \"apiserver-7bbb656c7d-nxdt6\" (UID: \"a4874120-574e-4f70-a7d9-5c6c91e41f41\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-nxdt6" Jan 26 11:19:56 crc 
kubenswrapper[4867]: I0126 11:19:56.738702 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xjg96\" (UniqueName: \"kubernetes.io/projected/6203c5b2-2d8f-46c5-a31c-59190d111d7d-kube-api-access-xjg96\") pod \"apiserver-76f77b778f-jvs97\" (UID: \"6203c5b2-2d8f-46c5-a31c-59190d111d7d\") " pod="openshift-apiserver/apiserver-76f77b778f-jvs97" Jan 26 11:19:56 crc kubenswrapper[4867]: I0126 11:19:56.738745 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6203c5b2-2d8f-46c5-a31c-59190d111d7d-serving-cert\") pod \"apiserver-76f77b778f-jvs97\" (UID: \"6203c5b2-2d8f-46c5-a31c-59190d111d7d\") " pod="openshift-apiserver/apiserver-76f77b778f-jvs97" Jan 26 11:19:56 crc kubenswrapper[4867]: I0126 11:19:56.738768 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nbcgm\" (UniqueName: \"kubernetes.io/projected/3b2fcd86-878c-4bce-a720-460a61585e50-kube-api-access-nbcgm\") pod \"machine-approver-56656f9798-ptqs7\" (UID: \"3b2fcd86-878c-4bce-a720-460a61585e50\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-ptqs7" Jan 26 11:19:56 crc kubenswrapper[4867]: I0126 11:19:56.738793 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jpjh9\" (UniqueName: \"kubernetes.io/projected/64cfae17-8e43-4fd9-8f7c-2f4996b6351c-kube-api-access-jpjh9\") pod \"route-controller-manager-6576b87f9c-m9tw6\" (UID: \"64cfae17-8e43-4fd9-8f7c-2f4996b6351c\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-m9tw6" Jan 26 11:19:56 crc kubenswrapper[4867]: I0126 11:19:56.738817 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/a4874120-574e-4f70-a7d9-5c6c91e41f41-serving-cert\") pod \"apiserver-7bbb656c7d-nxdt6\" (UID: \"a4874120-574e-4f70-a7d9-5c6c91e41f41\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-nxdt6" Jan 26 11:19:56 crc kubenswrapper[4867]: I0126 11:19:56.738849 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/b207fdfd-306c-4494-8c1f-560dd155cd7a-images\") pod \"machine-api-operator-5694c8668f-pb5rg\" (UID: \"b207fdfd-306c-4494-8c1f-560dd155cd7a\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-pb5rg" Jan 26 11:19:56 crc kubenswrapper[4867]: I0126 11:19:56.738875 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9fmsr\" (UniqueName: \"kubernetes.io/projected/b207fdfd-306c-4494-8c1f-560dd155cd7a-kube-api-access-9fmsr\") pod \"machine-api-operator-5694c8668f-pb5rg\" (UID: \"b207fdfd-306c-4494-8c1f-560dd155cd7a\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-pb5rg" Jan 26 11:19:56 crc kubenswrapper[4867]: I0126 11:19:56.738906 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/64cfae17-8e43-4fd9-8f7c-2f4996b6351c-config\") pod \"route-controller-manager-6576b87f9c-m9tw6\" (UID: \"64cfae17-8e43-4fd9-8f7c-2f4996b6351c\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-m9tw6" Jan 26 11:19:56 crc kubenswrapper[4867]: I0126 11:19:56.738933 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/95f962fb-c0fe-4583-8d7f-cac4f22110e9-serving-cert\") pod \"openshift-config-operator-7777fb866f-9r5x7\" (UID: \"95f962fb-c0fe-4583-8d7f-cac4f22110e9\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-9r5x7" Jan 26 11:19:56 
crc kubenswrapper[4867]: I0126 11:19:56.738963 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/3b2fcd86-878c-4bce-a720-460a61585e50-machine-approver-tls\") pod \"machine-approver-56656f9798-ptqs7\" (UID: \"3b2fcd86-878c-4bce-a720-460a61585e50\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-ptqs7" Jan 26 11:19:56 crc kubenswrapper[4867]: I0126 11:19:56.738985 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/a721247b-3436-4bb4-bc5c-ab4e94db0b41-console-serving-cert\") pod \"console-f9d7485db-dc94j\" (UID: \"a721247b-3436-4bb4-bc5c-ab4e94db0b41\") " pod="openshift-console/console-f9d7485db-dc94j" Jan 26 11:19:56 crc kubenswrapper[4867]: I0126 11:19:56.739011 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/a4874120-574e-4f70-a7d9-5c6c91e41f41-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-nxdt6\" (UID: \"a4874120-574e-4f70-a7d9-5c6c91e41f41\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-nxdt6" Jan 26 11:19:56 crc kubenswrapper[4867]: I0126 11:19:56.740360 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Jan 26 11:19:56 crc kubenswrapper[4867]: I0126 11:19:56.741322 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-lg6cl"] Jan 26 11:19:56 crc kubenswrapper[4867]: I0126 11:19:56.742302 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-s6j6d"] Jan 26 11:19:56 crc kubenswrapper[4867]: I0126 11:19:56.742914 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-ksw9r" Jan 26 11:19:56 crc kubenswrapper[4867]: I0126 11:19:56.743372 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-lg6cl" Jan 26 11:19:56 crc kubenswrapper[4867]: I0126 11:19:56.743929 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-s6j6d" Jan 26 11:19:56 crc kubenswrapper[4867]: I0126 11:19:56.744290 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-9r5x7"] Jan 26 11:19:56 crc kubenswrapper[4867]: I0126 11:19:56.748308 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-l4cqk"] Jan 26 11:19:56 crc kubenswrapper[4867]: I0126 11:19:56.749445 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Jan 26 11:19:56 crc kubenswrapper[4867]: I0126 11:19:56.749573 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-l4cqk" Jan 26 11:19:56 crc kubenswrapper[4867]: I0126 11:19:56.751492 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Jan 26 11:19:56 crc kubenswrapper[4867]: I0126 11:19:56.755052 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-dc94j"] Jan 26 11:19:56 crc kubenswrapper[4867]: I0126 11:19:56.757095 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Jan 26 11:19:56 crc kubenswrapper[4867]: I0126 11:19:56.757105 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Jan 26 11:19:56 crc kubenswrapper[4867]: I0126 11:19:56.757826 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Jan 26 11:19:56 crc kubenswrapper[4867]: I0126 11:19:56.757963 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-nxdt6"] Jan 26 11:19:56 crc kubenswrapper[4867]: I0126 11:19:56.760368 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-rztlw"] Jan 26 11:19:56 crc kubenswrapper[4867]: I0126 11:19:56.761265 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-rztlw" Jan 26 11:19:56 crc kubenswrapper[4867]: I0126 11:19:56.763750 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-jvs97"] Jan 26 11:19:56 crc kubenswrapper[4867]: I0126 11:19:56.770126 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Jan 26 11:19:56 crc kubenswrapper[4867]: I0126 11:19:56.774257 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-hrqxh"] Jan 26 11:19:56 crc kubenswrapper[4867]: I0126 11:19:56.775241 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-hrqxh" Jan 26 11:19:56 crc kubenswrapper[4867]: I0126 11:19:56.781569 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-lrrdf"] Jan 26 11:19:56 crc kubenswrapper[4867]: I0126 11:19:56.782620 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-lrrdf" Jan 26 11:19:56 crc kubenswrapper[4867]: I0126 11:19:56.786220 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Jan 26 11:19:56 crc kubenswrapper[4867]: I0126 11:19:56.787291 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Jan 26 11:19:56 crc kubenswrapper[4867]: I0126 11:19:56.789298 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29490435-gd8xn"] Jan 26 11:19:56 crc kubenswrapper[4867]: I0126 11:19:56.790064 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-b2qk2"] Jan 26 11:19:56 crc kubenswrapper[4867]: I0126 11:19:56.790740 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-b2qk2" Jan 26 11:19:56 crc kubenswrapper[4867]: I0126 11:19:56.791051 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29490435-gd8xn" Jan 26 11:19:56 crc kubenswrapper[4867]: I0126 11:19:56.793212 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-6vjzt"] Jan 26 11:19:56 crc kubenswrapper[4867]: I0126 11:19:56.800501 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-6vjzt" Jan 26 11:19:56 crc kubenswrapper[4867]: I0126 11:19:56.800709 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Jan 26 11:19:56 crc kubenswrapper[4867]: I0126 11:19:56.809799 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-bs62h"] Jan 26 11:19:56 crc kubenswrapper[4867]: I0126 11:19:56.812626 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-bs62h" Jan 26 11:19:56 crc kubenswrapper[4867]: I0126 11:19:56.815312 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Jan 26 11:19:56 crc kubenswrapper[4867]: I0126 11:19:56.818900 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-xs9ns"] Jan 26 11:19:56 crc kubenswrapper[4867]: I0126 11:19:56.820006 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-xs9ns" Jan 26 11:19:56 crc kubenswrapper[4867]: I0126 11:19:56.821812 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-f7zk4"] Jan 26 11:19:56 crc kubenswrapper[4867]: I0126 11:19:56.822745 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-vtd7b"] Jan 26 11:19:56 crc kubenswrapper[4867]: I0126 11:19:56.822867 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-f7zk4" Jan 26 11:19:56 crc kubenswrapper[4867]: I0126 11:19:56.823776 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-n9jb5"] Jan 26 11:19:56 crc kubenswrapper[4867]: I0126 11:19:56.825291 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-p7cdz"] Jan 26 11:19:56 crc kubenswrapper[4867]: I0126 11:19:56.826289 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-m9tw6"] Jan 26 11:19:56 crc kubenswrapper[4867]: I0126 11:19:56.827833 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-ltvwb"] Jan 26 11:19:56 crc kubenswrapper[4867]: I0126 11:19:56.829590 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-bg5n8"] Jan 26 11:19:56 crc kubenswrapper[4867]: I0126 11:19:56.830129 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-xzrd8"] Jan 26 11:19:56 crc kubenswrapper[4867]: I0126 11:19:56.831693 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-pb5rg"] Jan 26 11:19:56 crc kubenswrapper[4867]: I0126 11:19:56.833470 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-server-qhjqn"] Jan 26 11:19:56 crc kubenswrapper[4867]: I0126 11:19:56.835305 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Jan 26 11:19:56 crc kubenswrapper[4867]: I0126 11:19:56.835438 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-g6hqf"] Jan 26 11:19:56 crc kubenswrapper[4867]: I0126 11:19:56.835464 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/dns-default-8rqgh"] Jan 26 11:19:56 crc kubenswrapper[4867]: I0126 11:19:56.835656 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-qhjqn" Jan 26 11:19:56 crc kubenswrapper[4867]: I0126 11:19:56.836786 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-4hjn2"] Jan 26 11:19:56 crc kubenswrapper[4867]: I0126 11:19:56.836843 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-8rqgh" Jan 26 11:19:56 crc kubenswrapper[4867]: I0126 11:19:56.838101 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-gn8gp"] Jan 26 11:19:56 crc kubenswrapper[4867]: I0126 11:19:56.839586 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-jczt5"] Jan 26 11:19:56 crc kubenswrapper[4867]: I0126 11:19:56.840149 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a4874120-574e-4f70-a7d9-5c6c91e41f41-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-nxdt6\" (UID: \"a4874120-574e-4f70-a7d9-5c6c91e41f41\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-nxdt6" Jan 26 11:19:56 crc kubenswrapper[4867]: I0126 11:19:56.840198 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/12eb0c01-c4f3-489f-87dd-bbc03f111814-metrics-tls\") pod \"dns-operator-744455d44c-4hjn2\" (UID: \"12eb0c01-c4f3-489f-87dd-bbc03f111814\") " 
pod="openshift-dns-operator/dns-operator-744455d44c-4hjn2" Jan 26 11:19:56 crc kubenswrapper[4867]: I0126 11:19:56.840247 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/a4874120-574e-4f70-a7d9-5c6c91e41f41-audit-policies\") pod \"apiserver-7bbb656c7d-nxdt6\" (UID: \"a4874120-574e-4f70-a7d9-5c6c91e41f41\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-nxdt6" Jan 26 11:19:56 crc kubenswrapper[4867]: I0126 11:19:56.840266 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xjg96\" (UniqueName: \"kubernetes.io/projected/6203c5b2-2d8f-46c5-a31c-59190d111d7d-kube-api-access-xjg96\") pod \"apiserver-76f77b778f-jvs97\" (UID: \"6203c5b2-2d8f-46c5-a31c-59190d111d7d\") " pod="openshift-apiserver/apiserver-76f77b778f-jvs97" Jan 26 11:19:56 crc kubenswrapper[4867]: I0126 11:19:56.840284 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6203c5b2-2d8f-46c5-a31c-59190d111d7d-serving-cert\") pod \"apiserver-76f77b778f-jvs97\" (UID: \"6203c5b2-2d8f-46c5-a31c-59190d111d7d\") " pod="openshift-apiserver/apiserver-76f77b778f-jvs97" Jan 26 11:19:56 crc kubenswrapper[4867]: I0126 11:19:56.840334 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nbcgm\" (UniqueName: \"kubernetes.io/projected/3b2fcd86-878c-4bce-a720-460a61585e50-kube-api-access-nbcgm\") pod \"machine-approver-56656f9798-ptqs7\" (UID: \"3b2fcd86-878c-4bce-a720-460a61585e50\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-ptqs7" Jan 26 11:19:56 crc kubenswrapper[4867]: I0126 11:19:56.840357 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jpjh9\" (UniqueName: \"kubernetes.io/projected/64cfae17-8e43-4fd9-8f7c-2f4996b6351c-kube-api-access-jpjh9\") pod 
\"route-controller-manager-6576b87f9c-m9tw6\" (UID: \"64cfae17-8e43-4fd9-8f7c-2f4996b6351c\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-m9tw6" Jan 26 11:19:56 crc kubenswrapper[4867]: I0126 11:19:56.840376 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/b207fdfd-306c-4494-8c1f-560dd155cd7a-images\") pod \"machine-api-operator-5694c8668f-pb5rg\" (UID: \"b207fdfd-306c-4494-8c1f-560dd155cd7a\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-pb5rg" Jan 26 11:19:56 crc kubenswrapper[4867]: I0126 11:19:56.840416 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9fmsr\" (UniqueName: \"kubernetes.io/projected/b207fdfd-306c-4494-8c1f-560dd155cd7a-kube-api-access-9fmsr\") pod \"machine-api-operator-5694c8668f-pb5rg\" (UID: \"b207fdfd-306c-4494-8c1f-560dd155cd7a\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-pb5rg" Jan 26 11:19:56 crc kubenswrapper[4867]: I0126 11:19:56.840434 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a4874120-574e-4f70-a7d9-5c6c91e41f41-serving-cert\") pod \"apiserver-7bbb656c7d-nxdt6\" (UID: \"a4874120-574e-4f70-a7d9-5c6c91e41f41\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-nxdt6" Jan 26 11:19:56 crc kubenswrapper[4867]: I0126 11:19:56.840454 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/64cfae17-8e43-4fd9-8f7c-2f4996b6351c-config\") pod \"route-controller-manager-6576b87f9c-m9tw6\" (UID: \"64cfae17-8e43-4fd9-8f7c-2f4996b6351c\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-m9tw6" Jan 26 11:19:56 crc kubenswrapper[4867]: I0126 11:19:56.840516 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"serving-cert\" (UniqueName: \"kubernetes.io/secret/27074e02-cda1-4d86-bef7-69aafc47ad94-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-fg7cn\" (UID: \"27074e02-cda1-4d86-bef7-69aafc47ad94\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-fg7cn" Jan 26 11:19:56 crc kubenswrapper[4867]: I0126 11:19:56.840535 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/3b2fcd86-878c-4bce-a720-460a61585e50-machine-approver-tls\") pod \"machine-approver-56656f9798-ptqs7\" (UID: \"3b2fcd86-878c-4bce-a720-460a61585e50\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-ptqs7" Jan 26 11:19:56 crc kubenswrapper[4867]: I0126 11:19:56.840554 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/a721247b-3436-4bb4-bc5c-ab4e94db0b41-console-serving-cert\") pod \"console-f9d7485db-dc94j\" (UID: \"a721247b-3436-4bb4-bc5c-ab4e94db0b41\") " pod="openshift-console/console-f9d7485db-dc94j" Jan 26 11:19:56 crc kubenswrapper[4867]: I0126 11:19:56.840588 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/95f962fb-c0fe-4583-8d7f-cac4f22110e9-serving-cert\") pod \"openshift-config-operator-7777fb866f-9r5x7\" (UID: \"95f962fb-c0fe-4583-8d7f-cac4f22110e9\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-9r5x7" Jan 26 11:19:56 crc kubenswrapper[4867]: I0126 11:19:56.840606 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/a4874120-574e-4f70-a7d9-5c6c91e41f41-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-nxdt6\" (UID: \"a4874120-574e-4f70-a7d9-5c6c91e41f41\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-nxdt6" Jan 26 11:19:56 crc 
kubenswrapper[4867]: I0126 11:19:56.840629 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/a91b5a18-2743-473f-8116-5fb1e348d05c-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-n9jb5\" (UID: \"a91b5a18-2743-473f-8116-5fb1e348d05c\") " pod="openshift-authentication/oauth-openshift-558db77b4-n9jb5" Jan 26 11:19:56 crc kubenswrapper[4867]: I0126 11:19:56.840665 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/6203c5b2-2d8f-46c5-a31c-59190d111d7d-audit-dir\") pod \"apiserver-76f77b778f-jvs97\" (UID: \"6203c5b2-2d8f-46c5-a31c-59190d111d7d\") " pod="openshift-apiserver/apiserver-76f77b778f-jvs97" Jan 26 11:19:56 crc kubenswrapper[4867]: I0126 11:19:56.840697 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/0880ba0d-8774-4012-ae45-24997c78c5ca-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-xzrd8\" (UID: \"0880ba0d-8774-4012-ae45-24997c78c5ca\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-xzrd8" Jan 26 11:19:56 crc kubenswrapper[4867]: I0126 11:19:56.840737 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/a4874120-574e-4f70-a7d9-5c6c91e41f41-encryption-config\") pod \"apiserver-7bbb656c7d-nxdt6\" (UID: \"a4874120-574e-4f70-a7d9-5c6c91e41f41\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-nxdt6" Jan 26 11:19:56 crc kubenswrapper[4867]: I0126 11:19:56.840762 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/a91b5a18-2743-473f-8116-5fb1e348d05c-v4-0-config-system-cliconfig\") pod 
\"oauth-openshift-558db77b4-n9jb5\" (UID: \"a91b5a18-2743-473f-8116-5fb1e348d05c\") " pod="openshift-authentication/oauth-openshift-558db77b4-n9jb5" Jan 26 11:19:56 crc kubenswrapper[4867]: I0126 11:19:56.840780 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/a721247b-3436-4bb4-bc5c-ab4e94db0b41-service-ca\") pod \"console-f9d7485db-dc94j\" (UID: \"a721247b-3436-4bb4-bc5c-ab4e94db0b41\") " pod="openshift-console/console-f9d7485db-dc94j" Jan 26 11:19:56 crc kubenswrapper[4867]: I0126 11:19:56.840816 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/6670fa93-70e2-4047-b449-1bf939336210-client-ca\") pod \"controller-manager-879f6c89f-9jc4l\" (UID: \"6670fa93-70e2-4047-b449-1bf939336210\") " pod="openshift-controller-manager/controller-manager-879f6c89f-9jc4l" Jan 26 11:19:56 crc kubenswrapper[4867]: I0126 11:19:56.840838 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/27074e02-cda1-4d86-bef7-69aafc47ad94-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-fg7cn\" (UID: \"27074e02-cda1-4d86-bef7-69aafc47ad94\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-fg7cn" Jan 26 11:19:56 crc kubenswrapper[4867]: I0126 11:19:56.840863 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ec332d74-71c9-4401-8dfa-8674dc431b82-config\") pod \"openshift-apiserver-operator-796bbdcf4f-vtd7b\" (UID: \"ec332d74-71c9-4401-8dfa-8674dc431b82\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-vtd7b" Jan 26 11:19:56 crc kubenswrapper[4867]: I0126 11:19:56.840900 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/6203c5b2-2d8f-46c5-a31c-59190d111d7d-config\") pod \"apiserver-76f77b778f-jvs97\" (UID: \"6203c5b2-2d8f-46c5-a31c-59190d111d7d\") " pod="openshift-apiserver/apiserver-76f77b778f-jvs97" Jan 26 11:19:56 crc kubenswrapper[4867]: I0126 11:19:56.840923 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/6203c5b2-2d8f-46c5-a31c-59190d111d7d-node-pullsecrets\") pod \"apiserver-76f77b778f-jvs97\" (UID: \"6203c5b2-2d8f-46c5-a31c-59190d111d7d\") " pod="openshift-apiserver/apiserver-76f77b778f-jvs97" Jan 26 11:19:56 crc kubenswrapper[4867]: I0126 11:19:56.840941 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/a91b5a18-2743-473f-8116-5fb1e348d05c-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-n9jb5\" (UID: \"a91b5a18-2743-473f-8116-5fb1e348d05c\") " pod="openshift-authentication/oauth-openshift-558db77b4-n9jb5" Jan 26 11:19:56 crc kubenswrapper[4867]: I0126 11:19:56.840977 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/a91b5a18-2743-473f-8116-5fb1e348d05c-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-n9jb5\" (UID: \"a91b5a18-2743-473f-8116-5fb1e348d05c\") " pod="openshift-authentication/oauth-openshift-558db77b4-n9jb5" Jan 26 11:19:56 crc kubenswrapper[4867]: I0126 11:19:56.841032 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/95f962fb-c0fe-4583-8d7f-cac4f22110e9-available-featuregates\") pod \"openshift-config-operator-7777fb866f-9r5x7\" (UID: \"95f962fb-c0fe-4583-8d7f-cac4f22110e9\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-9r5x7" Jan 26 11:19:56 crc 
kubenswrapper[4867]: I0126 11:19:56.841073 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/27074e02-cda1-4d86-bef7-69aafc47ad94-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-fg7cn\" (UID: \"27074e02-cda1-4d86-bef7-69aafc47ad94\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-fg7cn" Jan 26 11:19:56 crc kubenswrapper[4867]: I0126 11:19:56.841096 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/a4874120-574e-4f70-a7d9-5c6c91e41f41-audit-dir\") pod \"apiserver-7bbb656c7d-nxdt6\" (UID: \"a4874120-574e-4f70-a7d9-5c6c91e41f41\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-nxdt6" Jan 26 11:19:56 crc kubenswrapper[4867]: I0126 11:19:56.841135 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/a721247b-3436-4bb4-bc5c-ab4e94db0b41-console-config\") pod \"console-f9d7485db-dc94j\" (UID: \"a721247b-3436-4bb4-bc5c-ab4e94db0b41\") " pod="openshift-console/console-f9d7485db-dc94j" Jan 26 11:19:56 crc kubenswrapper[4867]: I0126 11:19:56.841156 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/6203c5b2-2d8f-46c5-a31c-59190d111d7d-etcd-client\") pod \"apiserver-76f77b778f-jvs97\" (UID: \"6203c5b2-2d8f-46c5-a31c-59190d111d7d\") " pod="openshift-apiserver/apiserver-76f77b778f-jvs97" Jan 26 11:19:56 crc kubenswrapper[4867]: I0126 11:19:56.841180 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/6670fa93-70e2-4047-b449-1bf939336210-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-9jc4l\" (UID: \"6670fa93-70e2-4047-b449-1bf939336210\") " 
pod="openshift-controller-manager/controller-manager-879f6c89f-9jc4l" Jan 26 11:19:56 crc kubenswrapper[4867]: I0126 11:19:56.841231 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/a91b5a18-2743-473f-8116-5fb1e348d05c-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-n9jb5\" (UID: \"a91b5a18-2743-473f-8116-5fb1e348d05c\") " pod="openshift-authentication/oauth-openshift-558db77b4-n9jb5" Jan 26 11:19:56 crc kubenswrapper[4867]: I0126 11:19:56.841250 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/6203c5b2-2d8f-46c5-a31c-59190d111d7d-audit\") pod \"apiserver-76f77b778f-jvs97\" (UID: \"6203c5b2-2d8f-46c5-a31c-59190d111d7d\") " pod="openshift-apiserver/apiserver-76f77b778f-jvs97" Jan 26 11:19:56 crc kubenswrapper[4867]: I0126 11:19:56.841266 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/a721247b-3436-4bb4-bc5c-ab4e94db0b41-console-oauth-config\") pod \"console-f9d7485db-dc94j\" (UID: \"a721247b-3436-4bb4-bc5c-ab4e94db0b41\") " pod="openshift-console/console-f9d7485db-dc94j" Jan 26 11:19:56 crc kubenswrapper[4867]: I0126 11:19:56.841309 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xmnl6\" (UniqueName: \"kubernetes.io/projected/95f962fb-c0fe-4583-8d7f-cac4f22110e9-kube-api-access-xmnl6\") pod \"openshift-config-operator-7777fb866f-9r5x7\" (UID: \"95f962fb-c0fe-4583-8d7f-cac4f22110e9\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-9r5x7" Jan 26 11:19:56 crc kubenswrapper[4867]: I0126 11:19:56.841338 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6670fa93-70e2-4047-b449-1bf939336210-serving-cert\") 
pod \"controller-manager-879f6c89f-9jc4l\" (UID: \"6670fa93-70e2-4047-b449-1bf939336210\") " pod="openshift-controller-manager/controller-manager-879f6c89f-9jc4l" Jan 26 11:19:56 crc kubenswrapper[4867]: I0126 11:19:56.841378 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a721247b-3436-4bb4-bc5c-ab4e94db0b41-trusted-ca-bundle\") pod \"console-f9d7485db-dc94j\" (UID: \"a721247b-3436-4bb4-bc5c-ab4e94db0b41\") " pod="openshift-console/console-f9d7485db-dc94j" Jan 26 11:19:56 crc kubenswrapper[4867]: I0126 11:19:56.841398 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/0880ba0d-8774-4012-ae45-24997c78c5ca-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-xzrd8\" (UID: \"0880ba0d-8774-4012-ae45-24997c78c5ca\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-xzrd8" Jan 26 11:19:56 crc kubenswrapper[4867]: I0126 11:19:56.841403 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-pgkmf"] Jan 26 11:19:56 crc kubenswrapper[4867]: I0126 11:19:56.841419 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/a91b5a18-2743-473f-8116-5fb1e348d05c-audit-policies\") pod \"oauth-openshift-558db77b4-n9jb5\" (UID: \"a91b5a18-2743-473f-8116-5fb1e348d05c\") " pod="openshift-authentication/oauth-openshift-558db77b4-n9jb5" Jan 26 11:19:56 crc kubenswrapper[4867]: I0126 11:19:56.841509 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/a91b5a18-2743-473f-8116-5fb1e348d05c-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-n9jb5\" (UID: 
\"a91b5a18-2743-473f-8116-5fb1e348d05c\") " pod="openshift-authentication/oauth-openshift-558db77b4-n9jb5" Jan 26 11:19:56 crc kubenswrapper[4867]: I0126 11:19:56.841551 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/a4874120-574e-4f70-a7d9-5c6c91e41f41-audit-dir\") pod \"apiserver-7bbb656c7d-nxdt6\" (UID: \"a4874120-574e-4f70-a7d9-5c6c91e41f41\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-nxdt6" Jan 26 11:19:56 crc kubenswrapper[4867]: I0126 11:19:56.841629 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/6203c5b2-2d8f-46c5-a31c-59190d111d7d-node-pullsecrets\") pod \"apiserver-76f77b778f-jvs97\" (UID: \"6203c5b2-2d8f-46c5-a31c-59190d111d7d\") " pod="openshift-apiserver/apiserver-76f77b778f-jvs97" Jan 26 11:19:56 crc kubenswrapper[4867]: I0126 11:19:56.842335 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/95f962fb-c0fe-4583-8d7f-cac4f22110e9-available-featuregates\") pod \"openshift-config-operator-7777fb866f-9r5x7\" (UID: \"95f962fb-c0fe-4583-8d7f-cac4f22110e9\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-9r5x7" Jan 26 11:19:56 crc kubenswrapper[4867]: I0126 11:19:56.842839 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a4874120-574e-4f70-a7d9-5c6c91e41f41-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-nxdt6\" (UID: \"a4874120-574e-4f70-a7d9-5c6c91e41f41\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-nxdt6" Jan 26 11:19:56 crc kubenswrapper[4867]: I0126 11:19:56.842908 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/a4874120-574e-4f70-a7d9-5c6c91e41f41-audit-policies\") pod 
\"apiserver-7bbb656c7d-nxdt6\" (UID: \"a4874120-574e-4f70-a7d9-5c6c91e41f41\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-nxdt6" Jan 26 11:19:56 crc kubenswrapper[4867]: I0126 11:19:56.843051 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ec332d74-71c9-4401-8dfa-8674dc431b82-config\") pod \"openshift-apiserver-operator-796bbdcf4f-vtd7b\" (UID: \"ec332d74-71c9-4401-8dfa-8674dc431b82\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-vtd7b" Jan 26 11:19:56 crc kubenswrapper[4867]: I0126 11:19:56.843153 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/b207fdfd-306c-4494-8c1f-560dd155cd7a-images\") pod \"machine-api-operator-5694c8668f-pb5rg\" (UID: \"b207fdfd-306c-4494-8c1f-560dd155cd7a\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-pb5rg" Jan 26 11:19:56 crc kubenswrapper[4867]: I0126 11:19:56.843440 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/6203c5b2-2d8f-46c5-a31c-59190d111d7d-audit\") pod \"apiserver-76f77b778f-jvs97\" (UID: \"6203c5b2-2d8f-46c5-a31c-59190d111d7d\") " pod="openshift-apiserver/apiserver-76f77b778f-jvs97" Jan 26 11:19:56 crc kubenswrapper[4867]: I0126 11:19:56.844086 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/a721247b-3436-4bb4-bc5c-ab4e94db0b41-console-config\") pod \"console-f9d7485db-dc94j\" (UID: \"a721247b-3436-4bb4-bc5c-ab4e94db0b41\") " pod="openshift-console/console-f9d7485db-dc94j" Jan 26 11:19:56 crc kubenswrapper[4867]: I0126 11:19:56.844514 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/0880ba0d-8774-4012-ae45-24997c78c5ca-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-xzrd8\" 
(UID: \"0880ba0d-8774-4012-ae45-24997c78c5ca\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-xzrd8" Jan 26 11:19:56 crc kubenswrapper[4867]: I0126 11:19:56.845286 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/6203c5b2-2d8f-46c5-a31c-59190d111d7d-audit-dir\") pod \"apiserver-76f77b778f-jvs97\" (UID: \"6203c5b2-2d8f-46c5-a31c-59190d111d7d\") " pod="openshift-apiserver/apiserver-76f77b778f-jvs97" Jan 26 11:19:56 crc kubenswrapper[4867]: I0126 11:19:56.845360 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/a91b5a18-2743-473f-8116-5fb1e348d05c-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-n9jb5\" (UID: \"a91b5a18-2743-473f-8116-5fb1e348d05c\") " pod="openshift-authentication/oauth-openshift-558db77b4-n9jb5" Jan 26 11:19:56 crc kubenswrapper[4867]: I0126 11:19:56.845429 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/3b2fcd86-878c-4bce-a720-460a61585e50-auth-proxy-config\") pod \"machine-approver-56656f9798-ptqs7\" (UID: \"3b2fcd86-878c-4bce-a720-460a61585e50\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-ptqs7" Jan 26 11:19:56 crc kubenswrapper[4867]: I0126 11:19:56.845510 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-94dvd\" (UniqueName: \"kubernetes.io/projected/ec332d74-71c9-4401-8dfa-8674dc431b82-kube-api-access-94dvd\") pod \"openshift-apiserver-operator-796bbdcf4f-vtd7b\" (UID: \"ec332d74-71c9-4401-8dfa-8674dc431b82\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-vtd7b" Jan 26 11:19:56 crc kubenswrapper[4867]: I0126 11:19:56.845578 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"etcd-client\" (UniqueName: \"kubernetes.io/secret/a4874120-574e-4f70-a7d9-5c6c91e41f41-etcd-client\") pod \"apiserver-7bbb656c7d-nxdt6\" (UID: \"a4874120-574e-4f70-a7d9-5c6c91e41f41\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-nxdt6" Jan 26 11:19:56 crc kubenswrapper[4867]: I0126 11:19:56.845591 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a721247b-3436-4bb4-bc5c-ab4e94db0b41-trusted-ca-bundle\") pod \"console-f9d7485db-dc94j\" (UID: \"a721247b-3436-4bb4-bc5c-ab4e94db0b41\") " pod="openshift-console/console-f9d7485db-dc94j" Jan 26 11:19:56 crc kubenswrapper[4867]: I0126 11:19:56.845607 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/a91b5a18-2743-473f-8116-5fb1e348d05c-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-n9jb5\" (UID: \"a91b5a18-2743-473f-8116-5fb1e348d05c\") " pod="openshift-authentication/oauth-openshift-558db77b4-n9jb5" Jan 26 11:19:56 crc kubenswrapper[4867]: I0126 11:19:56.845748 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b207fdfd-306c-4494-8c1f-560dd155cd7a-config\") pod \"machine-api-operator-5694c8668f-pb5rg\" (UID: \"b207fdfd-306c-4494-8c1f-560dd155cd7a\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-pb5rg" Jan 26 11:19:56 crc kubenswrapper[4867]: I0126 11:19:56.845751 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/64cfae17-8e43-4fd9-8f7c-2f4996b6351c-config\") pod \"route-controller-manager-6576b87f9c-m9tw6\" (UID: \"64cfae17-8e43-4fd9-8f7c-2f4996b6351c\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-m9tw6" Jan 26 11:19:56 crc kubenswrapper[4867]: I0126 11:19:56.845844 4867 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/6203c5b2-2d8f-46c5-a31c-59190d111d7d-image-import-ca\") pod \"apiserver-76f77b778f-jvs97\" (UID: \"6203c5b2-2d8f-46c5-a31c-59190d111d7d\") " pod="openshift-apiserver/apiserver-76f77b778f-jvs97" Jan 26 11:19:56 crc kubenswrapper[4867]: I0126 11:19:56.845891 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n6lpb\" (UniqueName: \"kubernetes.io/projected/a721247b-3436-4bb4-bc5c-ab4e94db0b41-kube-api-access-n6lpb\") pod \"console-f9d7485db-dc94j\" (UID: \"a721247b-3436-4bb4-bc5c-ab4e94db0b41\") " pod="openshift-console/console-f9d7485db-dc94j" Jan 26 11:19:56 crc kubenswrapper[4867]: I0126 11:19:56.845967 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6670fa93-70e2-4047-b449-1bf939336210-config\") pod \"controller-manager-879f6c89f-9jc4l\" (UID: \"6670fa93-70e2-4047-b449-1bf939336210\") " pod="openshift-controller-manager/controller-manager-879f6c89f-9jc4l" Jan 26 11:19:56 crc kubenswrapper[4867]: I0126 11:19:56.846140 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/3b2fcd86-878c-4bce-a720-460a61585e50-auth-proxy-config\") pod \"machine-approver-56656f9798-ptqs7\" (UID: \"3b2fcd86-878c-4bce-a720-460a61585e50\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-ptqs7" Jan 26 11:19:56 crc kubenswrapper[4867]: I0126 11:19:56.847003 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b207fdfd-306c-4494-8c1f-560dd155cd7a-config\") pod \"machine-api-operator-5694c8668f-pb5rg\" (UID: \"b207fdfd-306c-4494-8c1f-560dd155cd7a\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-pb5rg" Jan 26 11:19:56 crc kubenswrapper[4867]: I0126 
11:19:56.847005 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3b2fcd86-878c-4bce-a720-460a61585e50-config\") pod \"machine-approver-56656f9798-ptqs7\" (UID: \"3b2fcd86-878c-4bce-a720-460a61585e50\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-ptqs7" Jan 26 11:19:56 crc kubenswrapper[4867]: I0126 11:19:56.847184 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6670fa93-70e2-4047-b449-1bf939336210-config\") pod \"controller-manager-879f6c89f-9jc4l\" (UID: \"6670fa93-70e2-4047-b449-1bf939336210\") " pod="openshift-controller-manager/controller-manager-879f6c89f-9jc4l" Jan 26 11:19:56 crc kubenswrapper[4867]: I0126 11:19:56.847566 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-canary/ingress-canary-nmb9m"] Jan 26 11:19:56 crc kubenswrapper[4867]: I0126 11:19:56.847593 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/6203c5b2-2d8f-46c5-a31c-59190d111d7d-image-import-ca\") pod \"apiserver-76f77b778f-jvs97\" (UID: \"6203c5b2-2d8f-46c5-a31c-59190d111d7d\") " pod="openshift-apiserver/apiserver-76f77b778f-jvs97" Jan 26 11:19:56 crc kubenswrapper[4867]: I0126 11:19:56.847717 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/a4874120-574e-4f70-a7d9-5c6c91e41f41-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-nxdt6\" (UID: \"a4874120-574e-4f70-a7d9-5c6c91e41f41\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-nxdt6" Jan 26 11:19:56 crc kubenswrapper[4867]: I0126 11:19:56.847197 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/b207fdfd-306c-4494-8c1f-560dd155cd7a-machine-api-operator-tls\") pod 
\"machine-api-operator-5694c8668f-pb5rg\" (UID: \"b207fdfd-306c-4494-8c1f-560dd155cd7a\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-pb5rg" Jan 26 11:19:56 crc kubenswrapper[4867]: I0126 11:19:56.848239 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6203c5b2-2d8f-46c5-a31c-59190d111d7d-trusted-ca-bundle\") pod \"apiserver-76f77b778f-jvs97\" (UID: \"6203c5b2-2d8f-46c5-a31c-59190d111d7d\") " pod="openshift-apiserver/apiserver-76f77b778f-jvs97" Jan 26 11:19:56 crc kubenswrapper[4867]: I0126 11:19:56.848376 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kctg5\" (UniqueName: \"kubernetes.io/projected/0880ba0d-8774-4012-ae45-24997c78c5ca-kube-api-access-kctg5\") pod \"cluster-image-registry-operator-dc59b4c8b-xzrd8\" (UID: \"0880ba0d-8774-4012-ae45-24997c78c5ca\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-xzrd8" Jan 26 11:19:56 crc kubenswrapper[4867]: I0126 11:19:56.848529 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/a91b5a18-2743-473f-8116-5fb1e348d05c-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-n9jb5\" (UID: \"a91b5a18-2743-473f-8116-5fb1e348d05c\") " pod="openshift-authentication/oauth-openshift-558db77b4-n9jb5" Jan 26 11:19:56 crc kubenswrapper[4867]: I0126 11:19:56.848716 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fjlgh\" (UniqueName: \"kubernetes.io/projected/a91b5a18-2743-473f-8116-5fb1e348d05c-kube-api-access-fjlgh\") pod \"oauth-openshift-558db77b4-n9jb5\" (UID: \"a91b5a18-2743-473f-8116-5fb1e348d05c\") " pod="openshift-authentication/oauth-openshift-558db77b4-n9jb5" Jan 26 11:19:56 crc kubenswrapper[4867]: I0126 11:19:56.848810 4867 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8982t\" (UniqueName: \"kubernetes.io/projected/a4874120-574e-4f70-a7d9-5c6c91e41f41-kube-api-access-8982t\") pod \"apiserver-7bbb656c7d-nxdt6\" (UID: \"a4874120-574e-4f70-a7d9-5c6c91e41f41\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-nxdt6" Jan 26 11:19:56 crc kubenswrapper[4867]: I0126 11:19:56.848893 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/64cfae17-8e43-4fd9-8f7c-2f4996b6351c-serving-cert\") pod \"route-controller-manager-6576b87f9c-m9tw6\" (UID: \"64cfae17-8e43-4fd9-8f7c-2f4996b6351c\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-m9tw6" Jan 26 11:19:56 crc kubenswrapper[4867]: I0126 11:19:56.848301 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3b2fcd86-878c-4bce-a720-460a61585e50-config\") pod \"machine-approver-56656f9798-ptqs7\" (UID: \"3b2fcd86-878c-4bce-a720-460a61585e50\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-ptqs7" Jan 26 11:19:56 crc kubenswrapper[4867]: I0126 11:19:56.849500 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6203c5b2-2d8f-46c5-a31c-59190d111d7d-trusted-ca-bundle\") pod \"apiserver-76f77b778f-jvs97\" (UID: \"6203c5b2-2d8f-46c5-a31c-59190d111d7d\") " pod="openshift-apiserver/apiserver-76f77b778f-jvs97" Jan 26 11:19:56 crc kubenswrapper[4867]: I0126 11:19:56.849645 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/a91b5a18-2743-473f-8116-5fb1e348d05c-audit-dir\") pod \"oauth-openshift-558db77b4-n9jb5\" (UID: \"a91b5a18-2743-473f-8116-5fb1e348d05c\") " pod="openshift-authentication/oauth-openshift-558db77b4-n9jb5" Jan 26 
11:19:56 crc kubenswrapper[4867]: I0126 11:19:56.849743 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/6203c5b2-2d8f-46c5-a31c-59190d111d7d-etcd-serving-ca\") pod \"apiserver-76f77b778f-jvs97\" (UID: \"6203c5b2-2d8f-46c5-a31c-59190d111d7d\") " pod="openshift-apiserver/apiserver-76f77b778f-jvs97" Jan 26 11:19:56 crc kubenswrapper[4867]: I0126 11:19:56.849828 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d8pn5\" (UniqueName: \"kubernetes.io/projected/6670fa93-70e2-4047-b449-1bf939336210-kube-api-access-d8pn5\") pod \"controller-manager-879f6c89f-9jc4l\" (UID: \"6670fa93-70e2-4047-b449-1bf939336210\") " pod="openshift-controller-manager/controller-manager-879f6c89f-9jc4l" Jan 26 11:19:56 crc kubenswrapper[4867]: I0126 11:19:56.849909 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/64cfae17-8e43-4fd9-8f7c-2f4996b6351c-client-ca\") pod \"route-controller-manager-6576b87f9c-m9tw6\" (UID: \"64cfae17-8e43-4fd9-8f7c-2f4996b6351c\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-m9tw6" Jan 26 11:19:56 crc kubenswrapper[4867]: I0126 11:19:56.849995 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/0880ba0d-8774-4012-ae45-24997c78c5ca-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-xzrd8\" (UID: \"0880ba0d-8774-4012-ae45-24997c78c5ca\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-xzrd8" Jan 26 11:19:56 crc kubenswrapper[4867]: I0126 11:19:56.850078 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b4wnk\" (UniqueName: 
\"kubernetes.io/projected/12eb0c01-c4f3-489f-87dd-bbc03f111814-kube-api-access-b4wnk\") pod \"dns-operator-744455d44c-4hjn2\" (UID: \"12eb0c01-c4f3-489f-87dd-bbc03f111814\") " pod="openshift-dns-operator/dns-operator-744455d44c-4hjn2" Jan 26 11:19:56 crc kubenswrapper[4867]: I0126 11:19:56.850161 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/6203c5b2-2d8f-46c5-a31c-59190d111d7d-encryption-config\") pod \"apiserver-76f77b778f-jvs97\" (UID: \"6203c5b2-2d8f-46c5-a31c-59190d111d7d\") " pod="openshift-apiserver/apiserver-76f77b778f-jvs97" Jan 26 11:19:56 crc kubenswrapper[4867]: I0126 11:19:56.850283 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/a721247b-3436-4bb4-bc5c-ab4e94db0b41-oauth-serving-cert\") pod \"console-f9d7485db-dc94j\" (UID: \"a721247b-3436-4bb4-bc5c-ab4e94db0b41\") " pod="openshift-console/console-f9d7485db-dc94j" Jan 26 11:19:56 crc kubenswrapper[4867]: I0126 11:19:56.850362 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/a91b5a18-2743-473f-8116-5fb1e348d05c-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-n9jb5\" (UID: \"a91b5a18-2743-473f-8116-5fb1e348d05c\") " pod="openshift-authentication/oauth-openshift-558db77b4-n9jb5" Jan 26 11:19:56 crc kubenswrapper[4867]: I0126 11:19:56.850457 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a91b5a18-2743-473f-8116-5fb1e348d05c-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-n9jb5\" (UID: \"a91b5a18-2743-473f-8116-5fb1e348d05c\") " pod="openshift-authentication/oauth-openshift-558db77b4-n9jb5" Jan 26 11:19:56 crc 
kubenswrapper[4867]: I0126 11:19:56.850588 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ec332d74-71c9-4401-8dfa-8674dc431b82-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-vtd7b\" (UID: \"ec332d74-71c9-4401-8dfa-8674dc431b82\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-vtd7b" Jan 26 11:19:56 crc kubenswrapper[4867]: I0126 11:19:56.850712 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/64cfae17-8e43-4fd9-8f7c-2f4996b6351c-client-ca\") pod \"route-controller-manager-6576b87f9c-m9tw6\" (UID: \"64cfae17-8e43-4fd9-8f7c-2f4996b6351c\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-m9tw6" Jan 26 11:19:56 crc kubenswrapper[4867]: I0126 11:19:56.850764 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-s6j6d"] Jan 26 11:19:56 crc kubenswrapper[4867]: I0126 11:19:56.850367 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/6203c5b2-2d8f-46c5-a31c-59190d111d7d-etcd-serving-ca\") pod \"apiserver-76f77b778f-jvs97\" (UID: \"6203c5b2-2d8f-46c5-a31c-59190d111d7d\") " pod="openshift-apiserver/apiserver-76f77b778f-jvs97" Jan 26 11:19:56 crc kubenswrapper[4867]: I0126 11:19:56.850037 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-canary/ingress-canary-nmb9m" Jan 26 11:19:56 crc kubenswrapper[4867]: I0126 11:19:56.851026 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/a721247b-3436-4bb4-bc5c-ab4e94db0b41-console-serving-cert\") pod \"console-f9d7485db-dc94j\" (UID: \"a721247b-3436-4bb4-bc5c-ab4e94db0b41\") " pod="openshift-console/console-f9d7485db-dc94j" Jan 26 11:19:56 crc kubenswrapper[4867]: I0126 11:19:56.851265 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/b207fdfd-306c-4494-8c1f-560dd155cd7a-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-pb5rg\" (UID: \"b207fdfd-306c-4494-8c1f-560dd155cd7a\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-pb5rg" Jan 26 11:19:56 crc kubenswrapper[4867]: I0126 11:19:56.851328 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/6670fa93-70e2-4047-b449-1bf939336210-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-9jc4l\" (UID: \"6670fa93-70e2-4047-b449-1bf939336210\") " pod="openshift-controller-manager/controller-manager-879f6c89f-9jc4l" Jan 26 11:19:56 crc kubenswrapper[4867]: I0126 11:19:56.851484 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/a4874120-574e-4f70-a7d9-5c6c91e41f41-etcd-client\") pod \"apiserver-7bbb656c7d-nxdt6\" (UID: \"a4874120-574e-4f70-a7d9-5c6c91e41f41\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-nxdt6" Jan 26 11:19:56 crc kubenswrapper[4867]: I0126 11:19:56.851834 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-l4cqk"] Jan 26 11:19:56 crc kubenswrapper[4867]: I0126 11:19:56.852912 4867 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/a721247b-3436-4bb4-bc5c-ab4e94db0b41-service-ca\") pod \"console-f9d7485db-dc94j\" (UID: \"a721247b-3436-4bb4-bc5c-ab4e94db0b41\") " pod="openshift-console/console-f9d7485db-dc94j" Jan 26 11:19:56 crc kubenswrapper[4867]: I0126 11:19:56.852980 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-hrqxh"] Jan 26 11:19:56 crc kubenswrapper[4867]: I0126 11:19:56.853383 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/a721247b-3436-4bb4-bc5c-ab4e94db0b41-oauth-serving-cert\") pod \"console-f9d7485db-dc94j\" (UID: \"a721247b-3436-4bb4-bc5c-ab4e94db0b41\") " pod="openshift-console/console-f9d7485db-dc94j" Jan 26 11:19:56 crc kubenswrapper[4867]: I0126 11:19:56.853583 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/0880ba0d-8774-4012-ae45-24997c78c5ca-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-xzrd8\" (UID: \"0880ba0d-8774-4012-ae45-24997c78c5ca\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-xzrd8" Jan 26 11:19:56 crc kubenswrapper[4867]: I0126 11:19:56.854085 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/a4874120-574e-4f70-a7d9-5c6c91e41f41-encryption-config\") pod \"apiserver-7bbb656c7d-nxdt6\" (UID: \"a4874120-574e-4f70-a7d9-5c6c91e41f41\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-nxdt6" Jan 26 11:19:56 crc kubenswrapper[4867]: I0126 11:19:56.854090 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-ksw9r"] Jan 26 11:19:56 crc kubenswrapper[4867]: I0126 11:19:56.854647 4867 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/95f962fb-c0fe-4583-8d7f-cac4f22110e9-serving-cert\") pod \"openshift-config-operator-7777fb866f-9r5x7\" (UID: \"95f962fb-c0fe-4583-8d7f-cac4f22110e9\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-9r5x7" Jan 26 11:19:56 crc kubenswrapper[4867]: I0126 11:19:56.855118 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Jan 26 11:19:56 crc kubenswrapper[4867]: I0126 11:19:56.855358 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-f7zk4"] Jan 26 11:19:56 crc kubenswrapper[4867]: I0126 11:19:56.855495 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ec332d74-71c9-4401-8dfa-8674dc431b82-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-vtd7b\" (UID: \"ec332d74-71c9-4401-8dfa-8674dc431b82\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-vtd7b" Jan 26 11:19:56 crc kubenswrapper[4867]: I0126 11:19:56.855293 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a4874120-574e-4f70-a7d9-5c6c91e41f41-serving-cert\") pod \"apiserver-7bbb656c7d-nxdt6\" (UID: \"a4874120-574e-4f70-a7d9-5c6c91e41f41\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-nxdt6" Jan 26 11:19:56 crc kubenswrapper[4867]: I0126 11:19:56.857053 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/64cfae17-8e43-4fd9-8f7c-2f4996b6351c-serving-cert\") pod \"route-controller-manager-6576b87f9c-m9tw6\" (UID: \"64cfae17-8e43-4fd9-8f7c-2f4996b6351c\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-m9tw6" Jan 26 11:19:56 crc kubenswrapper[4867]: I0126 11:19:56.857395 4867 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/3b2fcd86-878c-4bce-a720-460a61585e50-machine-approver-tls\") pod \"machine-approver-56656f9798-ptqs7\" (UID: \"3b2fcd86-878c-4bce-a720-460a61585e50\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-ptqs7" Jan 26 11:19:56 crc kubenswrapper[4867]: I0126 11:19:56.857515 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/6203c5b2-2d8f-46c5-a31c-59190d111d7d-encryption-config\") pod \"apiserver-76f77b778f-jvs97\" (UID: \"6203c5b2-2d8f-46c5-a31c-59190d111d7d\") " pod="openshift-apiserver/apiserver-76f77b778f-jvs97" Jan 26 11:19:56 crc kubenswrapper[4867]: I0126 11:19:56.857847 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29490435-gd8xn"] Jan 26 11:19:56 crc kubenswrapper[4867]: I0126 11:19:56.859766 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-lg6cl"] Jan 26 11:19:56 crc kubenswrapper[4867]: I0126 11:19:56.861658 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-m22zg"] Jan 26 11:19:56 crc kubenswrapper[4867]: I0126 11:19:56.863005 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-skdxp"] Jan 26 11:19:56 crc kubenswrapper[4867]: I0126 11:19:56.864469 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-bmqm4"] Jan 26 11:19:56 crc kubenswrapper[4867]: I0126 11:19:56.866111 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-b2qk2"] Jan 26 11:19:56 crc kubenswrapper[4867]: I0126 11:19:56.866783 4867 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/a721247b-3436-4bb4-bc5c-ab4e94db0b41-console-oauth-config\") pod \"console-f9d7485db-dc94j\" (UID: \"a721247b-3436-4bb4-bc5c-ab4e94db0b41\") " pod="openshift-console/console-f9d7485db-dc94j" Jan 26 11:19:56 crc kubenswrapper[4867]: I0126 11:19:56.867320 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-zvpfm"] Jan 26 11:19:56 crc kubenswrapper[4867]: I0126 11:19:56.868590 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-fg7cn"] Jan 26 11:19:56 crc kubenswrapper[4867]: I0126 11:19:56.868796 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-zvpfm" Jan 26 11:19:56 crc kubenswrapper[4867]: I0126 11:19:56.869506 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-6bj4j"] Jan 26 11:19:56 crc kubenswrapper[4867]: I0126 11:19:56.870625 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-lrrdf"] Jan 26 11:19:56 crc kubenswrapper[4867]: I0126 11:19:56.871718 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-xs9ns"] Jan 26 11:19:56 crc kubenswrapper[4867]: I0126 11:19:56.872860 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-rztlw"] Jan 26 11:19:56 crc kubenswrapper[4867]: I0126 11:19:56.874318 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-8rqgh"] Jan 26 11:19:56 crc kubenswrapper[4867]: I0126 11:19:56.874851 4867 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-etcd-operator"/"etcd-client" Jan 26 11:19:56 crc kubenswrapper[4867]: I0126 11:19:56.877121 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-nmb9m"] Jan 26 11:19:56 crc kubenswrapper[4867]: I0126 11:19:56.878580 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-bs62h"] Jan 26 11:19:56 crc kubenswrapper[4867]: I0126 11:19:56.879631 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-6vjzt"] Jan 26 11:19:56 crc kubenswrapper[4867]: I0126 11:19:56.881015 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-zvpfm"] Jan 26 11:19:56 crc kubenswrapper[4867]: I0126 11:19:56.894735 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Jan 26 11:19:56 crc kubenswrapper[4867]: I0126 11:19:56.915048 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Jan 26 11:19:56 crc kubenswrapper[4867]: I0126 11:19:56.944704 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Jan 26 11:19:56 crc kubenswrapper[4867]: I0126 11:19:56.952308 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/a91b5a18-2743-473f-8116-5fb1e348d05c-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-n9jb5\" (UID: \"a91b5a18-2743-473f-8116-5fb1e348d05c\") " pod="openshift-authentication/oauth-openshift-558db77b4-n9jb5" Jan 26 11:19:56 crc kubenswrapper[4867]: I0126 11:19:56.952373 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: 
\"kubernetes.io/secret/a91b5a18-2743-473f-8116-5fb1e348d05c-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-n9jb5\" (UID: \"a91b5a18-2743-473f-8116-5fb1e348d05c\") " pod="openshift-authentication/oauth-openshift-558db77b4-n9jb5" Jan 26 11:19:56 crc kubenswrapper[4867]: I0126 11:19:56.953003 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/27074e02-cda1-4d86-bef7-69aafc47ad94-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-fg7cn\" (UID: \"27074e02-cda1-4d86-bef7-69aafc47ad94\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-fg7cn" Jan 26 11:19:56 crc kubenswrapper[4867]: I0126 11:19:56.953042 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/a91b5a18-2743-473f-8116-5fb1e348d05c-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-n9jb5\" (UID: \"a91b5a18-2743-473f-8116-5fb1e348d05c\") " pod="openshift-authentication/oauth-openshift-558db77b4-n9jb5" Jan 26 11:19:56 crc kubenswrapper[4867]: I0126 11:19:56.953326 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/a91b5a18-2743-473f-8116-5fb1e348d05c-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-n9jb5\" (UID: \"a91b5a18-2743-473f-8116-5fb1e348d05c\") " pod="openshift-authentication/oauth-openshift-558db77b4-n9jb5" Jan 26 11:19:56 crc kubenswrapper[4867]: I0126 11:19:56.953353 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/a91b5a18-2743-473f-8116-5fb1e348d05c-audit-policies\") pod \"oauth-openshift-558db77b4-n9jb5\" (UID: \"a91b5a18-2743-473f-8116-5fb1e348d05c\") " pod="openshift-authentication/oauth-openshift-558db77b4-n9jb5" 
Jan 26 11:19:56 crc kubenswrapper[4867]: I0126 11:19:56.953437 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/a91b5a18-2743-473f-8116-5fb1e348d05c-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-n9jb5\" (UID: \"a91b5a18-2743-473f-8116-5fb1e348d05c\") " pod="openshift-authentication/oauth-openshift-558db77b4-n9jb5" Jan 26 11:19:56 crc kubenswrapper[4867]: I0126 11:19:56.953462 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/a91b5a18-2743-473f-8116-5fb1e348d05c-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-n9jb5\" (UID: \"a91b5a18-2743-473f-8116-5fb1e348d05c\") " pod="openshift-authentication/oauth-openshift-558db77b4-n9jb5" Jan 26 11:19:56 crc kubenswrapper[4867]: I0126 11:19:56.953755 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/a91b5a18-2743-473f-8116-5fb1e348d05c-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-n9jb5\" (UID: \"a91b5a18-2743-473f-8116-5fb1e348d05c\") " pod="openshift-authentication/oauth-openshift-558db77b4-n9jb5" Jan 26 11:19:56 crc kubenswrapper[4867]: I0126 11:19:56.953783 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fjlgh\" (UniqueName: \"kubernetes.io/projected/a91b5a18-2743-473f-8116-5fb1e348d05c-kube-api-access-fjlgh\") pod \"oauth-openshift-558db77b4-n9jb5\" (UID: \"a91b5a18-2743-473f-8116-5fb1e348d05c\") " pod="openshift-authentication/oauth-openshift-558db77b4-n9jb5" Jan 26 11:19:56 crc kubenswrapper[4867]: I0126 11:19:56.953811 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: 
\"kubernetes.io/host-path/a91b5a18-2743-473f-8116-5fb1e348d05c-audit-dir\") pod \"oauth-openshift-558db77b4-n9jb5\" (UID: \"a91b5a18-2743-473f-8116-5fb1e348d05c\") " pod="openshift-authentication/oauth-openshift-558db77b4-n9jb5" Jan 26 11:19:56 crc kubenswrapper[4867]: I0126 11:19:56.953833 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b4wnk\" (UniqueName: \"kubernetes.io/projected/12eb0c01-c4f3-489f-87dd-bbc03f111814-kube-api-access-b4wnk\") pod \"dns-operator-744455d44c-4hjn2\" (UID: \"12eb0c01-c4f3-489f-87dd-bbc03f111814\") " pod="openshift-dns-operator/dns-operator-744455d44c-4hjn2" Jan 26 11:19:56 crc kubenswrapper[4867]: I0126 11:19:56.953862 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a91b5a18-2743-473f-8116-5fb1e348d05c-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-n9jb5\" (UID: \"a91b5a18-2743-473f-8116-5fb1e348d05c\") " pod="openshift-authentication/oauth-openshift-558db77b4-n9jb5" Jan 26 11:19:56 crc kubenswrapper[4867]: I0126 11:19:56.953909 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/a91b5a18-2743-473f-8116-5fb1e348d05c-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-n9jb5\" (UID: \"a91b5a18-2743-473f-8116-5fb1e348d05c\") " pod="openshift-authentication/oauth-openshift-558db77b4-n9jb5" Jan 26 11:19:56 crc kubenswrapper[4867]: I0126 11:19:56.953959 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/12eb0c01-c4f3-489f-87dd-bbc03f111814-metrics-tls\") pod \"dns-operator-744455d44c-4hjn2\" (UID: \"12eb0c01-c4f3-489f-87dd-bbc03f111814\") " pod="openshift-dns-operator/dns-operator-744455d44c-4hjn2" Jan 26 11:19:56 crc kubenswrapper[4867]: 
I0126 11:19:56.954074 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/27074e02-cda1-4d86-bef7-69aafc47ad94-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-fg7cn\" (UID: \"27074e02-cda1-4d86-bef7-69aafc47ad94\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-fg7cn" Jan 26 11:19:56 crc kubenswrapper[4867]: I0126 11:19:56.954096 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/a91b5a18-2743-473f-8116-5fb1e348d05c-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-n9jb5\" (UID: \"a91b5a18-2743-473f-8116-5fb1e348d05c\") " pod="openshift-authentication/oauth-openshift-558db77b4-n9jb5" Jan 26 11:19:56 crc kubenswrapper[4867]: I0126 11:19:56.954120 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/a91b5a18-2743-473f-8116-5fb1e348d05c-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-n9jb5\" (UID: \"a91b5a18-2743-473f-8116-5fb1e348d05c\") " pod="openshift-authentication/oauth-openshift-558db77b4-n9jb5" Jan 26 11:19:56 crc kubenswrapper[4867]: I0126 11:19:56.954151 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/27074e02-cda1-4d86-bef7-69aafc47ad94-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-fg7cn\" (UID: \"27074e02-cda1-4d86-bef7-69aafc47ad94\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-fg7cn" Jan 26 11:19:56 crc kubenswrapper[4867]: I0126 11:19:56.954147 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/a91b5a18-2743-473f-8116-5fb1e348d05c-audit-dir\") pod 
\"oauth-openshift-558db77b4-n9jb5\" (UID: \"a91b5a18-2743-473f-8116-5fb1e348d05c\") " pod="openshift-authentication/oauth-openshift-558db77b4-n9jb5" Jan 26 11:19:56 crc kubenswrapper[4867]: I0126 11:19:56.955535 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/a91b5a18-2743-473f-8116-5fb1e348d05c-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-n9jb5\" (UID: \"a91b5a18-2743-473f-8116-5fb1e348d05c\") " pod="openshift-authentication/oauth-openshift-558db77b4-n9jb5" Jan 26 11:19:56 crc kubenswrapper[4867]: I0126 11:19:56.955548 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/a91b5a18-2743-473f-8116-5fb1e348d05c-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-n9jb5\" (UID: \"a91b5a18-2743-473f-8116-5fb1e348d05c\") " pod="openshift-authentication/oauth-openshift-558db77b4-n9jb5" Jan 26 11:19:56 crc kubenswrapper[4867]: I0126 11:19:56.955947 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Jan 26 11:19:56 crc kubenswrapper[4867]: I0126 11:19:56.955950 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/a91b5a18-2743-473f-8116-5fb1e348d05c-audit-policies\") pod \"oauth-openshift-558db77b4-n9jb5\" (UID: \"a91b5a18-2743-473f-8116-5fb1e348d05c\") " pod="openshift-authentication/oauth-openshift-558db77b4-n9jb5" Jan 26 11:19:56 crc kubenswrapper[4867]: I0126 11:19:56.956265 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a91b5a18-2743-473f-8116-5fb1e348d05c-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-n9jb5\" (UID: 
\"a91b5a18-2743-473f-8116-5fb1e348d05c\") " pod="openshift-authentication/oauth-openshift-558db77b4-n9jb5" Jan 26 11:19:56 crc kubenswrapper[4867]: I0126 11:19:56.956483 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/a91b5a18-2743-473f-8116-5fb1e348d05c-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-n9jb5\" (UID: \"a91b5a18-2743-473f-8116-5fb1e348d05c\") " pod="openshift-authentication/oauth-openshift-558db77b4-n9jb5" Jan 26 11:19:56 crc kubenswrapper[4867]: I0126 11:19:56.956750 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/a91b5a18-2743-473f-8116-5fb1e348d05c-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-n9jb5\" (UID: \"a91b5a18-2743-473f-8116-5fb1e348d05c\") " pod="openshift-authentication/oauth-openshift-558db77b4-n9jb5" Jan 26 11:19:56 crc kubenswrapper[4867]: I0126 11:19:56.957350 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/a91b5a18-2743-473f-8116-5fb1e348d05c-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-n9jb5\" (UID: \"a91b5a18-2743-473f-8116-5fb1e348d05c\") " pod="openshift-authentication/oauth-openshift-558db77b4-n9jb5" Jan 26 11:19:56 crc kubenswrapper[4867]: I0126 11:19:56.958776 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/a91b5a18-2743-473f-8116-5fb1e348d05c-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-n9jb5\" (UID: \"a91b5a18-2743-473f-8116-5fb1e348d05c\") " pod="openshift-authentication/oauth-openshift-558db77b4-n9jb5" Jan 26 11:19:56 crc kubenswrapper[4867]: I0126 11:19:56.960049 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" 
(UniqueName: \"kubernetes.io/secret/12eb0c01-c4f3-489f-87dd-bbc03f111814-metrics-tls\") pod \"dns-operator-744455d44c-4hjn2\" (UID: \"12eb0c01-c4f3-489f-87dd-bbc03f111814\") " pod="openshift-dns-operator/dns-operator-744455d44c-4hjn2" Jan 26 11:19:56 crc kubenswrapper[4867]: I0126 11:19:56.960112 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/a91b5a18-2743-473f-8116-5fb1e348d05c-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-n9jb5\" (UID: \"a91b5a18-2743-473f-8116-5fb1e348d05c\") " pod="openshift-authentication/oauth-openshift-558db77b4-n9jb5" Jan 26 11:19:56 crc kubenswrapper[4867]: I0126 11:19:56.960644 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/a91b5a18-2743-473f-8116-5fb1e348d05c-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-n9jb5\" (UID: \"a91b5a18-2743-473f-8116-5fb1e348d05c\") " pod="openshift-authentication/oauth-openshift-558db77b4-n9jb5" Jan 26 11:19:56 crc kubenswrapper[4867]: I0126 11:19:56.961722 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/a91b5a18-2743-473f-8116-5fb1e348d05c-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-n9jb5\" (UID: \"a91b5a18-2743-473f-8116-5fb1e348d05c\") " pod="openshift-authentication/oauth-openshift-558db77b4-n9jb5" Jan 26 11:19:56 crc kubenswrapper[4867]: I0126 11:19:56.961995 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/a91b5a18-2743-473f-8116-5fb1e348d05c-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-n9jb5\" (UID: \"a91b5a18-2743-473f-8116-5fb1e348d05c\") " 
pod="openshift-authentication/oauth-openshift-558db77b4-n9jb5" Jan 26 11:19:56 crc kubenswrapper[4867]: I0126 11:19:56.975917 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Jan 26 11:19:56 crc kubenswrapper[4867]: I0126 11:19:56.995280 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Jan 26 11:19:57 crc kubenswrapper[4867]: I0126 11:19:57.016054 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Jan 26 11:19:57 crc kubenswrapper[4867]: I0126 11:19:57.035196 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Jan 26 11:19:57 crc kubenswrapper[4867]: I0126 11:19:57.055432 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Jan 26 11:19:57 crc kubenswrapper[4867]: I0126 11:19:57.075558 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Jan 26 11:19:57 crc kubenswrapper[4867]: I0126 11:19:57.095341 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Jan 26 11:19:57 crc kubenswrapper[4867]: I0126 11:19:57.114901 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Jan 26 11:19:57 crc kubenswrapper[4867]: I0126 11:19:57.135332 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Jan 26 11:19:57 crc kubenswrapper[4867]: I0126 11:19:57.156013 4867 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Jan 26 11:19:57 crc kubenswrapper[4867]: I0126 11:19:57.174724 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Jan 26 11:19:57 crc kubenswrapper[4867]: I0126 11:19:57.195610 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Jan 26 11:19:57 crc kubenswrapper[4867]: I0126 11:19:57.215317 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Jan 26 11:19:57 crc kubenswrapper[4867]: I0126 11:19:57.236676 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Jan 26 11:19:57 crc kubenswrapper[4867]: I0126 11:19:57.249065 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/27074e02-cda1-4d86-bef7-69aafc47ad94-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-fg7cn\" (UID: \"27074e02-cda1-4d86-bef7-69aafc47ad94\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-fg7cn" Jan 26 11:19:57 crc kubenswrapper[4867]: I0126 11:19:57.255756 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Jan 26 11:19:57 crc kubenswrapper[4867]: I0126 11:19:57.276115 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Jan 26 11:19:57 crc kubenswrapper[4867]: I0126 11:19:57.296087 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Jan 26 11:19:57 crc kubenswrapper[4867]: 
I0126 11:19:57.306848 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/27074e02-cda1-4d86-bef7-69aafc47ad94-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-fg7cn\" (UID: \"27074e02-cda1-4d86-bef7-69aafc47ad94\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-fg7cn" Jan 26 11:19:57 crc kubenswrapper[4867]: I0126 11:19:57.315510 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Jan 26 11:19:57 crc kubenswrapper[4867]: I0126 11:19:57.335342 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Jan 26 11:19:57 crc kubenswrapper[4867]: I0126 11:19:57.356257 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Jan 26 11:19:57 crc kubenswrapper[4867]: I0126 11:19:57.375298 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Jan 26 11:19:57 crc kubenswrapper[4867]: I0126 11:19:57.395684 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Jan 26 11:19:57 crc kubenswrapper[4867]: I0126 11:19:57.415094 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Jan 26 11:19:57 crc kubenswrapper[4867]: I0126 11:19:57.436060 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Jan 26 11:19:57 crc kubenswrapper[4867]: I0126 11:19:57.456250 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Jan 26 11:19:57 crc kubenswrapper[4867]: I0126 11:19:57.476047 4867 reflector.go:368] 
Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Jan 26 11:19:57 crc kubenswrapper[4867]: I0126 11:19:57.495995 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Jan 26 11:19:57 crc kubenswrapper[4867]: I0126 11:19:57.514955 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Jan 26 11:19:57 crc kubenswrapper[4867]: I0126 11:19:57.575156 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Jan 26 11:19:57 crc kubenswrapper[4867]: I0126 11:19:57.595246 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Jan 26 11:19:57 crc kubenswrapper[4867]: I0126 11:19:57.614500 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Jan 26 11:19:57 crc kubenswrapper[4867]: I0126 11:19:57.636976 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Jan 26 11:19:57 crc kubenswrapper[4867]: I0126 11:19:57.655572 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Jan 26 11:19:57 crc kubenswrapper[4867]: I0126 11:19:57.675409 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Jan 26 11:19:57 crc kubenswrapper[4867]: I0126 11:19:57.695123 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Jan 26 11:19:57 crc kubenswrapper[4867]: I0126 11:19:57.714513 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Jan 26 11:19:57 crc kubenswrapper[4867]: I0126 
11:19:57.735522 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Jan 26 11:19:57 crc kubenswrapper[4867]: I0126 11:19:57.753627 4867 request.go:700] Waited for 1.009360699s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-multus/secrets?fieldSelector=metadata.name%3Dmultus-ac-dockercfg-9lkdf&limit=500&resourceVersion=0 Jan 26 11:19:57 crc kubenswrapper[4867]: I0126 11:19:57.755729 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Jan 26 11:19:57 crc kubenswrapper[4867]: I0126 11:19:57.775095 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Jan 26 11:19:57 crc kubenswrapper[4867]: I0126 11:19:57.794912 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Jan 26 11:19:57 crc kubenswrapper[4867]: I0126 11:19:57.814573 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Jan 26 11:19:57 crc kubenswrapper[4867]: I0126 11:19:57.835323 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Jan 26 11:19:57 crc kubenswrapper[4867]: E0126 11:19:57.841310 4867 secret.go:188] Couldn't get secret openshift-apiserver/serving-cert: failed to sync secret cache: timed out waiting for the condition Jan 26 11:19:57 crc kubenswrapper[4867]: E0126 11:19:57.841434 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6203c5b2-2d8f-46c5-a31c-59190d111d7d-serving-cert podName:6203c5b2-2d8f-46c5-a31c-59190d111d7d nodeName:}" failed. 
No retries permitted until 2026-01-26 11:19:58.341400906 +0000 UTC m=+148.039975816 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/6203c5b2-2d8f-46c5-a31c-59190d111d7d-serving-cert") pod "apiserver-76f77b778f-jvs97" (UID: "6203c5b2-2d8f-46c5-a31c-59190d111d7d") : failed to sync secret cache: timed out waiting for the condition Jan 26 11:19:57 crc kubenswrapper[4867]: E0126 11:19:57.841666 4867 configmap.go:193] Couldn't get configMap openshift-apiserver/config: failed to sync configmap cache: timed out waiting for the condition Jan 26 11:19:57 crc kubenswrapper[4867]: E0126 11:19:57.841783 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6203c5b2-2d8f-46c5-a31c-59190d111d7d-config podName:6203c5b2-2d8f-46c5-a31c-59190d111d7d nodeName:}" failed. No retries permitted until 2026-01-26 11:19:58.341753167 +0000 UTC m=+148.040328117 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/6203c5b2-2d8f-46c5-a31c-59190d111d7d-config") pod "apiserver-76f77b778f-jvs97" (UID: "6203c5b2-2d8f-46c5-a31c-59190d111d7d") : failed to sync configmap cache: timed out waiting for the condition Jan 26 11:19:57 crc kubenswrapper[4867]: E0126 11:19:57.843131 4867 secret.go:188] Couldn't get secret openshift-apiserver/etcd-client: failed to sync secret cache: timed out waiting for the condition Jan 26 11:19:57 crc kubenswrapper[4867]: E0126 11:19:57.843197 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6203c5b2-2d8f-46c5-a31c-59190d111d7d-etcd-client podName:6203c5b2-2d8f-46c5-a31c-59190d111d7d nodeName:}" failed. No retries permitted until 2026-01-26 11:19:58.343184375 +0000 UTC m=+148.041759295 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "etcd-client" (UniqueName: "kubernetes.io/secret/6203c5b2-2d8f-46c5-a31c-59190d111d7d-etcd-client") pod "apiserver-76f77b778f-jvs97" (UID: "6203c5b2-2d8f-46c5-a31c-59190d111d7d") : failed to sync secret cache: timed out waiting for the condition Jan 26 11:19:57 crc kubenswrapper[4867]: E0126 11:19:57.843136 4867 secret.go:188] Couldn't get secret openshift-controller-manager/serving-cert: failed to sync secret cache: timed out waiting for the condition Jan 26 11:19:57 crc kubenswrapper[4867]: E0126 11:19:57.843265 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6670fa93-70e2-4047-b449-1bf939336210-serving-cert podName:6670fa93-70e2-4047-b449-1bf939336210 nodeName:}" failed. No retries permitted until 2026-01-26 11:19:58.343256517 +0000 UTC m=+148.041831447 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/6670fa93-70e2-4047-b449-1bf939336210-serving-cert") pod "controller-manager-879f6c89f-9jc4l" (UID: "6670fa93-70e2-4047-b449-1bf939336210") : failed to sync secret cache: timed out waiting for the condition Jan 26 11:19:57 crc kubenswrapper[4867]: E0126 11:19:57.845366 4867 configmap.go:193] Couldn't get configMap openshift-controller-manager/client-ca: failed to sync configmap cache: timed out waiting for the condition Jan 26 11:19:57 crc kubenswrapper[4867]: E0126 11:19:57.845443 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6670fa93-70e2-4047-b449-1bf939336210-client-ca podName:6670fa93-70e2-4047-b449-1bf939336210 nodeName:}" failed. No retries permitted until 2026-01-26 11:19:58.345429055 +0000 UTC m=+148.044004155 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/6670fa93-70e2-4047-b449-1bf939336210-client-ca") pod "controller-manager-879f6c89f-9jc4l" (UID: "6670fa93-70e2-4047-b449-1bf939336210") : failed to sync configmap cache: timed out waiting for the condition Jan 26 11:19:57 crc kubenswrapper[4867]: I0126 11:19:57.854245 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Jan 26 11:19:57 crc kubenswrapper[4867]: I0126 11:19:57.876890 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Jan 26 11:19:57 crc kubenswrapper[4867]: I0126 11:19:57.895934 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Jan 26 11:19:57 crc kubenswrapper[4867]: I0126 11:19:57.915924 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Jan 26 11:19:57 crc kubenswrapper[4867]: I0126 11:19:57.944175 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Jan 26 11:19:57 crc kubenswrapper[4867]: I0126 11:19:57.956371 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Jan 26 11:19:57 crc kubenswrapper[4867]: I0126 11:19:57.980660 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Jan 26 11:19:57 crc kubenswrapper[4867]: I0126 11:19:57.994536 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Jan 26 11:19:58 crc kubenswrapper[4867]: I0126 11:19:58.014423 4867 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Jan 26 11:19:58 crc kubenswrapper[4867]: I0126 11:19:58.034691 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Jan 26 11:19:58 crc kubenswrapper[4867]: I0126 11:19:58.053919 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Jan 26 11:19:58 crc kubenswrapper[4867]: I0126 11:19:58.075045 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 26 11:19:58 crc kubenswrapper[4867]: I0126 11:19:58.095053 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 26 11:19:58 crc kubenswrapper[4867]: I0126 11:19:58.114802 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Jan 26 11:19:58 crc kubenswrapper[4867]: I0126 11:19:58.135760 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Jan 26 11:19:58 crc kubenswrapper[4867]: I0126 11:19:58.155262 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Jan 26 11:19:58 crc kubenswrapper[4867]: I0126 11:19:58.169391 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 11:19:58 crc kubenswrapper[4867]: E0126 11:19:58.169704 4867 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 11:22:00.169672944 +0000 UTC m=+269.868247864 (durationBeforeRetry 2m2s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 11:19:58 crc kubenswrapper[4867]: I0126 11:19:58.169986 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 11:19:58 crc kubenswrapper[4867]: I0126 11:19:58.170079 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 11:19:58 crc kubenswrapper[4867]: I0126 11:19:58.171045 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 11:19:58 crc kubenswrapper[4867]: I0126 11:19:58.173961 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 11:19:58 crc kubenswrapper[4867]: I0126 11:19:58.175003 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Jan 26 11:19:58 crc kubenswrapper[4867]: I0126 11:19:58.194279 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Jan 26 11:19:58 crc kubenswrapper[4867]: I0126 11:19:58.215264 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Jan 26 11:19:58 crc kubenswrapper[4867]: I0126 11:19:58.234527 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Jan 26 11:19:58 crc kubenswrapper[4867]: I0126 11:19:58.254776 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Jan 26 11:19:58 crc kubenswrapper[4867]: I0126 11:19:58.271804 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 11:19:58 crc kubenswrapper[4867]: I0126 
11:19:58.275649 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 11:19:58 crc kubenswrapper[4867]: I0126 11:19:58.275676 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Jan 26 11:19:58 crc kubenswrapper[4867]: I0126 11:19:58.282909 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 11:19:58 crc kubenswrapper[4867]: I0126 11:19:58.295970 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Jan 26 11:19:58 crc kubenswrapper[4867]: I0126 11:19:58.296460 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 11:19:58 crc kubenswrapper[4867]: I0126 11:19:58.315760 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Jan 26 11:19:58 crc kubenswrapper[4867]: I0126 11:19:58.336127 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Jan 26 11:19:58 crc kubenswrapper[4867]: I0126 11:19:58.356028 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Jan 26 11:19:58 crc kubenswrapper[4867]: I0126 11:19:58.372960 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6203c5b2-2d8f-46c5-a31c-59190d111d7d-config\") pod \"apiserver-76f77b778f-jvs97\" (UID: \"6203c5b2-2d8f-46c5-a31c-59190d111d7d\") " pod="openshift-apiserver/apiserver-76f77b778f-jvs97" Jan 26 11:19:58 crc kubenswrapper[4867]: I0126 11:19:58.373002 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/6670fa93-70e2-4047-b449-1bf939336210-client-ca\") pod \"controller-manager-879f6c89f-9jc4l\" (UID: \"6670fa93-70e2-4047-b449-1bf939336210\") " pod="openshift-controller-manager/controller-manager-879f6c89f-9jc4l" Jan 26 11:19:58 crc kubenswrapper[4867]: I0126 11:19:58.373031 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/6203c5b2-2d8f-46c5-a31c-59190d111d7d-etcd-client\") pod \"apiserver-76f77b778f-jvs97\" (UID: \"6203c5b2-2d8f-46c5-a31c-59190d111d7d\") " pod="openshift-apiserver/apiserver-76f77b778f-jvs97" Jan 26 11:19:58 crc kubenswrapper[4867]: I0126 11:19:58.373406 4867 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6670fa93-70e2-4047-b449-1bf939336210-serving-cert\") pod \"controller-manager-879f6c89f-9jc4l\" (UID: \"6670fa93-70e2-4047-b449-1bf939336210\") " pod="openshift-controller-manager/controller-manager-879f6c89f-9jc4l" Jan 26 11:19:58 crc kubenswrapper[4867]: I0126 11:19:58.373527 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 11:19:58 crc kubenswrapper[4867]: I0126 11:19:58.374070 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6203c5b2-2d8f-46c5-a31c-59190d111d7d-serving-cert\") pod \"apiserver-76f77b778f-jvs97\" (UID: \"6203c5b2-2d8f-46c5-a31c-59190d111d7d\") " pod="openshift-apiserver/apiserver-76f77b778f-jvs97" Jan 26 11:19:58 crc kubenswrapper[4867]: I0126 11:19:58.376389 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Jan 26 11:19:58 crc kubenswrapper[4867]: I0126 11:19:58.377648 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 11:19:58 crc kubenswrapper[4867]: I0126 11:19:58.395361 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Jan 26 11:19:58 crc kubenswrapper[4867]: I0126 11:19:58.416855 4867 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-dns"/"dns-default" Jan 26 11:19:58 crc kubenswrapper[4867]: I0126 11:19:58.460060 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9fmsr\" (UniqueName: \"kubernetes.io/projected/b207fdfd-306c-4494-8c1f-560dd155cd7a-kube-api-access-9fmsr\") pod \"machine-api-operator-5694c8668f-pb5rg\" (UID: \"b207fdfd-306c-4494-8c1f-560dd155cd7a\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-pb5rg" Jan 26 11:19:58 crc kubenswrapper[4867]: I0126 11:19:58.471142 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nbcgm\" (UniqueName: \"kubernetes.io/projected/3b2fcd86-878c-4bce-a720-460a61585e50-kube-api-access-nbcgm\") pod \"machine-approver-56656f9798-ptqs7\" (UID: \"3b2fcd86-878c-4bce-a720-460a61585e50\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-ptqs7" Jan 26 11:19:58 crc kubenswrapper[4867]: I0126 11:19:58.491081 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xjg96\" (UniqueName: \"kubernetes.io/projected/6203c5b2-2d8f-46c5-a31c-59190d111d7d-kube-api-access-xjg96\") pod \"apiserver-76f77b778f-jvs97\" (UID: \"6203c5b2-2d8f-46c5-a31c-59190d111d7d\") " pod="openshift-apiserver/apiserver-76f77b778f-jvs97" Jan 26 11:19:58 crc kubenswrapper[4867]: W0126 11:19:58.512141 4867 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5fe485a1_e14f_4c09_b5b9_f252bc42b7e8.slice/crio-9fdac275d8720f3f1476f85ea8bb9fe156e8134629b4ae2eff377a8e0b0be1f2 WatchSource:0}: Error finding container 9fdac275d8720f3f1476f85ea8bb9fe156e8134629b4ae2eff377a8e0b0be1f2: Status 404 returned error can't find the container with id 9fdac275d8720f3f1476f85ea8bb9fe156e8134629b4ae2eff377a8e0b0be1f2 Jan 26 11:19:58 crc kubenswrapper[4867]: I0126 11:19:58.514824 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jpjh9\" 
(UniqueName: \"kubernetes.io/projected/64cfae17-8e43-4fd9-8f7c-2f4996b6351c-kube-api-access-jpjh9\") pod \"route-controller-manager-6576b87f9c-m9tw6\" (UID: \"64cfae17-8e43-4fd9-8f7c-2f4996b6351c\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-m9tw6" Jan 26 11:19:58 crc kubenswrapper[4867]: I0126 11:19:58.515058 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-ptqs7" Jan 26 11:19:58 crc kubenswrapper[4867]: I0126 11:19:58.537997 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xmnl6\" (UniqueName: \"kubernetes.io/projected/95f962fb-c0fe-4583-8d7f-cac4f22110e9-kube-api-access-xmnl6\") pod \"openshift-config-operator-7777fb866f-9r5x7\" (UID: \"95f962fb-c0fe-4583-8d7f-cac4f22110e9\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-9r5x7" Jan 26 11:19:58 crc kubenswrapper[4867]: I0126 11:19:58.560489 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/0880ba0d-8774-4012-ae45-24997c78c5ca-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-xzrd8\" (UID: \"0880ba0d-8774-4012-ae45-24997c78c5ca\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-xzrd8" Jan 26 11:19:58 crc kubenswrapper[4867]: I0126 11:19:58.572908 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-94dvd\" (UniqueName: \"kubernetes.io/projected/ec332d74-71c9-4401-8dfa-8674dc431b82-kube-api-access-94dvd\") pod \"openshift-apiserver-operator-796bbdcf4f-vtd7b\" (UID: \"ec332d74-71c9-4401-8dfa-8674dc431b82\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-vtd7b" Jan 26 11:19:58 crc kubenswrapper[4867]: I0126 11:19:58.600466 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n6lpb\" (UniqueName: 
\"kubernetes.io/projected/a721247b-3436-4bb4-bc5c-ab4e94db0b41-kube-api-access-n6lpb\") pod \"console-f9d7485db-dc94j\" (UID: \"a721247b-3436-4bb4-bc5c-ab4e94db0b41\") " pod="openshift-console/console-f9d7485db-dc94j" Jan 26 11:19:58 crc kubenswrapper[4867]: I0126 11:19:58.608695 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 11:19:58 crc kubenswrapper[4867]: I0126 11:19:58.611909 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kctg5\" (UniqueName: \"kubernetes.io/projected/0880ba0d-8774-4012-ae45-24997c78c5ca-kube-api-access-kctg5\") pod \"cluster-image-registry-operator-dc59b4c8b-xzrd8\" (UID: \"0880ba0d-8774-4012-ae45-24997c78c5ca\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-xzrd8" Jan 26 11:19:58 crc kubenswrapper[4867]: I0126 11:19:58.615944 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-xzrd8" Jan 26 11:19:58 crc kubenswrapper[4867]: I0126 11:19:58.634970 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8982t\" (UniqueName: \"kubernetes.io/projected/a4874120-574e-4f70-a7d9-5c6c91e41f41-kube-api-access-8982t\") pod \"apiserver-7bbb656c7d-nxdt6\" (UID: \"a4874120-574e-4f70-a7d9-5c6c91e41f41\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-nxdt6" Jan 26 11:19:58 crc kubenswrapper[4867]: I0126 11:19:58.652697 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d8pn5\" (UniqueName: \"kubernetes.io/projected/6670fa93-70e2-4047-b449-1bf939336210-kube-api-access-d8pn5\") pod \"controller-manager-879f6c89f-9jc4l\" (UID: \"6670fa93-70e2-4047-b449-1bf939336210\") " pod="openshift-controller-manager/controller-manager-879f6c89f-9jc4l" Jan 26 11:19:58 crc kubenswrapper[4867]: I0126 11:19:58.654915 4867 
reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Jan 26 11:19:58 crc kubenswrapper[4867]: I0126 11:19:58.664842 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-pb5rg" Jan 26 11:19:58 crc kubenswrapper[4867]: I0126 11:19:58.675455 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Jan 26 11:19:58 crc kubenswrapper[4867]: I0126 11:19:58.695592 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Jan 26 11:19:58 crc kubenswrapper[4867]: I0126 11:19:58.713500 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-vtd7b" Jan 26 11:19:58 crc kubenswrapper[4867]: I0126 11:19:58.715457 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Jan 26 11:19:58 crc kubenswrapper[4867]: I0126 11:19:58.736088 4867 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Jan 26 11:19:58 crc kubenswrapper[4867]: I0126 11:19:58.741438 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-9r5x7" Jan 26 11:19:58 crc kubenswrapper[4867]: I0126 11:19:58.750296 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-m9tw6" Jan 26 11:19:58 crc kubenswrapper[4867]: I0126 11:19:58.755015 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Jan 26 11:19:58 crc kubenswrapper[4867]: I0126 11:19:58.774458 4867 request.go:700] Waited for 1.905273466s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/hostpath-provisioner/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&limit=500&resourceVersion=0 Jan 26 11:19:58 crc kubenswrapper[4867]: I0126 11:19:58.778138 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Jan 26 11:19:58 crc kubenswrapper[4867]: I0126 11:19:58.779527 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-dc94j" Jan 26 11:19:58 crc kubenswrapper[4867]: I0126 11:19:58.818204 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/27074e02-cda1-4d86-bef7-69aafc47ad94-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-fg7cn\" (UID: \"27074e02-cda1-4d86-bef7-69aafc47ad94\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-fg7cn" Jan 26 11:19:58 crc kubenswrapper[4867]: I0126 11:19:58.846882 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fjlgh\" (UniqueName: \"kubernetes.io/projected/a91b5a18-2743-473f-8116-5fb1e348d05c-kube-api-access-fjlgh\") pod \"oauth-openshift-558db77b4-n9jb5\" (UID: \"a91b5a18-2743-473f-8116-5fb1e348d05c\") " pod="openshift-authentication/oauth-openshift-558db77b4-n9jb5" Jan 26 11:19:58 crc kubenswrapper[4867]: I0126 11:19:58.850937 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-n9jb5" Jan 26 11:19:58 crc kubenswrapper[4867]: I0126 11:19:58.857961 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Jan 26 11:19:58 crc kubenswrapper[4867]: I0126 11:19:58.860010 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b4wnk\" (UniqueName: \"kubernetes.io/projected/12eb0c01-c4f3-489f-87dd-bbc03f111814-kube-api-access-b4wnk\") pod \"dns-operator-744455d44c-4hjn2\" (UID: \"12eb0c01-c4f3-489f-87dd-bbc03f111814\") " pod="openshift-dns-operator/dns-operator-744455d44c-4hjn2" Jan 26 11:19:58 crc kubenswrapper[4867]: I0126 11:19:58.866056 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/6670fa93-70e2-4047-b449-1bf939336210-client-ca\") pod \"controller-manager-879f6c89f-9jc4l\" (UID: \"6670fa93-70e2-4047-b449-1bf939336210\") " pod="openshift-controller-manager/controller-manager-879f6c89f-9jc4l" Jan 26 11:19:58 crc kubenswrapper[4867]: I0126 11:19:58.875423 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-xzrd8"] Jan 26 11:19:58 crc kubenswrapper[4867]: I0126 11:19:58.875836 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Jan 26 11:19:58 crc kubenswrapper[4867]: I0126 11:19:58.884567 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6203c5b2-2d8f-46c5-a31c-59190d111d7d-config\") pod \"apiserver-76f77b778f-jvs97\" (UID: \"6203c5b2-2d8f-46c5-a31c-59190d111d7d\") " pod="openshift-apiserver/apiserver-76f77b778f-jvs97" Jan 26 11:19:58 crc kubenswrapper[4867]: I0126 11:19:58.897583 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Jan 26 11:19:58 crc 
kubenswrapper[4867]: I0126 11:19:58.908015 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/6203c5b2-2d8f-46c5-a31c-59190d111d7d-etcd-client\") pod \"apiserver-76f77b778f-jvs97\" (UID: \"6203c5b2-2d8f-46c5-a31c-59190d111d7d\") " pod="openshift-apiserver/apiserver-76f77b778f-jvs97" Jan 26 11:19:58 crc kubenswrapper[4867]: I0126 11:19:58.931950 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-nxdt6" Jan 26 11:19:58 crc kubenswrapper[4867]: W0126 11:19:58.949452 4867 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3b6479f0_333b_4a96_9adf_2099afdc2447.slice/crio-a717d0bd9ac3e501a95812d4739adf3d3d730f84ad3a78559d2ab9c501b6f50f WatchSource:0}: Error finding container a717d0bd9ac3e501a95812d4739adf3d3d730f84ad3a78559d2ab9c501b6f50f: Status 404 returned error can't find the container with id a717d0bd9ac3e501a95812d4739adf3d3d730f84ad3a78559d2ab9c501b6f50f Jan 26 11:19:58 crc kubenswrapper[4867]: I0126 11:19:58.957652 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Jan 26 11:19:58 crc kubenswrapper[4867]: I0126 11:19:58.970698 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6670fa93-70e2-4047-b449-1bf939336210-serving-cert\") pod \"controller-manager-879f6c89f-9jc4l\" (UID: \"6670fa93-70e2-4047-b449-1bf939336210\") " pod="openshift-controller-manager/controller-manager-879f6c89f-9jc4l" Jan 26 11:19:58 crc kubenswrapper[4867]: I0126 11:19:58.982099 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Jan 26 11:19:58 crc kubenswrapper[4867]: I0126 11:19:58.987333 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" 
(UniqueName: \"kubernetes.io/secret/6203c5b2-2d8f-46c5-a31c-59190d111d7d-serving-cert\") pod \"apiserver-76f77b778f-jvs97\" (UID: \"6203c5b2-2d8f-46c5-a31c-59190d111d7d\") " pod="openshift-apiserver/apiserver-76f77b778f-jvs97" Jan 26 11:19:58 crc kubenswrapper[4867]: I0126 11:19:58.989015 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/b3348ed5-3007-4ff3-b77d-ecb758f238df-bound-sa-token\") pod \"image-registry-697d97f7c8-skdxp\" (UID: \"b3348ed5-3007-4ff3-b77d-ecb758f238df\") " pod="openshift-image-registry/image-registry-697d97f7c8-skdxp" Jan 26 11:19:58 crc kubenswrapper[4867]: I0126 11:19:58.989068 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8jgn9\" (UniqueName: \"kubernetes.io/projected/24814471-72bd-4b41-9615-49f2f7115d9f-kube-api-access-8jgn9\") pod \"etcd-operator-b45778765-gn8gp\" (UID: \"24814471-72bd-4b41-9615-49f2f7115d9f\") " pod="openshift-etcd-operator/etcd-operator-b45778765-gn8gp" Jan 26 11:19:58 crc kubenswrapper[4867]: I0126 11:19:58.989099 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c4a78b64-f941-4ffa-bb41-2b035c2fbdee-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-m22zg\" (UID: \"c4a78b64-f941-4ffa-bb41-2b035c2fbdee\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-m22zg" Jan 26 11:19:58 crc kubenswrapper[4867]: I0126 11:19:58.989289 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bvsmh\" (UniqueName: \"kubernetes.io/projected/2131da56-d7d3-4b0d-b134-86c8dbcad2a6-kube-api-access-bvsmh\") pod \"ingress-operator-5b745b69d9-pgkmf\" (UID: \"2131da56-d7d3-4b0d-b134-86c8dbcad2a6\") " 
pod="openshift-ingress-operator/ingress-operator-5b745b69d9-pgkmf" Jan 26 11:19:58 crc kubenswrapper[4867]: I0126 11:19:58.989566 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c4a78b64-f941-4ffa-bb41-2b035c2fbdee-config\") pod \"kube-controller-manager-operator-78b949d7b-m22zg\" (UID: \"c4a78b64-f941-4ffa-bb41-2b035c2fbdee\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-m22zg" Jan 26 11:19:58 crc kubenswrapper[4867]: I0126 11:19:58.989707 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-544gt\" (UniqueName: \"kubernetes.io/projected/b3348ed5-3007-4ff3-b77d-ecb758f238df-kube-api-access-544gt\") pod \"image-registry-697d97f7c8-skdxp\" (UID: \"b3348ed5-3007-4ff3-b77d-ecb758f238df\") " pod="openshift-image-registry/image-registry-697d97f7c8-skdxp" Jan 26 11:19:58 crc kubenswrapper[4867]: I0126 11:19:58.989753 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/4f064632-2f38-4059-b361-aa528f19ddeb-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-p7cdz\" (UID: \"4f064632-2f38-4059-b361-aa528f19ddeb\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-p7cdz" Jan 26 11:19:58 crc kubenswrapper[4867]: I0126 11:19:58.989808 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ttbpn\" (UniqueName: \"kubernetes.io/projected/81b0efbf-3d4c-4f0f-b2bd-84b5af701c2e-kube-api-access-ttbpn\") pod \"openshift-controller-manager-operator-756b6f6bc6-g6hqf\" (UID: \"81b0efbf-3d4c-4f0f-b2bd-84b5af701c2e\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-g6hqf" Jan 26 11:19:58 crc kubenswrapper[4867]: I0126 
11:19:58.989840 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/485f8610-85d7-44ef-9ed2-719f3d409a58-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-bg5n8\" (UID: \"485f8610-85d7-44ef-9ed2-719f3d409a58\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-bg5n8" Jan 26 11:19:58 crc kubenswrapper[4867]: I0126 11:19:58.989863 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/485f8610-85d7-44ef-9ed2-719f3d409a58-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-bg5n8\" (UID: \"485f8610-85d7-44ef-9ed2-719f3d409a58\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-bg5n8" Jan 26 11:19:58 crc kubenswrapper[4867]: I0126 11:19:58.989893 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/24814471-72bd-4b41-9615-49f2f7115d9f-config\") pod \"etcd-operator-b45778765-gn8gp\" (UID: \"24814471-72bd-4b41-9615-49f2f7115d9f\") " pod="openshift-etcd-operator/etcd-operator-b45778765-gn8gp" Jan 26 11:19:58 crc kubenswrapper[4867]: I0126 11:19:58.989937 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/2131da56-d7d3-4b0d-b134-86c8dbcad2a6-bound-sa-token\") pod \"ingress-operator-5b745b69d9-pgkmf\" (UID: \"2131da56-d7d3-4b0d-b134-86c8dbcad2a6\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-pgkmf" Jan 26 11:19:58 crc kubenswrapper[4867]: I0126 11:19:58.989961 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/f3f1f482-470d-4521-b4ab-76bdd0e795d0-trusted-ca\") pod 
\"console-operator-58897d9998-jczt5\" (UID: \"f3f1f482-470d-4521-b4ab-76bdd0e795d0\") " pod="openshift-console-operator/console-operator-58897d9998-jczt5" Jan 26 11:19:58 crc kubenswrapper[4867]: I0126 11:19:58.990022 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ea07ea8f-1510-4609-949b-83a3aed3ddee-service-ca-bundle\") pod \"router-default-5444994796-dmt7q\" (UID: \"ea07ea8f-1510-4609-949b-83a3aed3ddee\") " pod="openshift-ingress/router-default-5444994796-dmt7q" Jan 26 11:19:58 crc kubenswrapper[4867]: I0126 11:19:58.990066 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vtjfb\" (UniqueName: \"kubernetes.io/projected/ea07ea8f-1510-4609-949b-83a3aed3ddee-kube-api-access-vtjfb\") pod \"router-default-5444994796-dmt7q\" (UID: \"ea07ea8f-1510-4609-949b-83a3aed3ddee\") " pod="openshift-ingress/router-default-5444994796-dmt7q" Jan 26 11:19:58 crc kubenswrapper[4867]: I0126 11:19:58.990132 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/2131da56-d7d3-4b0d-b134-86c8dbcad2a6-metrics-tls\") pod \"ingress-operator-5b745b69d9-pgkmf\" (UID: \"2131da56-d7d3-4b0d-b134-86c8dbcad2a6\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-pgkmf" Jan 26 11:19:58 crc kubenswrapper[4867]: I0126 11:19:58.990190 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-st62k\" (UniqueName: \"kubernetes.io/projected/f3f1f482-470d-4521-b4ab-76bdd0e795d0-kube-api-access-st62k\") pod \"console-operator-58897d9998-jczt5\" (UID: \"f3f1f482-470d-4521-b4ab-76bdd0e795d0\") " pod="openshift-console-operator/console-operator-58897d9998-jczt5" Jan 26 11:19:58 crc kubenswrapper[4867]: I0126 11:19:58.990261 4867 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/ea07ea8f-1510-4609-949b-83a3aed3ddee-default-certificate\") pod \"router-default-5444994796-dmt7q\" (UID: \"ea07ea8f-1510-4609-949b-83a3aed3ddee\") " pod="openshift-ingress/router-default-5444994796-dmt7q" Jan 26 11:19:58 crc kubenswrapper[4867]: I0126 11:19:58.990317 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2k5lr\" (UniqueName: \"kubernetes.io/projected/50f0dbec-6ed9-47e1-8b7a-4e4a2e1475b4-kube-api-access-2k5lr\") pod \"authentication-operator-69f744f599-bmqm4\" (UID: \"50f0dbec-6ed9-47e1-8b7a-4e4a2e1475b4\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-bmqm4" Jan 26 11:19:58 crc kubenswrapper[4867]: I0126 11:19:58.990358 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ea07ea8f-1510-4609-949b-83a3aed3ddee-metrics-certs\") pod \"router-default-5444994796-dmt7q\" (UID: \"ea07ea8f-1510-4609-949b-83a3aed3ddee\") " pod="openshift-ingress/router-default-5444994796-dmt7q" Jan 26 11:19:58 crc kubenswrapper[4867]: I0126 11:19:58.990463 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-skdxp\" (UID: \"b3348ed5-3007-4ff3-b77d-ecb758f238df\") " pod="openshift-image-registry/image-registry-697d97f7c8-skdxp" Jan 26 11:19:58 crc kubenswrapper[4867]: I0126 11:19:58.990495 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/50f0dbec-6ed9-47e1-8b7a-4e4a2e1475b4-config\") pod \"authentication-operator-69f744f599-bmqm4\" (UID: 
\"50f0dbec-6ed9-47e1-8b7a-4e4a2e1475b4\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-bmqm4" Jan 26 11:19:58 crc kubenswrapper[4867]: I0126 11:19:58.990767 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ttbrz\" (UniqueName: \"kubernetes.io/projected/4f064632-2f38-4059-b361-aa528f19ddeb-kube-api-access-ttbrz\") pod \"cluster-samples-operator-665b6dd947-p7cdz\" (UID: \"4f064632-2f38-4059-b361-aa528f19ddeb\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-p7cdz" Jan 26 11:19:58 crc kubenswrapper[4867]: E0126 11:19:58.990796 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 11:19:59.490780773 +0000 UTC m=+149.189355683 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-skdxp" (UID: "b3348ed5-3007-4ff3-b77d-ecb758f238df") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 11:19:58 crc kubenswrapper[4867]: I0126 11:19:58.990883 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/24814471-72bd-4b41-9615-49f2f7115d9f-etcd-ca\") pod \"etcd-operator-b45778765-gn8gp\" (UID: \"24814471-72bd-4b41-9615-49f2f7115d9f\") " pod="openshift-etcd-operator/etcd-operator-b45778765-gn8gp" Jan 26 11:19:58 crc kubenswrapper[4867]: I0126 11:19:58.990910 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" 
(UniqueName: \"kubernetes.io/configmap/81b0efbf-3d4c-4f0f-b2bd-84b5af701c2e-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-g6hqf\" (UID: \"81b0efbf-3d4c-4f0f-b2bd-84b5af701c2e\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-g6hqf" Jan 26 11:19:58 crc kubenswrapper[4867]: I0126 11:19:58.990957 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f3f1f482-470d-4521-b4ab-76bdd0e795d0-serving-cert\") pod \"console-operator-58897d9998-jczt5\" (UID: \"f3f1f482-470d-4521-b4ab-76bdd0e795d0\") " pod="openshift-console-operator/console-operator-58897d9998-jczt5" Jan 26 11:19:58 crc kubenswrapper[4867]: I0126 11:19:58.990995 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b0ef109c-57cb-46c0-958b-fe33b8cdae0b-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-6bj4j\" (UID: \"b0ef109c-57cb-46c0-958b-fe33b8cdae0b\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-6bj4j" Jan 26 11:19:58 crc kubenswrapper[4867]: I0126 11:19:58.991019 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/24814471-72bd-4b41-9615-49f2f7115d9f-serving-cert\") pod \"etcd-operator-b45778765-gn8gp\" (UID: \"24814471-72bd-4b41-9615-49f2f7115d9f\") " pod="openshift-etcd-operator/etcd-operator-b45778765-gn8gp" Jan 26 11:19:58 crc kubenswrapper[4867]: I0126 11:19:58.991302 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/81b0efbf-3d4c-4f0f-b2bd-84b5af701c2e-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-g6hqf\" (UID: 
\"81b0efbf-3d4c-4f0f-b2bd-84b5af701c2e\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-g6hqf" Jan 26 11:19:58 crc kubenswrapper[4867]: I0126 11:19:58.991342 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/2131da56-d7d3-4b0d-b134-86c8dbcad2a6-trusted-ca\") pod \"ingress-operator-5b745b69d9-pgkmf\" (UID: \"2131da56-d7d3-4b0d-b134-86c8dbcad2a6\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-pgkmf" Jan 26 11:19:58 crc kubenswrapper[4867]: I0126 11:19:58.991369 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/50f0dbec-6ed9-47e1-8b7a-4e4a2e1475b4-service-ca-bundle\") pod \"authentication-operator-69f744f599-bmqm4\" (UID: \"50f0dbec-6ed9-47e1-8b7a-4e4a2e1475b4\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-bmqm4" Jan 26 11:19:58 crc kubenswrapper[4867]: I0126 11:19:58.991444 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f3f1f482-470d-4521-b4ab-76bdd0e795d0-config\") pod \"console-operator-58897d9998-jczt5\" (UID: \"f3f1f482-470d-4521-b4ab-76bdd0e795d0\") " pod="openshift-console-operator/console-operator-58897d9998-jczt5" Jan 26 11:19:58 crc kubenswrapper[4867]: I0126 11:19:58.991523 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/b3348ed5-3007-4ff3-b77d-ecb758f238df-registry-tls\") pod \"image-registry-697d97f7c8-skdxp\" (UID: \"b3348ed5-3007-4ff3-b77d-ecb758f238df\") " pod="openshift-image-registry/image-registry-697d97f7c8-skdxp" Jan 26 11:19:58 crc kubenswrapper[4867]: I0126 11:19:58.991573 4867 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/b3348ed5-3007-4ff3-b77d-ecb758f238df-ca-trust-extracted\") pod \"image-registry-697d97f7c8-skdxp\" (UID: \"b3348ed5-3007-4ff3-b77d-ecb758f238df\") " pod="openshift-image-registry/image-registry-697d97f7c8-skdxp" Jan 26 11:19:58 crc kubenswrapper[4867]: I0126 11:19:58.991598 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/24814471-72bd-4b41-9615-49f2f7115d9f-etcd-client\") pod \"etcd-operator-b45778765-gn8gp\" (UID: \"24814471-72bd-4b41-9615-49f2f7115d9f\") " pod="openshift-etcd-operator/etcd-operator-b45778765-gn8gp" Jan 26 11:19:58 crc kubenswrapper[4867]: I0126 11:19:58.991626 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/50f0dbec-6ed9-47e1-8b7a-4e4a2e1475b4-serving-cert\") pod \"authentication-operator-69f744f599-bmqm4\" (UID: \"50f0dbec-6ed9-47e1-8b7a-4e4a2e1475b4\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-bmqm4" Jan 26 11:19:58 crc kubenswrapper[4867]: I0126 11:19:58.991656 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vfbjg\" (UniqueName: \"kubernetes.io/projected/6a27bc25-3df1-4dd2-a51d-de8e2bb5070e-kube-api-access-vfbjg\") pod \"downloads-7954f5f757-ltvwb\" (UID: \"6a27bc25-3df1-4dd2-a51d-de8e2bb5070e\") " pod="openshift-console/downloads-7954f5f757-ltvwb" Jan 26 11:19:58 crc kubenswrapper[4867]: I0126 11:19:58.991684 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/b3348ed5-3007-4ff3-b77d-ecb758f238df-registry-certificates\") pod \"image-registry-697d97f7c8-skdxp\" (UID: 
\"b3348ed5-3007-4ff3-b77d-ecb758f238df\") " pod="openshift-image-registry/image-registry-697d97f7c8-skdxp" Jan 26 11:19:58 crc kubenswrapper[4867]: I0126 11:19:58.991726 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/b3348ed5-3007-4ff3-b77d-ecb758f238df-installation-pull-secrets\") pod \"image-registry-697d97f7c8-skdxp\" (UID: \"b3348ed5-3007-4ff3-b77d-ecb758f238df\") " pod="openshift-image-registry/image-registry-697d97f7c8-skdxp" Jan 26 11:19:58 crc kubenswrapper[4867]: I0126 11:19:58.991754 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b0ef109c-57cb-46c0-958b-fe33b8cdae0b-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-6bj4j\" (UID: \"b0ef109c-57cb-46c0-958b-fe33b8cdae0b\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-6bj4j" Jan 26 11:19:58 crc kubenswrapper[4867]: I0126 11:19:58.991784 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c4a78b64-f941-4ffa-bb41-2b035c2fbdee-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-m22zg\" (UID: \"c4a78b64-f941-4ffa-bb41-2b035c2fbdee\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-m22zg" Jan 26 11:19:58 crc kubenswrapper[4867]: I0126 11:19:58.991846 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fr8xh\" (UniqueName: \"kubernetes.io/projected/b0ef109c-57cb-46c0-958b-fe33b8cdae0b-kube-api-access-fr8xh\") pod \"kube-storage-version-migrator-operator-b67b599dd-6bj4j\" (UID: \"b0ef109c-57cb-46c0-958b-fe33b8cdae0b\") " 
pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-6bj4j" Jan 26 11:19:58 crc kubenswrapper[4867]: I0126 11:19:58.991876 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/ea07ea8f-1510-4609-949b-83a3aed3ddee-stats-auth\") pod \"router-default-5444994796-dmt7q\" (UID: \"ea07ea8f-1510-4609-949b-83a3aed3ddee\") " pod="openshift-ingress/router-default-5444994796-dmt7q" Jan 26 11:19:58 crc kubenswrapper[4867]: I0126 11:19:58.991908 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/24814471-72bd-4b41-9615-49f2f7115d9f-etcd-service-ca\") pod \"etcd-operator-b45778765-gn8gp\" (UID: \"24814471-72bd-4b41-9615-49f2f7115d9f\") " pod="openshift-etcd-operator/etcd-operator-b45778765-gn8gp" Jan 26 11:19:58 crc kubenswrapper[4867]: I0126 11:19:58.992342 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b3348ed5-3007-4ff3-b77d-ecb758f238df-trusted-ca\") pod \"image-registry-697d97f7c8-skdxp\" (UID: \"b3348ed5-3007-4ff3-b77d-ecb758f238df\") " pod="openshift-image-registry/image-registry-697d97f7c8-skdxp" Jan 26 11:19:58 crc kubenswrapper[4867]: I0126 11:19:58.992402 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/50f0dbec-6ed9-47e1-8b7a-4e4a2e1475b4-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-bmqm4\" (UID: \"50f0dbec-6ed9-47e1-8b7a-4e4a2e1475b4\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-bmqm4" Jan 26 11:19:58 crc kubenswrapper[4867]: I0126 11:19:58.992475 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"config\" (UniqueName: \"kubernetes.io/configmap/485f8610-85d7-44ef-9ed2-719f3d409a58-config\") pod \"kube-apiserver-operator-766d6c64bb-bg5n8\" (UID: \"485f8610-85d7-44ef-9ed2-719f3d409a58\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-bg5n8"
Jan 26 11:19:59 crc kubenswrapper[4867]: I0126 11:19:59.000619 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-4hjn2"
Jan 26 11:19:59 crc kubenswrapper[4867]: I0126 11:19:59.041743 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-fg7cn"
Jan 26 11:19:59 crc kubenswrapper[4867]: I0126 11:19:59.052342 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-pb5rg"]
Jan 26 11:19:59 crc kubenswrapper[4867]: I0126 11:19:59.093174 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 26 11:19:59 crc kubenswrapper[4867]: I0126 11:19:59.093410 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ttbpn\" (UniqueName: \"kubernetes.io/projected/81b0efbf-3d4c-4f0f-b2bd-84b5af701c2e-kube-api-access-ttbpn\") pod \"openshift-controller-manager-operator-756b6f6bc6-g6hqf\" (UID: \"81b0efbf-3d4c-4f0f-b2bd-84b5af701c2e\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-g6hqf"
Jan 26 11:19:59 crc kubenswrapper[4867]: I0126 11:19:59.093435 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/485f8610-85d7-44ef-9ed2-719f3d409a58-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-bg5n8\" (UID: \"485f8610-85d7-44ef-9ed2-719f3d409a58\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-bg5n8"
Jan 26 11:19:59 crc kubenswrapper[4867]: I0126 11:19:59.093452 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/485f8610-85d7-44ef-9ed2-719f3d409a58-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-bg5n8\" (UID: \"485f8610-85d7-44ef-9ed2-719f3d409a58\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-bg5n8"
Jan 26 11:19:59 crc kubenswrapper[4867]: I0126 11:19:59.093473 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/afe23588-98b0-47bf-9092-849d7e2e5f98-proxy-tls\") pod \"machine-config-controller-84d6567774-l4cqk\" (UID: \"afe23588-98b0-47bf-9092-849d7e2e5f98\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-l4cqk"
Jan 26 11:19:59 crc kubenswrapper[4867]: I0126 11:19:59.093499 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/8b27feea-5afc-4d14-969e-cbbe2047025e-profile-collector-cert\") pod \"catalog-operator-68c6474976-b2qk2\" (UID: \"8b27feea-5afc-4d14-969e-cbbe2047025e\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-b2qk2"
Jan 26 11:19:59 crc kubenswrapper[4867]: I0126 11:19:59.093537 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/53dbe0f8-7ef4-4a92-b25a-3d052c747202-auth-proxy-config\") pod \"machine-config-operator-74547568cd-lrrdf\" (UID: \"53dbe0f8-7ef4-4a92-b25a-3d052c747202\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-lrrdf"
Jan 26 11:19:59 crc kubenswrapper[4867]: I0126 11:19:59.093557 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/24814471-72bd-4b41-9615-49f2f7115d9f-config\") pod \"etcd-operator-b45778765-gn8gp\" (UID: \"24814471-72bd-4b41-9615-49f2f7115d9f\") " pod="openshift-etcd-operator/etcd-operator-b45778765-gn8gp"
Jan 26 11:19:59 crc kubenswrapper[4867]: I0126 11:19:59.093587 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/2131da56-d7d3-4b0d-b134-86c8dbcad2a6-bound-sa-token\") pod \"ingress-operator-5b745b69d9-pgkmf\" (UID: \"2131da56-d7d3-4b0d-b134-86c8dbcad2a6\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-pgkmf"
Jan 26 11:19:59 crc kubenswrapper[4867]: I0126 11:19:59.093617 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/f3f1f482-470d-4521-b4ab-76bdd0e795d0-trusted-ca\") pod \"console-operator-58897d9998-jczt5\" (UID: \"f3f1f482-470d-4521-b4ab-76bdd0e795d0\") " pod="openshift-console-operator/console-operator-58897d9998-jczt5"
Jan 26 11:19:59 crc kubenswrapper[4867]: I0126 11:19:59.093643 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vtjfb\" (UniqueName: \"kubernetes.io/projected/ea07ea8f-1510-4609-949b-83a3aed3ddee-kube-api-access-vtjfb\") pod \"router-default-5444994796-dmt7q\" (UID: \"ea07ea8f-1510-4609-949b-83a3aed3ddee\") " pod="openshift-ingress/router-default-5444994796-dmt7q"
Jan 26 11:19:59 crc kubenswrapper[4867]: I0126 11:19:59.093661 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4c7998be-4b54-46dc-9791-045c502be976-config\") pod \"service-ca-operator-777779d784-f7zk4\" (UID: \"4c7998be-4b54-46dc-9791-045c502be976\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-f7zk4"
Jan 26 11:19:59 crc kubenswrapper[4867]: I0126 11:19:59.093687 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ea07ea8f-1510-4609-949b-83a3aed3ddee-service-ca-bundle\") pod \"router-default-5444994796-dmt7q\" (UID: \"ea07ea8f-1510-4609-949b-83a3aed3ddee\") " pod="openshift-ingress/router-default-5444994796-dmt7q"
Jan 26 11:19:59 crc kubenswrapper[4867]: I0126 11:19:59.093706 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/2131da56-d7d3-4b0d-b134-86c8dbcad2a6-metrics-tls\") pod \"ingress-operator-5b745b69d9-pgkmf\" (UID: \"2131da56-d7d3-4b0d-b134-86c8dbcad2a6\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-pgkmf"
Jan 26 11:19:59 crc kubenswrapper[4867]: I0126 11:19:59.093727 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l4t4v\" (UniqueName: \"kubernetes.io/projected/69a2cb6a-ca89-45ea-a985-ce216707b50e-kube-api-access-l4t4v\") pod \"migrator-59844c95c7-lg6cl\" (UID: \"69a2cb6a-ca89-45ea-a985-ce216707b50e\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-lg6cl"
Jan 26 11:19:59 crc kubenswrapper[4867]: I0126 11:19:59.093754 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/64c6d7e3-5fb6-4242-b616-2628ca519c8e-secret-volume\") pod \"collect-profiles-29490435-gd8xn\" (UID: \"64c6d7e3-5fb6-4242-b616-2628ca519c8e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490435-gd8xn"
Jan 26 11:19:59 crc kubenswrapper[4867]: I0126 11:19:59.093773 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-st62k\" (UniqueName: \"kubernetes.io/projected/f3f1f482-470d-4521-b4ab-76bdd0e795d0-kube-api-access-st62k\") pod \"console-operator-58897d9998-jczt5\" (UID: \"f3f1f482-470d-4521-b4ab-76bdd0e795d0\") " pod="openshift-console-operator/console-operator-58897d9998-jczt5"
Jan 26 11:19:59 crc kubenswrapper[4867]: I0126 11:19:59.093791 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/f7d11034-ad81-48b6-bf3b-8597910b1adf-csi-data-dir\") pod \"csi-hostpathplugin-zvpfm\" (UID: \"f7d11034-ad81-48b6-bf3b-8597910b1adf\") " pod="hostpath-provisioner/csi-hostpathplugin-zvpfm"
Jan 26 11:19:59 crc kubenswrapper[4867]: I0126 11:19:59.093818 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/ea07ea8f-1510-4609-949b-83a3aed3ddee-default-certificate\") pod \"router-default-5444994796-dmt7q\" (UID: \"ea07ea8f-1510-4609-949b-83a3aed3ddee\") " pod="openshift-ingress/router-default-5444994796-dmt7q"
Jan 26 11:19:59 crc kubenswrapper[4867]: I0126 11:19:59.093835 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2k5lr\" (UniqueName: \"kubernetes.io/projected/50f0dbec-6ed9-47e1-8b7a-4e4a2e1475b4-kube-api-access-2k5lr\") pod \"authentication-operator-69f744f599-bmqm4\" (UID: \"50f0dbec-6ed9-47e1-8b7a-4e4a2e1475b4\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-bmqm4"
Jan 26 11:19:59 crc kubenswrapper[4867]: I0126 11:19:59.093854 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ea07ea8f-1510-4609-949b-83a3aed3ddee-metrics-certs\") pod \"router-default-5444994796-dmt7q\" (UID: \"ea07ea8f-1510-4609-949b-83a3aed3ddee\") " pod="openshift-ingress/router-default-5444994796-dmt7q"
Jan 26 11:19:59 crc kubenswrapper[4867]: I0126 11:19:59.093870 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bhd6b\" (UniqueName: \"kubernetes.io/projected/f7d11034-ad81-48b6-bf3b-8597910b1adf-kube-api-access-bhd6b\") pod \"csi-hostpathplugin-zvpfm\" (UID: \"f7d11034-ad81-48b6-bf3b-8597910b1adf\") " pod="hostpath-provisioner/csi-hostpathplugin-zvpfm"
Jan 26 11:19:59 crc kubenswrapper[4867]: I0126 11:19:59.093906 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/6ab404f5-5b14-49d4-80f4-2a84895d0a2f-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-hrqxh\" (UID: \"6ab404f5-5b14-49d4-80f4-2a84895d0a2f\") " pod="openshift-marketplace/marketplace-operator-79b997595-hrqxh"
Jan 26 11:19:59 crc kubenswrapper[4867]: I0126 11:19:59.093923 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/0b3d9b9e-a34f-417b-9b20-8b3565e7da51-tmpfs\") pod \"packageserver-d55dfcdfc-rztlw\" (UID: \"0b3d9b9e-a34f-417b-9b20-8b3565e7da51\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-rztlw"
Jan 26 11:19:59 crc kubenswrapper[4867]: I0126 11:19:59.093940 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/60781074-ebcf-45cb-9a12-9193995071c1-certs\") pod \"machine-config-server-qhjqn\" (UID: \"60781074-ebcf-45cb-9a12-9193995071c1\") " pod="openshift-machine-config-operator/machine-config-server-qhjqn"
Jan 26 11:19:59 crc kubenswrapper[4867]: I0126 11:19:59.093976 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/afe23588-98b0-47bf-9092-849d7e2e5f98-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-l4cqk\" (UID: \"afe23588-98b0-47bf-9092-849d7e2e5f98\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-l4cqk"
Jan 26 11:19:59 crc kubenswrapper[4867]: I0126 11:19:59.093993 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rqblx\" (UniqueName: \"kubernetes.io/projected/afe23588-98b0-47bf-9092-849d7e2e5f98-kube-api-access-rqblx\") pod \"machine-config-controller-84d6567774-l4cqk\" (UID: \"afe23588-98b0-47bf-9092-849d7e2e5f98\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-l4cqk"
Jan 26 11:19:59 crc kubenswrapper[4867]: I0126 11:19:59.094009 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/0c800e71-0744-45b6-9c5a-f0c3dd9e6adc-cert\") pod \"ingress-canary-nmb9m\" (UID: \"0c800e71-0744-45b6-9c5a-f0c3dd9e6adc\") " pod="openshift-ingress-canary/ingress-canary-nmb9m"
Jan 26 11:19:59 crc kubenswrapper[4867]: I0126 11:19:59.094037 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/50f0dbec-6ed9-47e1-8b7a-4e4a2e1475b4-config\") pod \"authentication-operator-69f744f599-bmqm4\" (UID: \"50f0dbec-6ed9-47e1-8b7a-4e4a2e1475b4\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-bmqm4"
Jan 26 11:19:59 crc kubenswrapper[4867]: I0126 11:19:59.094053 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/8b27feea-5afc-4d14-969e-cbbe2047025e-srv-cert\") pod \"catalog-operator-68c6474976-b2qk2\" (UID: \"8b27feea-5afc-4d14-969e-cbbe2047025e\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-b2qk2"
Jan 26 11:19:59 crc kubenswrapper[4867]: I0126 11:19:59.094085 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4c7998be-4b54-46dc-9791-045c502be976-serving-cert\") pod \"service-ca-operator-777779d784-f7zk4\" (UID: \"4c7998be-4b54-46dc-9791-045c502be976\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-f7zk4"
Jan 26 11:19:59 crc kubenswrapper[4867]: I0126 11:19:59.094101 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4cfmv\" (UniqueName: \"kubernetes.io/projected/4c7998be-4b54-46dc-9791-045c502be976-kube-api-access-4cfmv\") pod \"service-ca-operator-777779d784-f7zk4\" (UID: \"4c7998be-4b54-46dc-9791-045c502be976\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-f7zk4"
Jan 26 11:19:59 crc kubenswrapper[4867]: I0126 11:19:59.094120 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ttbrz\" (UniqueName: \"kubernetes.io/projected/4f064632-2f38-4059-b361-aa528f19ddeb-kube-api-access-ttbrz\") pod \"cluster-samples-operator-665b6dd947-p7cdz\" (UID: \"4f064632-2f38-4059-b361-aa528f19ddeb\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-p7cdz"
Jan 26 11:19:59 crc kubenswrapper[4867]: I0126 11:19:59.094138 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/3c13b8a9-f9d1-409e-9a46-d3cfcfd4d9b0-srv-cert\") pod \"olm-operator-6b444d44fb-xs9ns\" (UID: \"3c13b8a9-f9d1-409e-9a46-d3cfcfd4d9b0\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-xs9ns"
Jan 26 11:19:59 crc kubenswrapper[4867]: I0126 11:19:59.094156 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/24814471-72bd-4b41-9615-49f2f7115d9f-etcd-ca\") pod \"etcd-operator-b45778765-gn8gp\" (UID: \"24814471-72bd-4b41-9615-49f2f7115d9f\") " pod="openshift-etcd-operator/etcd-operator-b45778765-gn8gp"
Jan 26 11:19:59 crc kubenswrapper[4867]: I0126 11:19:59.094179 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/4f71f9b3-6264-4e4b-876d-bf61a930a9e5-metrics-tls\") pod \"dns-default-8rqgh\" (UID: \"4f71f9b3-6264-4e4b-876d-bf61a930a9e5\") " pod="openshift-dns/dns-default-8rqgh"
Jan 26 11:19:59 crc kubenswrapper[4867]: I0126 11:19:59.094209 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/81b0efbf-3d4c-4f0f-b2bd-84b5af701c2e-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-g6hqf\" (UID: \"81b0efbf-3d4c-4f0f-b2bd-84b5af701c2e\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-g6hqf"
Jan 26 11:19:59 crc kubenswrapper[4867]: I0126 11:19:59.094248 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-87b94\" (UniqueName: \"kubernetes.io/projected/6ab404f5-5b14-49d4-80f4-2a84895d0a2f-kube-api-access-87b94\") pod \"marketplace-operator-79b997595-hrqxh\" (UID: \"6ab404f5-5b14-49d4-80f4-2a84895d0a2f\") " pod="openshift-marketplace/marketplace-operator-79b997595-hrqxh"
Jan 26 11:19:59 crc kubenswrapper[4867]: I0126 11:19:59.094270 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mqs98\" (UniqueName: \"kubernetes.io/projected/f5bb657a-0790-4c81-b7bd-861e297bbaeb-kube-api-access-mqs98\") pod \"service-ca-9c57cc56f-ksw9r\" (UID: \"f5bb657a-0790-4c81-b7bd-861e297bbaeb\") " pod="openshift-service-ca/service-ca-9c57cc56f-ksw9r"
Jan 26 11:19:59 crc kubenswrapper[4867]: I0126 11:19:59.094295 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f3f1f482-470d-4521-b4ab-76bdd0e795d0-serving-cert\") pod \"console-operator-58897d9998-jczt5\" (UID: \"f3f1f482-470d-4521-b4ab-76bdd0e795d0\") " pod="openshift-console-operator/console-operator-58897d9998-jczt5"
Jan 26 11:19:59 crc kubenswrapper[4867]: I0126 11:19:59.094320 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/f5bb657a-0790-4c81-b7bd-861e297bbaeb-signing-key\") pod \"service-ca-9c57cc56f-ksw9r\" (UID: \"f5bb657a-0790-4c81-b7bd-861e297bbaeb\") " pod="openshift-service-ca/service-ca-9c57cc56f-ksw9r"
Jan 26 11:19:59 crc kubenswrapper[4867]: I0126 11:19:59.094339 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b0ef109c-57cb-46c0-958b-fe33b8cdae0b-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-6bj4j\" (UID: \"b0ef109c-57cb-46c0-958b-fe33b8cdae0b\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-6bj4j"
Jan 26 11:19:59 crc kubenswrapper[4867]: I0126 11:19:59.094355 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/24814471-72bd-4b41-9615-49f2f7115d9f-serving-cert\") pod \"etcd-operator-b45778765-gn8gp\" (UID: \"24814471-72bd-4b41-9615-49f2f7115d9f\") " pod="openshift-etcd-operator/etcd-operator-b45778765-gn8gp"
Jan 26 11:19:59 crc kubenswrapper[4867]: I0126 11:19:59.094374 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4f71f9b3-6264-4e4b-876d-bf61a930a9e5-config-volume\") pod \"dns-default-8rqgh\" (UID: \"4f71f9b3-6264-4e4b-876d-bf61a930a9e5\") " pod="openshift-dns/dns-default-8rqgh"
Jan 26 11:19:59 crc kubenswrapper[4867]: I0126 11:19:59.094392 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/2131da56-d7d3-4b0d-b134-86c8dbcad2a6-trusted-ca\") pod \"ingress-operator-5b745b69d9-pgkmf\" (UID: \"2131da56-d7d3-4b0d-b134-86c8dbcad2a6\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-pgkmf"
Jan 26 11:19:59 crc kubenswrapper[4867]: I0126 11:19:59.094408 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/81b0efbf-3d4c-4f0f-b2bd-84b5af701c2e-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-g6hqf\" (UID: \"81b0efbf-3d4c-4f0f-b2bd-84b5af701c2e\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-g6hqf"
Jan 26 11:19:59 crc kubenswrapper[4867]: I0126 11:19:59.094436 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/50f0dbec-6ed9-47e1-8b7a-4e4a2e1475b4-service-ca-bundle\") pod \"authentication-operator-69f744f599-bmqm4\" (UID: \"50f0dbec-6ed9-47e1-8b7a-4e4a2e1475b4\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-bmqm4"
Jan 26 11:19:59 crc kubenswrapper[4867]: I0126 11:19:59.094452 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jxzbp\" (UniqueName: \"kubernetes.io/projected/8b27feea-5afc-4d14-969e-cbbe2047025e-kube-api-access-jxzbp\") pod \"catalog-operator-68c6474976-b2qk2\" (UID: \"8b27feea-5afc-4d14-969e-cbbe2047025e\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-b2qk2"
Jan 26 11:19:59 crc kubenswrapper[4867]: I0126 11:19:59.094468 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/53c0e980-1e5a-44d8-a5fd-d29fd63cfce7-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-bs62h\" (UID: \"53c0e980-1e5a-44d8-a5fd-d29fd63cfce7\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-bs62h"
Jan 26 11:19:59 crc kubenswrapper[4867]: I0126 11:19:59.094489 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f3f1f482-470d-4521-b4ab-76bdd0e795d0-config\") pod \"console-operator-58897d9998-jczt5\" (UID: \"f3f1f482-470d-4521-b4ab-76bdd0e795d0\") " pod="openshift-console-operator/console-operator-58897d9998-jczt5"
Jan 26 11:19:59 crc kubenswrapper[4867]: I0126 11:19:59.094505 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/3c13b8a9-f9d1-409e-9a46-d3cfcfd4d9b0-profile-collector-cert\") pod \"olm-operator-6b444d44fb-xs9ns\" (UID: \"3c13b8a9-f9d1-409e-9a46-d3cfcfd4d9b0\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-xs9ns"
Jan 26 11:19:59 crc kubenswrapper[4867]: I0126 11:19:59.094523 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/0b3d9b9e-a34f-417b-9b20-8b3565e7da51-apiservice-cert\") pod \"packageserver-d55dfcdfc-rztlw\" (UID: \"0b3d9b9e-a34f-417b-9b20-8b3565e7da51\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-rztlw"
Jan 26 11:19:59 crc kubenswrapper[4867]: I0126 11:19:59.094567 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hm6hf\" (UniqueName: \"kubernetes.io/projected/3c13b8a9-f9d1-409e-9a46-d3cfcfd4d9b0-kube-api-access-hm6hf\") pod \"olm-operator-6b444d44fb-xs9ns\" (UID: \"3c13b8a9-f9d1-409e-9a46-d3cfcfd4d9b0\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-xs9ns"
Jan 26 11:19:59 crc kubenswrapper[4867]: I0126 11:19:59.094583 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dn8l4\" (UniqueName: \"kubernetes.io/projected/53dbe0f8-7ef4-4a92-b25a-3d052c747202-kube-api-access-dn8l4\") pod \"machine-config-operator-74547568cd-lrrdf\" (UID: \"53dbe0f8-7ef4-4a92-b25a-3d052c747202\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-lrrdf"
Jan 26 11:19:59 crc kubenswrapper[4867]: I0126 11:19:59.094599 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-58prs\" (UniqueName: \"kubernetes.io/projected/ca11705a-ad86-4b81-87b6-fba88013e723-kube-api-access-58prs\") pod \"multus-admission-controller-857f4d67dd-s6j6d\" (UID: \"ca11705a-ad86-4b81-87b6-fba88013e723\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-s6j6d"
Jan 26 11:19:59 crc kubenswrapper[4867]: I0126 11:19:59.094617 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mdff2\" (UniqueName: \"kubernetes.io/projected/0b3d9b9e-a34f-417b-9b20-8b3565e7da51-kube-api-access-mdff2\") pod \"packageserver-d55dfcdfc-rztlw\" (UID: \"0b3d9b9e-a34f-417b-9b20-8b3565e7da51\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-rztlw"
Jan 26 11:19:59 crc kubenswrapper[4867]: I0126 11:19:59.094637 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/0b3d9b9e-a34f-417b-9b20-8b3565e7da51-webhook-cert\") pod \"packageserver-d55dfcdfc-rztlw\" (UID: \"0b3d9b9e-a34f-417b-9b20-8b3565e7da51\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-rztlw"
Jan 26 11:19:59 crc kubenswrapper[4867]: I0126 11:19:59.094655 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/b3348ed5-3007-4ff3-b77d-ecb758f238df-registry-tls\") pod \"image-registry-697d97f7c8-skdxp\" (UID: \"b3348ed5-3007-4ff3-b77d-ecb758f238df\") " pod="openshift-image-registry/image-registry-697d97f7c8-skdxp"
Jan 26 11:19:59 crc kubenswrapper[4867]: I0126 11:19:59.094672 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/f7d11034-ad81-48b6-bf3b-8597910b1adf-plugins-dir\") pod \"csi-hostpathplugin-zvpfm\" (UID: \"f7d11034-ad81-48b6-bf3b-8597910b1adf\") " pod="hostpath-provisioner/csi-hostpathplugin-zvpfm"
Jan 26 11:19:59 crc kubenswrapper[4867]: I0126 11:19:59.094709 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/b3348ed5-3007-4ff3-b77d-ecb758f238df-ca-trust-extracted\") pod \"image-registry-697d97f7c8-skdxp\" (UID: \"b3348ed5-3007-4ff3-b77d-ecb758f238df\") " pod="openshift-image-registry/image-registry-697d97f7c8-skdxp"
Jan 26 11:19:59 crc kubenswrapper[4867]: I0126 11:19:59.094726 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/24814471-72bd-4b41-9615-49f2f7115d9f-etcd-client\") pod \"etcd-operator-b45778765-gn8gp\" (UID: \"24814471-72bd-4b41-9615-49f2f7115d9f\") " pod="openshift-etcd-operator/etcd-operator-b45778765-gn8gp"
Jan 26 11:19:59 crc kubenswrapper[4867]: E0126 11:19:59.094796 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 11:19:59.594763473 +0000 UTC m=+149.293338383 (durationBeforeRetry 500ms).
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 26 11:19:59 crc kubenswrapper[4867]: I0126 11:19:59.094891 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/f7d11034-ad81-48b6-bf3b-8597910b1adf-socket-dir\") pod \"csi-hostpathplugin-zvpfm\" (UID: \"f7d11034-ad81-48b6-bf3b-8597910b1adf\") " pod="hostpath-provisioner/csi-hostpathplugin-zvpfm"
Jan 26 11:19:59 crc kubenswrapper[4867]: I0126 11:19:59.094928 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/702e97d5-258a-4ec8-bc8f-cc700c16f813-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-6vjzt\" (UID: \"702e97d5-258a-4ec8-bc8f-cc700c16f813\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-6vjzt"
Jan 26 11:19:59 crc kubenswrapper[4867]: I0126 11:19:59.094960 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/50f0dbec-6ed9-47e1-8b7a-4e4a2e1475b4-serving-cert\") pod \"authentication-operator-69f744f599-bmqm4\" (UID: \"50f0dbec-6ed9-47e1-8b7a-4e4a2e1475b4\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-bmqm4"
Jan 26 11:19:59 crc kubenswrapper[4867]: I0126 11:19:59.094986 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/53dbe0f8-7ef4-4a92-b25a-3d052c747202-proxy-tls\") pod \"machine-config-operator-74547568cd-lrrdf\" (UID: \"53dbe0f8-7ef4-4a92-b25a-3d052c747202\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-lrrdf"
Jan 26 11:19:59 crc kubenswrapper[4867]: I0126 11:19:59.095013 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vfbjg\" (UniqueName: \"kubernetes.io/projected/6a27bc25-3df1-4dd2-a51d-de8e2bb5070e-kube-api-access-vfbjg\") pod \"downloads-7954f5f757-ltvwb\" (UID: \"6a27bc25-3df1-4dd2-a51d-de8e2bb5070e\") " pod="openshift-console/downloads-7954f5f757-ltvwb"
Jan 26 11:19:59 crc kubenswrapper[4867]: I0126 11:19:59.095036 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/f5bb657a-0790-4c81-b7bd-861e297bbaeb-signing-cabundle\") pod \"service-ca-9c57cc56f-ksw9r\" (UID: \"f5bb657a-0790-4c81-b7bd-861e297bbaeb\") " pod="openshift-service-ca/service-ca-9c57cc56f-ksw9r"
Jan 26 11:19:59 crc kubenswrapper[4867]: I0126 11:19:59.095059 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/b3348ed5-3007-4ff3-b77d-ecb758f238df-registry-certificates\") pod \"image-registry-697d97f7c8-skdxp\" (UID: \"b3348ed5-3007-4ff3-b77d-ecb758f238df\") " pod="openshift-image-registry/image-registry-697d97f7c8-skdxp"
Jan 26 11:19:59 crc kubenswrapper[4867]: I0126 11:19:59.095083 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b0ef109c-57cb-46c0-958b-fe33b8cdae0b-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-6bj4j\" (UID: \"b0ef109c-57cb-46c0-958b-fe33b8cdae0b\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-6bj4j"
Jan 26 11:19:59 crc kubenswrapper[4867]: I0126 11:19:59.095107 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c4a78b64-f941-4ffa-bb41-2b035c2fbdee-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-m22zg\" (UID: \"c4a78b64-f941-4ffa-bb41-2b035c2fbdee\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-m22zg"
Jan 26 11:19:59 crc kubenswrapper[4867]: I0126 11:19:59.095131 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/b3348ed5-3007-4ff3-b77d-ecb758f238df-installation-pull-secrets\") pod \"image-registry-697d97f7c8-skdxp\" (UID: \"b3348ed5-3007-4ff3-b77d-ecb758f238df\") " pod="openshift-image-registry/image-registry-697d97f7c8-skdxp"
Jan 26 11:19:59 crc kubenswrapper[4867]: I0126 11:19:59.095166 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/60781074-ebcf-45cb-9a12-9193995071c1-node-bootstrap-token\") pod \"machine-config-server-qhjqn\" (UID: \"60781074-ebcf-45cb-9a12-9193995071c1\") " pod="openshift-machine-config-operator/machine-config-server-qhjqn"
Jan 26 11:19:59 crc kubenswrapper[4867]: I0126 11:19:59.095192 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fr8xh\" (UniqueName: \"kubernetes.io/projected/b0ef109c-57cb-46c0-958b-fe33b8cdae0b-kube-api-access-fr8xh\") pod \"kube-storage-version-migrator-operator-b67b599dd-6bj4j\" (UID: \"b0ef109c-57cb-46c0-958b-fe33b8cdae0b\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-6bj4j"
Jan 26 11:19:59 crc kubenswrapper[4867]: I0126 11:19:59.095283 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/ea07ea8f-1510-4609-949b-83a3aed3ddee-stats-auth\") pod \"router-default-5444994796-dmt7q\" (UID: \"ea07ea8f-1510-4609-949b-83a3aed3ddee\") " pod="openshift-ingress/router-default-5444994796-dmt7q"
Jan 26 11:19:59 crc kubenswrapper[4867]: I0126 11:19:59.095319 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/24814471-72bd-4b41-9615-49f2f7115d9f-etcd-service-ca\") pod \"etcd-operator-b45778765-gn8gp\" (UID: \"24814471-72bd-4b41-9615-49f2f7115d9f\") " pod="openshift-etcd-operator/etcd-operator-b45778765-gn8gp"
Jan 26 11:19:59 crc kubenswrapper[4867]: I0126 11:19:59.095347 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mwsdb\" (UniqueName: \"kubernetes.io/projected/53c0e980-1e5a-44d8-a5fd-d29fd63cfce7-kube-api-access-mwsdb\") pod \"package-server-manager-789f6589d5-bs62h\" (UID: \"53c0e980-1e5a-44d8-a5fd-d29fd63cfce7\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-bs62h"
Jan 26 11:19:59 crc kubenswrapper[4867]: I0126 11:19:59.095384 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b3348ed5-3007-4ff3-b77d-ecb758f238df-trusted-ca\") pod \"image-registry-697d97f7c8-skdxp\" (UID: \"b3348ed5-3007-4ff3-b77d-ecb758f238df\") " pod="openshift-image-registry/image-registry-697d97f7c8-skdxp"
Jan 26 11:19:59 crc kubenswrapper[4867]: I0126 11:19:59.095405 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/f7d11034-ad81-48b6-bf3b-8597910b1adf-mountpoint-dir\") pod \"csi-hostpathplugin-zvpfm\" (UID: \"f7d11034-ad81-48b6-bf3b-8597910b1adf\") " pod="hostpath-provisioner/csi-hostpathplugin-zvpfm"
Jan 26 11:19:59 crc kubenswrapper[4867]: I0126 11:19:59.095437 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4frfw\" (UniqueName: \"kubernetes.io/projected/0c800e71-0744-45b6-9c5a-f0c3dd9e6adc-kube-api-access-4frfw\") pod \"ingress-canary-nmb9m\" (UID: \"0c800e71-0744-45b6-9c5a-f0c3dd9e6adc\") " pod="openshift-ingress-canary/ingress-canary-nmb9m"
Jan 26 11:19:59 crc kubenswrapper[4867]: I0126 11:19:59.095459 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/50f0dbec-6ed9-47e1-8b7a-4e4a2e1475b4-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-bmqm4\" (UID: \"50f0dbec-6ed9-47e1-8b7a-4e4a2e1475b4\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-bmqm4"
Jan 26 11:19:59 crc kubenswrapper[4867]: I0126 11:19:59.095479 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/6ab404f5-5b14-49d4-80f4-2a84895d0a2f-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-hrqxh\" (UID: \"6ab404f5-5b14-49d4-80f4-2a84895d0a2f\") " pod="openshift-marketplace/marketplace-operator-79b997595-hrqxh"
Jan 26 11:19:59 crc kubenswrapper[4867]: I0126 11:19:59.095496 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/53dbe0f8-7ef4-4a92-b25a-3d052c747202-images\") pod \"machine-config-operator-74547568cd-lrrdf\" (UID: \"53dbe0f8-7ef4-4a92-b25a-3d052c747202\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-lrrdf"
Jan 26 11:19:59 crc kubenswrapper[4867]: I0126 11:19:59.095541 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/485f8610-85d7-44ef-9ed2-719f3d409a58-config\") pod \"kube-apiserver-operator-766d6c64bb-bg5n8\" (UID: \"485f8610-85d7-44ef-9ed2-719f3d409a58\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-bg5n8"
Jan 26 11:19:59 crc kubenswrapper[4867]: I0126 11:19:59.095559 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ghkmm\" (UniqueName: \"kubernetes.io/projected/60781074-ebcf-45cb-9a12-9193995071c1-kube-api-access-ghkmm\") pod \"machine-config-server-qhjqn\" (UID: \"60781074-ebcf-45cb-9a12-9193995071c1\") " pod="openshift-machine-config-operator/machine-config-server-qhjqn"
Jan 26 11:19:59 crc kubenswrapper[4867]: I0126 11:19:59.095589 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/b3348ed5-3007-4ff3-b77d-ecb758f238df-bound-sa-token\") pod \"image-registry-697d97f7c8-skdxp\" (UID: \"b3348ed5-3007-4ff3-b77d-ecb758f238df\") " pod="openshift-image-registry/image-registry-697d97f7c8-skdxp"
Jan 26 11:19:59 crc kubenswrapper[4867]: I0126 11:19:59.095621 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/ca11705a-ad86-4b81-87b6-fba88013e723-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-s6j6d\" (UID: \"ca11705a-ad86-4b81-87b6-fba88013e723\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-s6j6d"
Jan 26 11:19:59 crc kubenswrapper[4867]: I0126 11:19:59.095639 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lckxk\" (UniqueName: \"kubernetes.io/projected/4f71f9b3-6264-4e4b-876d-bf61a930a9e5-kube-api-access-lckxk\") pod \"dns-default-8rqgh\" (UID: \"4f71f9b3-6264-4e4b-876d-bf61a930a9e5\") " pod="openshift-dns/dns-default-8rqgh"
Jan 26 11:19:59 crc kubenswrapper[4867]: I0126 11:19:59.096507 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/50f0dbec-6ed9-47e1-8b7a-4e4a2e1475b4-config\") pod \"authentication-operator-69f744f599-bmqm4\" (UID: \"50f0dbec-6ed9-47e1-8b7a-4e4a2e1475b4\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-bmqm4"
Jan 26 11:19:59 crc kubenswrapper[4867]: I0126 11:19:59.096833 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/24814471-72bd-4b41-9615-49f2f7115d9f-etcd-ca\") pod \"etcd-operator-b45778765-gn8gp\" (UID: \"24814471-72bd-4b41-9615-49f2f7115d9f\") " pod="openshift-etcd-operator/etcd-operator-b45778765-gn8gp"
Jan 26 11:19:59 crc kubenswrapper[4867]: I0126 11:19:59.097928 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f3f1f482-470d-4521-b4ab-76bdd0e795d0-config\") pod \"console-operator-58897d9998-jczt5\" (UID: \"f3f1f482-470d-4521-b4ab-76bdd0e795d0\") " pod="openshift-console-operator/console-operator-58897d9998-jczt5"
Jan 26 11:19:59 crc kubenswrapper[4867]: I0126 11:19:59.098133 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b0ef109c-57cb-46c0-958b-fe33b8cdae0b-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-6bj4j\" (UID: \"b0ef109c-57cb-46c0-958b-fe33b8cdae0b\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-6bj4j"
Jan 26 11:19:59 crc kubenswrapper[4867]: I0126 11:19:59.098662 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ea07ea8f-1510-4609-949b-83a3aed3ddee-service-ca-bundle\") pod \"router-default-5444994796-dmt7q\" (UID: \"ea07ea8f-1510-4609-949b-83a3aed3ddee\") " pod="openshift-ingress/router-default-5444994796-dmt7q"
Jan 26 11:19:59 crc kubenswrapper[4867]: I0126 11:19:59.098988 4867 operation_generator.go:637] "MountVolume.SetUp
succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/81b0efbf-3d4c-4f0f-b2bd-84b5af701c2e-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-g6hqf\" (UID: \"81b0efbf-3d4c-4f0f-b2bd-84b5af701c2e\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-g6hqf" Jan 26 11:19:59 crc kubenswrapper[4867]: I0126 11:19:59.099508 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/64c6d7e3-5fb6-4242-b616-2628ca519c8e-config-volume\") pod \"collect-profiles-29490435-gd8xn\" (UID: \"64c6d7e3-5fb6-4242-b616-2628ca519c8e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490435-gd8xn" Jan 26 11:19:59 crc kubenswrapper[4867]: I0126 11:19:59.099554 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8jgn9\" (UniqueName: \"kubernetes.io/projected/24814471-72bd-4b41-9615-49f2f7115d9f-kube-api-access-8jgn9\") pod \"etcd-operator-b45778765-gn8gp\" (UID: \"24814471-72bd-4b41-9615-49f2f7115d9f\") " pod="openshift-etcd-operator/etcd-operator-b45778765-gn8gp" Jan 26 11:19:59 crc kubenswrapper[4867]: I0126 11:19:59.099753 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/50f0dbec-6ed9-47e1-8b7a-4e4a2e1475b4-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-bmqm4\" (UID: \"50f0dbec-6ed9-47e1-8b7a-4e4a2e1475b4\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-bmqm4" Jan 26 11:19:59 crc kubenswrapper[4867]: I0126 11:19:59.099753 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/24814471-72bd-4b41-9615-49f2f7115d9f-config\") pod \"etcd-operator-b45778765-gn8gp\" (UID: \"24814471-72bd-4b41-9615-49f2f7115d9f\") " 
pod="openshift-etcd-operator/etcd-operator-b45778765-gn8gp" Jan 26 11:19:59 crc kubenswrapper[4867]: I0126 11:19:59.099870 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/f7d11034-ad81-48b6-bf3b-8597910b1adf-registration-dir\") pod \"csi-hostpathplugin-zvpfm\" (UID: \"f7d11034-ad81-48b6-bf3b-8597910b1adf\") " pod="hostpath-provisioner/csi-hostpathplugin-zvpfm" Jan 26 11:19:59 crc kubenswrapper[4867]: I0126 11:19:59.099912 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c4a78b64-f941-4ffa-bb41-2b035c2fbdee-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-m22zg\" (UID: \"c4a78b64-f941-4ffa-bb41-2b035c2fbdee\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-m22zg" Jan 26 11:19:59 crc kubenswrapper[4867]: I0126 11:19:59.099947 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bvsmh\" (UniqueName: \"kubernetes.io/projected/2131da56-d7d3-4b0d-b134-86c8dbcad2a6-kube-api-access-bvsmh\") pod \"ingress-operator-5b745b69d9-pgkmf\" (UID: \"2131da56-d7d3-4b0d-b134-86c8dbcad2a6\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-pgkmf" Jan 26 11:19:59 crc kubenswrapper[4867]: I0126 11:19:59.099972 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c4a78b64-f941-4ffa-bb41-2b035c2fbdee-config\") pod \"kube-controller-manager-operator-78b949d7b-m22zg\" (UID: \"c4a78b64-f941-4ffa-bb41-2b035c2fbdee\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-m22zg" Jan 26 11:19:59 crc kubenswrapper[4867]: I0126 11:19:59.100458 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: 
\"kubernetes.io/configmap/b3348ed5-3007-4ff3-b77d-ecb758f238df-trusted-ca\") pod \"image-registry-697d97f7c8-skdxp\" (UID: \"b3348ed5-3007-4ff3-b77d-ecb758f238df\") " pod="openshift-image-registry/image-registry-697d97f7c8-skdxp" Jan 26 11:19:59 crc kubenswrapper[4867]: I0126 11:19:59.102361 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/485f8610-85d7-44ef-9ed2-719f3d409a58-config\") pod \"kube-apiserver-operator-766d6c64bb-bg5n8\" (UID: \"485f8610-85d7-44ef-9ed2-719f3d409a58\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-bg5n8" Jan 26 11:19:59 crc kubenswrapper[4867]: I0126 11:19:59.103776 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/f3f1f482-470d-4521-b4ab-76bdd0e795d0-trusted-ca\") pod \"console-operator-58897d9998-jczt5\" (UID: \"f3f1f482-470d-4521-b4ab-76bdd0e795d0\") " pod="openshift-console-operator/console-operator-58897d9998-jczt5" Jan 26 11:19:59 crc kubenswrapper[4867]: I0126 11:19:59.104936 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/24814471-72bd-4b41-9615-49f2f7115d9f-etcd-client\") pod \"etcd-operator-b45778765-gn8gp\" (UID: \"24814471-72bd-4b41-9615-49f2f7115d9f\") " pod="openshift-etcd-operator/etcd-operator-b45778765-gn8gp" Jan 26 11:19:59 crc kubenswrapper[4867]: I0126 11:19:59.106005 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/b3348ed5-3007-4ff3-b77d-ecb758f238df-registry-certificates\") pod \"image-registry-697d97f7c8-skdxp\" (UID: \"b3348ed5-3007-4ff3-b77d-ecb758f238df\") " pod="openshift-image-registry/image-registry-697d97f7c8-skdxp" Jan 26 11:19:59 crc kubenswrapper[4867]: I0126 11:19:59.106668 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ea07ea8f-1510-4609-949b-83a3aed3ddee-metrics-certs\") pod \"router-default-5444994796-dmt7q\" (UID: \"ea07ea8f-1510-4609-949b-83a3aed3ddee\") " pod="openshift-ingress/router-default-5444994796-dmt7q" Jan 26 11:19:59 crc kubenswrapper[4867]: I0126 11:19:59.106991 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/24814471-72bd-4b41-9615-49f2f7115d9f-etcd-service-ca\") pod \"etcd-operator-b45778765-gn8gp\" (UID: \"24814471-72bd-4b41-9615-49f2f7115d9f\") " pod="openshift-etcd-operator/etcd-operator-b45778765-gn8gp" Jan 26 11:19:59 crc kubenswrapper[4867]: I0126 11:19:59.107123 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/b3348ed5-3007-4ff3-b77d-ecb758f238df-ca-trust-extracted\") pod \"image-registry-697d97f7c8-skdxp\" (UID: \"b3348ed5-3007-4ff3-b77d-ecb758f238df\") " pod="openshift-image-registry/image-registry-697d97f7c8-skdxp" Jan 26 11:19:59 crc kubenswrapper[4867]: I0126 11:19:59.107363 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cqcvq\" (UniqueName: \"kubernetes.io/projected/702e97d5-258a-4ec8-bc8f-cc700c16f813-kube-api-access-cqcvq\") pod \"control-plane-machine-set-operator-78cbb6b69f-6vjzt\" (UID: \"702e97d5-258a-4ec8-bc8f-cc700c16f813\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-6vjzt" Jan 26 11:19:59 crc kubenswrapper[4867]: I0126 11:19:59.107406 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-544gt\" (UniqueName: \"kubernetes.io/projected/b3348ed5-3007-4ff3-b77d-ecb758f238df-kube-api-access-544gt\") pod \"image-registry-697d97f7c8-skdxp\" (UID: \"b3348ed5-3007-4ff3-b77d-ecb758f238df\") " pod="openshift-image-registry/image-registry-697d97f7c8-skdxp" Jan 26 11:19:59 crc 
kubenswrapper[4867]: I0126 11:19:59.107435 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/4f064632-2f38-4059-b361-aa528f19ddeb-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-p7cdz\" (UID: \"4f064632-2f38-4059-b361-aa528f19ddeb\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-p7cdz" Jan 26 11:19:59 crc kubenswrapper[4867]: I0126 11:19:59.107472 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p4bp2\" (UniqueName: \"kubernetes.io/projected/64c6d7e3-5fb6-4242-b616-2628ca519c8e-kube-api-access-p4bp2\") pod \"collect-profiles-29490435-gd8xn\" (UID: \"64c6d7e3-5fb6-4242-b616-2628ca519c8e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490435-gd8xn" Jan 26 11:19:59 crc kubenswrapper[4867]: I0126 11:19:59.108848 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/50f0dbec-6ed9-47e1-8b7a-4e4a2e1475b4-service-ca-bundle\") pod \"authentication-operator-69f744f599-bmqm4\" (UID: \"50f0dbec-6ed9-47e1-8b7a-4e4a2e1475b4\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-bmqm4" Jan 26 11:19:59 crc kubenswrapper[4867]: I0126 11:19:59.109433 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/24814471-72bd-4b41-9615-49f2f7115d9f-serving-cert\") pod \"etcd-operator-b45778765-gn8gp\" (UID: \"24814471-72bd-4b41-9615-49f2f7115d9f\") " pod="openshift-etcd-operator/etcd-operator-b45778765-gn8gp" Jan 26 11:19:59 crc kubenswrapper[4867]: I0126 11:19:59.111121 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/50f0dbec-6ed9-47e1-8b7a-4e4a2e1475b4-serving-cert\") pod \"authentication-operator-69f744f599-bmqm4\" 
(UID: \"50f0dbec-6ed9-47e1-8b7a-4e4a2e1475b4\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-bmqm4" Jan 26 11:19:59 crc kubenswrapper[4867]: I0126 11:19:59.128599 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/2131da56-d7d3-4b0d-b134-86c8dbcad2a6-trusted-ca\") pod \"ingress-operator-5b745b69d9-pgkmf\" (UID: \"2131da56-d7d3-4b0d-b134-86c8dbcad2a6\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-pgkmf" Jan 26 11:19:59 crc kubenswrapper[4867]: I0126 11:19:59.129066 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c4a78b64-f941-4ffa-bb41-2b035c2fbdee-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-m22zg\" (UID: \"c4a78b64-f941-4ffa-bb41-2b035c2fbdee\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-m22zg" Jan 26 11:19:59 crc kubenswrapper[4867]: I0126 11:19:59.129095 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c4a78b64-f941-4ffa-bb41-2b035c2fbdee-config\") pod \"kube-controller-manager-operator-78b949d7b-m22zg\" (UID: \"c4a78b64-f941-4ffa-bb41-2b035c2fbdee\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-m22zg" Jan 26 11:19:59 crc kubenswrapper[4867]: I0126 11:19:59.129368 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/485f8610-85d7-44ef-9ed2-719f3d409a58-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-bg5n8\" (UID: \"485f8610-85d7-44ef-9ed2-719f3d409a58\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-bg5n8" Jan 26 11:19:59 crc kubenswrapper[4867]: I0126 11:19:59.129974 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" 
(UniqueName: \"kubernetes.io/secret/2131da56-d7d3-4b0d-b134-86c8dbcad2a6-metrics-tls\") pod \"ingress-operator-5b745b69d9-pgkmf\" (UID: \"2131da56-d7d3-4b0d-b134-86c8dbcad2a6\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-pgkmf" Jan 26 11:19:59 crc kubenswrapper[4867]: I0126 11:19:59.130003 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/ea07ea8f-1510-4609-949b-83a3aed3ddee-default-certificate\") pod \"router-default-5444994796-dmt7q\" (UID: \"ea07ea8f-1510-4609-949b-83a3aed3ddee\") " pod="openshift-ingress/router-default-5444994796-dmt7q" Jan 26 11:19:59 crc kubenswrapper[4867]: I0126 11:19:59.131933 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-vtd7b"] Jan 26 11:19:59 crc kubenswrapper[4867]: I0126 11:19:59.132125 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/81b0efbf-3d4c-4f0f-b2bd-84b5af701c2e-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-g6hqf\" (UID: \"81b0efbf-3d4c-4f0f-b2bd-84b5af701c2e\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-g6hqf" Jan 26 11:19:59 crc kubenswrapper[4867]: I0126 11:19:59.137268 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f3f1f482-470d-4521-b4ab-76bdd0e795d0-serving-cert\") pod \"console-operator-58897d9998-jczt5\" (UID: \"f3f1f482-470d-4521-b4ab-76bdd0e795d0\") " pod="openshift-console-operator/console-operator-58897d9998-jczt5" Jan 26 11:19:59 crc kubenswrapper[4867]: I0126 11:19:59.137834 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/b3348ed5-3007-4ff3-b77d-ecb758f238df-registry-tls\") pod \"image-registry-697d97f7c8-skdxp\" (UID: 
\"b3348ed5-3007-4ff3-b77d-ecb758f238df\") " pod="openshift-image-registry/image-registry-697d97f7c8-skdxp" Jan 26 11:19:59 crc kubenswrapper[4867]: I0126 11:19:59.143842 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/b3348ed5-3007-4ff3-b77d-ecb758f238df-installation-pull-secrets\") pod \"image-registry-697d97f7c8-skdxp\" (UID: \"b3348ed5-3007-4ff3-b77d-ecb758f238df\") " pod="openshift-image-registry/image-registry-697d97f7c8-skdxp" Jan 26 11:19:59 crc kubenswrapper[4867]: I0126 11:19:59.145698 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b0ef109c-57cb-46c0-958b-fe33b8cdae0b-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-6bj4j\" (UID: \"b0ef109c-57cb-46c0-958b-fe33b8cdae0b\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-6bj4j" Jan 26 11:19:59 crc kubenswrapper[4867]: I0126 11:19:59.149812 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/ea07ea8f-1510-4609-949b-83a3aed3ddee-stats-auth\") pod \"router-default-5444994796-dmt7q\" (UID: \"ea07ea8f-1510-4609-949b-83a3aed3ddee\") " pod="openshift-ingress/router-default-5444994796-dmt7q" Jan 26 11:19:59 crc kubenswrapper[4867]: I0126 11:19:59.150412 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/4f064632-2f38-4059-b361-aa528f19ddeb-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-p7cdz\" (UID: \"4f064632-2f38-4059-b361-aa528f19ddeb\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-p7cdz" Jan 26 11:19:59 crc kubenswrapper[4867]: I0126 11:19:59.152567 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ttbrz\" (UniqueName: 
\"kubernetes.io/projected/4f064632-2f38-4059-b361-aa528f19ddeb-kube-api-access-ttbrz\") pod \"cluster-samples-operator-665b6dd947-p7cdz\" (UID: \"4f064632-2f38-4059-b361-aa528f19ddeb\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-p7cdz" Jan 26 11:19:59 crc kubenswrapper[4867]: I0126 11:19:59.162450 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2k5lr\" (UniqueName: \"kubernetes.io/projected/50f0dbec-6ed9-47e1-8b7a-4e4a2e1475b4-kube-api-access-2k5lr\") pod \"authentication-operator-69f744f599-bmqm4\" (UID: \"50f0dbec-6ed9-47e1-8b7a-4e4a2e1475b4\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-bmqm4" Jan 26 11:19:59 crc kubenswrapper[4867]: I0126 11:19:59.182448 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ttbpn\" (UniqueName: \"kubernetes.io/projected/81b0efbf-3d4c-4f0f-b2bd-84b5af701c2e-kube-api-access-ttbpn\") pod \"openshift-controller-manager-operator-756b6f6bc6-g6hqf\" (UID: \"81b0efbf-3d4c-4f0f-b2bd-84b5af701c2e\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-g6hqf" Jan 26 11:19:59 crc kubenswrapper[4867]: I0126 11:19:59.186499 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-bmqm4" Jan 26 11:19:59 crc kubenswrapper[4867]: I0126 11:19:59.209338 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/8b27feea-5afc-4d14-969e-cbbe2047025e-srv-cert\") pod \"catalog-operator-68c6474976-b2qk2\" (UID: \"8b27feea-5afc-4d14-969e-cbbe2047025e\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-b2qk2" Jan 26 11:19:59 crc kubenswrapper[4867]: I0126 11:19:59.209402 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-skdxp\" (UID: \"b3348ed5-3007-4ff3-b77d-ecb758f238df\") " pod="openshift-image-registry/image-registry-697d97f7c8-skdxp" Jan 26 11:19:59 crc kubenswrapper[4867]: I0126 11:19:59.209426 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/3c13b8a9-f9d1-409e-9a46-d3cfcfd4d9b0-srv-cert\") pod \"olm-operator-6b444d44fb-xs9ns\" (UID: \"3c13b8a9-f9d1-409e-9a46-d3cfcfd4d9b0\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-xs9ns" Jan 26 11:19:59 crc kubenswrapper[4867]: I0126 11:19:59.209444 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4c7998be-4b54-46dc-9791-045c502be976-serving-cert\") pod \"service-ca-operator-777779d784-f7zk4\" (UID: \"4c7998be-4b54-46dc-9791-045c502be976\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-f7zk4" Jan 26 11:19:59 crc kubenswrapper[4867]: I0126 11:19:59.209462 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4cfmv\" (UniqueName: 
\"kubernetes.io/projected/4c7998be-4b54-46dc-9791-045c502be976-kube-api-access-4cfmv\") pod \"service-ca-operator-777779d784-f7zk4\" (UID: \"4c7998be-4b54-46dc-9791-045c502be976\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-f7zk4" Jan 26 11:19:59 crc kubenswrapper[4867]: I0126 11:19:59.209485 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/4f71f9b3-6264-4e4b-876d-bf61a930a9e5-metrics-tls\") pod \"dns-default-8rqgh\" (UID: \"4f71f9b3-6264-4e4b-876d-bf61a930a9e5\") " pod="openshift-dns/dns-default-8rqgh" Jan 26 11:19:59 crc kubenswrapper[4867]: I0126 11:19:59.209503 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mqs98\" (UniqueName: \"kubernetes.io/projected/f5bb657a-0790-4c81-b7bd-861e297bbaeb-kube-api-access-mqs98\") pod \"service-ca-9c57cc56f-ksw9r\" (UID: \"f5bb657a-0790-4c81-b7bd-861e297bbaeb\") " pod="openshift-service-ca/service-ca-9c57cc56f-ksw9r" Jan 26 11:19:59 crc kubenswrapper[4867]: I0126 11:19:59.209542 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-87b94\" (UniqueName: \"kubernetes.io/projected/6ab404f5-5b14-49d4-80f4-2a84895d0a2f-kube-api-access-87b94\") pod \"marketplace-operator-79b997595-hrqxh\" (UID: \"6ab404f5-5b14-49d4-80f4-2a84895d0a2f\") " pod="openshift-marketplace/marketplace-operator-79b997595-hrqxh" Jan 26 11:19:59 crc kubenswrapper[4867]: I0126 11:19:59.209558 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/f5bb657a-0790-4c81-b7bd-861e297bbaeb-signing-key\") pod \"service-ca-9c57cc56f-ksw9r\" (UID: \"f5bb657a-0790-4c81-b7bd-861e297bbaeb\") " pod="openshift-service-ca/service-ca-9c57cc56f-ksw9r" Jan 26 11:19:59 crc kubenswrapper[4867]: I0126 11:19:59.209573 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"config-volume\" (UniqueName: \"kubernetes.io/configmap/4f71f9b3-6264-4e4b-876d-bf61a930a9e5-config-volume\") pod \"dns-default-8rqgh\" (UID: \"4f71f9b3-6264-4e4b-876d-bf61a930a9e5\") " pod="openshift-dns/dns-default-8rqgh" Jan 26 11:19:59 crc kubenswrapper[4867]: I0126 11:19:59.209588 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jxzbp\" (UniqueName: \"kubernetes.io/projected/8b27feea-5afc-4d14-969e-cbbe2047025e-kube-api-access-jxzbp\") pod \"catalog-operator-68c6474976-b2qk2\" (UID: \"8b27feea-5afc-4d14-969e-cbbe2047025e\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-b2qk2" Jan 26 11:19:59 crc kubenswrapper[4867]: I0126 11:19:59.209605 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/53c0e980-1e5a-44d8-a5fd-d29fd63cfce7-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-bs62h\" (UID: \"53c0e980-1e5a-44d8-a5fd-d29fd63cfce7\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-bs62h" Jan 26 11:19:59 crc kubenswrapper[4867]: I0126 11:19:59.209625 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/0b3d9b9e-a34f-417b-9b20-8b3565e7da51-apiservice-cert\") pod \"packageserver-d55dfcdfc-rztlw\" (UID: \"0b3d9b9e-a34f-417b-9b20-8b3565e7da51\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-rztlw" Jan 26 11:19:59 crc kubenswrapper[4867]: I0126 11:19:59.209640 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/3c13b8a9-f9d1-409e-9a46-d3cfcfd4d9b0-profile-collector-cert\") pod \"olm-operator-6b444d44fb-xs9ns\" (UID: \"3c13b8a9-f9d1-409e-9a46-d3cfcfd4d9b0\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-xs9ns" Jan 26 
11:19:59 crc kubenswrapper[4867]: I0126 11:19:59.209661 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dn8l4\" (UniqueName: \"kubernetes.io/projected/53dbe0f8-7ef4-4a92-b25a-3d052c747202-kube-api-access-dn8l4\") pod \"machine-config-operator-74547568cd-lrrdf\" (UID: \"53dbe0f8-7ef4-4a92-b25a-3d052c747202\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-lrrdf" Jan 26 11:19:59 crc kubenswrapper[4867]: I0126 11:19:59.209702 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hm6hf\" (UniqueName: \"kubernetes.io/projected/3c13b8a9-f9d1-409e-9a46-d3cfcfd4d9b0-kube-api-access-hm6hf\") pod \"olm-operator-6b444d44fb-xs9ns\" (UID: \"3c13b8a9-f9d1-409e-9a46-d3cfcfd4d9b0\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-xs9ns" Jan 26 11:19:59 crc kubenswrapper[4867]: I0126 11:19:59.209720 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-58prs\" (UniqueName: \"kubernetes.io/projected/ca11705a-ad86-4b81-87b6-fba88013e723-kube-api-access-58prs\") pod \"multus-admission-controller-857f4d67dd-s6j6d\" (UID: \"ca11705a-ad86-4b81-87b6-fba88013e723\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-s6j6d" Jan 26 11:19:59 crc kubenswrapper[4867]: I0126 11:19:59.209734 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mdff2\" (UniqueName: \"kubernetes.io/projected/0b3d9b9e-a34f-417b-9b20-8b3565e7da51-kube-api-access-mdff2\") pod \"packageserver-d55dfcdfc-rztlw\" (UID: \"0b3d9b9e-a34f-417b-9b20-8b3565e7da51\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-rztlw" Jan 26 11:19:59 crc kubenswrapper[4867]: I0126 11:19:59.209748 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/0b3d9b9e-a34f-417b-9b20-8b3565e7da51-webhook-cert\") 
pod \"packageserver-d55dfcdfc-rztlw\" (UID: \"0b3d9b9e-a34f-417b-9b20-8b3565e7da51\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-rztlw" Jan 26 11:19:59 crc kubenswrapper[4867]: I0126 11:19:59.209768 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/f7d11034-ad81-48b6-bf3b-8597910b1adf-plugins-dir\") pod \"csi-hostpathplugin-zvpfm\" (UID: \"f7d11034-ad81-48b6-bf3b-8597910b1adf\") " pod="hostpath-provisioner/csi-hostpathplugin-zvpfm" Jan 26 11:19:59 crc kubenswrapper[4867]: I0126 11:19:59.209787 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/f7d11034-ad81-48b6-bf3b-8597910b1adf-socket-dir\") pod \"csi-hostpathplugin-zvpfm\" (UID: \"f7d11034-ad81-48b6-bf3b-8597910b1adf\") " pod="hostpath-provisioner/csi-hostpathplugin-zvpfm" Jan 26 11:19:59 crc kubenswrapper[4867]: I0126 11:19:59.209804 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/702e97d5-258a-4ec8-bc8f-cc700c16f813-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-6vjzt\" (UID: \"702e97d5-258a-4ec8-bc8f-cc700c16f813\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-6vjzt" Jan 26 11:19:59 crc kubenswrapper[4867]: I0126 11:19:59.209822 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/53dbe0f8-7ef4-4a92-b25a-3d052c747202-proxy-tls\") pod \"machine-config-operator-74547568cd-lrrdf\" (UID: \"53dbe0f8-7ef4-4a92-b25a-3d052c747202\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-lrrdf" Jan 26 11:19:59 crc kubenswrapper[4867]: I0126 11:19:59.209866 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/f5bb657a-0790-4c81-b7bd-861e297bbaeb-signing-cabundle\") pod \"service-ca-9c57cc56f-ksw9r\" (UID: \"f5bb657a-0790-4c81-b7bd-861e297bbaeb\") " pod="openshift-service-ca/service-ca-9c57cc56f-ksw9r" Jan 26 11:19:59 crc kubenswrapper[4867]: I0126 11:19:59.209892 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/60781074-ebcf-45cb-9a12-9193995071c1-node-bootstrap-token\") pod \"machine-config-server-qhjqn\" (UID: \"60781074-ebcf-45cb-9a12-9193995071c1\") " pod="openshift-machine-config-operator/machine-config-server-qhjqn" Jan 26 11:19:59 crc kubenswrapper[4867]: I0126 11:19:59.209919 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mwsdb\" (UniqueName: \"kubernetes.io/projected/53c0e980-1e5a-44d8-a5fd-d29fd63cfce7-kube-api-access-mwsdb\") pod \"package-server-manager-789f6589d5-bs62h\" (UID: \"53c0e980-1e5a-44d8-a5fd-d29fd63cfce7\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-bs62h" Jan 26 11:19:59 crc kubenswrapper[4867]: I0126 11:19:59.209938 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/f7d11034-ad81-48b6-bf3b-8597910b1adf-mountpoint-dir\") pod \"csi-hostpathplugin-zvpfm\" (UID: \"f7d11034-ad81-48b6-bf3b-8597910b1adf\") " pod="hostpath-provisioner/csi-hostpathplugin-zvpfm" Jan 26 11:19:59 crc kubenswrapper[4867]: I0126 11:19:59.209956 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4frfw\" (UniqueName: \"kubernetes.io/projected/0c800e71-0744-45b6-9c5a-f0c3dd9e6adc-kube-api-access-4frfw\") pod \"ingress-canary-nmb9m\" (UID: \"0c800e71-0744-45b6-9c5a-f0c3dd9e6adc\") " pod="openshift-ingress-canary/ingress-canary-nmb9m" Jan 26 11:19:59 crc kubenswrapper[4867]: I0126 11:19:59.209975 4867 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/53dbe0f8-7ef4-4a92-b25a-3d052c747202-images\") pod \"machine-config-operator-74547568cd-lrrdf\" (UID: \"53dbe0f8-7ef4-4a92-b25a-3d052c747202\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-lrrdf" Jan 26 11:19:59 crc kubenswrapper[4867]: I0126 11:19:59.210016 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/6ab404f5-5b14-49d4-80f4-2a84895d0a2f-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-hrqxh\" (UID: \"6ab404f5-5b14-49d4-80f4-2a84895d0a2f\") " pod="openshift-marketplace/marketplace-operator-79b997595-hrqxh" Jan 26 11:19:59 crc kubenswrapper[4867]: I0126 11:19:59.210034 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ghkmm\" (UniqueName: \"kubernetes.io/projected/60781074-ebcf-45cb-9a12-9193995071c1-kube-api-access-ghkmm\") pod \"machine-config-server-qhjqn\" (UID: \"60781074-ebcf-45cb-9a12-9193995071c1\") " pod="openshift-machine-config-operator/machine-config-server-qhjqn" Jan 26 11:19:59 crc kubenswrapper[4867]: I0126 11:19:59.210059 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lckxk\" (UniqueName: \"kubernetes.io/projected/4f71f9b3-6264-4e4b-876d-bf61a930a9e5-kube-api-access-lckxk\") pod \"dns-default-8rqgh\" (UID: \"4f71f9b3-6264-4e4b-876d-bf61a930a9e5\") " pod="openshift-dns/dns-default-8rqgh" Jan 26 11:19:59 crc kubenswrapper[4867]: I0126 11:19:59.210699 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/64c6d7e3-5fb6-4242-b616-2628ca519c8e-config-volume\") pod \"collect-profiles-29490435-gd8xn\" (UID: \"64c6d7e3-5fb6-4242-b616-2628ca519c8e\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29490435-gd8xn" Jan 26 11:19:59 crc kubenswrapper[4867]: I0126 11:19:59.210725 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/ca11705a-ad86-4b81-87b6-fba88013e723-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-s6j6d\" (UID: \"ca11705a-ad86-4b81-87b6-fba88013e723\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-s6j6d" Jan 26 11:19:59 crc kubenswrapper[4867]: I0126 11:19:59.212770 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/f7d11034-ad81-48b6-bf3b-8597910b1adf-registration-dir\") pod \"csi-hostpathplugin-zvpfm\" (UID: \"f7d11034-ad81-48b6-bf3b-8597910b1adf\") " pod="hostpath-provisioner/csi-hostpathplugin-zvpfm" Jan 26 11:19:59 crc kubenswrapper[4867]: I0126 11:19:59.210751 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/f7d11034-ad81-48b6-bf3b-8597910b1adf-registration-dir\") pod \"csi-hostpathplugin-zvpfm\" (UID: \"f7d11034-ad81-48b6-bf3b-8597910b1adf\") " pod="hostpath-provisioner/csi-hostpathplugin-zvpfm" Jan 26 11:19:59 crc kubenswrapper[4867]: I0126 11:19:59.212915 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqcvq\" (UniqueName: \"kubernetes.io/projected/702e97d5-258a-4ec8-bc8f-cc700c16f813-kube-api-access-cqcvq\") pod \"control-plane-machine-set-operator-78cbb6b69f-6vjzt\" (UID: \"702e97d5-258a-4ec8-bc8f-cc700c16f813\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-6vjzt" Jan 26 11:19:59 crc kubenswrapper[4867]: I0126 11:19:59.212949 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p4bp2\" (UniqueName: 
\"kubernetes.io/projected/64c6d7e3-5fb6-4242-b616-2628ca519c8e-kube-api-access-p4bp2\") pod \"collect-profiles-29490435-gd8xn\" (UID: \"64c6d7e3-5fb6-4242-b616-2628ca519c8e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490435-gd8xn" Jan 26 11:19:59 crc kubenswrapper[4867]: I0126 11:19:59.212991 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/afe23588-98b0-47bf-9092-849d7e2e5f98-proxy-tls\") pod \"machine-config-controller-84d6567774-l4cqk\" (UID: \"afe23588-98b0-47bf-9092-849d7e2e5f98\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-l4cqk" Jan 26 11:19:59 crc kubenswrapper[4867]: I0126 11:19:59.213010 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/8b27feea-5afc-4d14-969e-cbbe2047025e-profile-collector-cert\") pod \"catalog-operator-68c6474976-b2qk2\" (UID: \"8b27feea-5afc-4d14-969e-cbbe2047025e\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-b2qk2" Jan 26 11:19:59 crc kubenswrapper[4867]: I0126 11:19:59.213039 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/53dbe0f8-7ef4-4a92-b25a-3d052c747202-auth-proxy-config\") pod \"machine-config-operator-74547568cd-lrrdf\" (UID: \"53dbe0f8-7ef4-4a92-b25a-3d052c747202\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-lrrdf" Jan 26 11:19:59 crc kubenswrapper[4867]: I0126 11:19:59.213105 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4c7998be-4b54-46dc-9791-045c502be976-config\") pod \"service-ca-operator-777779d784-f7zk4\" (UID: \"4c7998be-4b54-46dc-9791-045c502be976\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-f7zk4" Jan 26 11:19:59 crc 
kubenswrapper[4867]: I0126 11:19:59.213138 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/64c6d7e3-5fb6-4242-b616-2628ca519c8e-secret-volume\") pod \"collect-profiles-29490435-gd8xn\" (UID: \"64c6d7e3-5fb6-4242-b616-2628ca519c8e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490435-gd8xn" Jan 26 11:19:59 crc kubenswrapper[4867]: I0126 11:19:59.213162 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l4t4v\" (UniqueName: \"kubernetes.io/projected/69a2cb6a-ca89-45ea-a985-ce216707b50e-kube-api-access-l4t4v\") pod \"migrator-59844c95c7-lg6cl\" (UID: \"69a2cb6a-ca89-45ea-a985-ce216707b50e\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-lg6cl" Jan 26 11:19:59 crc kubenswrapper[4867]: I0126 11:19:59.213189 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/f7d11034-ad81-48b6-bf3b-8597910b1adf-csi-data-dir\") pod \"csi-hostpathplugin-zvpfm\" (UID: \"f7d11034-ad81-48b6-bf3b-8597910b1adf\") " pod="hostpath-provisioner/csi-hostpathplugin-zvpfm" Jan 26 11:19:59 crc kubenswrapper[4867]: I0126 11:19:59.213254 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bhd6b\" (UniqueName: \"kubernetes.io/projected/f7d11034-ad81-48b6-bf3b-8597910b1adf-kube-api-access-bhd6b\") pod \"csi-hostpathplugin-zvpfm\" (UID: \"f7d11034-ad81-48b6-bf3b-8597910b1adf\") " pod="hostpath-provisioner/csi-hostpathplugin-zvpfm" Jan 26 11:19:59 crc kubenswrapper[4867]: I0126 11:19:59.213283 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/6ab404f5-5b14-49d4-80f4-2a84895d0a2f-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-hrqxh\" (UID: \"6ab404f5-5b14-49d4-80f4-2a84895d0a2f\") " 
pod="openshift-marketplace/marketplace-operator-79b997595-hrqxh" Jan 26 11:19:59 crc kubenswrapper[4867]: I0126 11:19:59.213302 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/0b3d9b9e-a34f-417b-9b20-8b3565e7da51-tmpfs\") pod \"packageserver-d55dfcdfc-rztlw\" (UID: \"0b3d9b9e-a34f-417b-9b20-8b3565e7da51\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-rztlw" Jan 26 11:19:59 crc kubenswrapper[4867]: I0126 11:19:59.213326 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/60781074-ebcf-45cb-9a12-9193995071c1-certs\") pod \"machine-config-server-qhjqn\" (UID: \"60781074-ebcf-45cb-9a12-9193995071c1\") " pod="openshift-machine-config-operator/machine-config-server-qhjqn" Jan 26 11:19:59 crc kubenswrapper[4867]: I0126 11:19:59.213346 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/afe23588-98b0-47bf-9092-849d7e2e5f98-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-l4cqk\" (UID: \"afe23588-98b0-47bf-9092-849d7e2e5f98\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-l4cqk" Jan 26 11:19:59 crc kubenswrapper[4867]: I0126 11:19:59.213385 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rqblx\" (UniqueName: \"kubernetes.io/projected/afe23588-98b0-47bf-9092-849d7e2e5f98-kube-api-access-rqblx\") pod \"machine-config-controller-84d6567774-l4cqk\" (UID: \"afe23588-98b0-47bf-9092-849d7e2e5f98\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-l4cqk" Jan 26 11:19:59 crc kubenswrapper[4867]: I0126 11:19:59.213408 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/0c800e71-0744-45b6-9c5a-f0c3dd9e6adc-cert\") 
pod \"ingress-canary-nmb9m\" (UID: \"0c800e71-0744-45b6-9c5a-f0c3dd9e6adc\") " pod="openshift-ingress-canary/ingress-canary-nmb9m" Jan 26 11:19:59 crc kubenswrapper[4867]: I0126 11:19:59.215904 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/f7d11034-ad81-48b6-bf3b-8597910b1adf-csi-data-dir\") pod \"csi-hostpathplugin-zvpfm\" (UID: \"f7d11034-ad81-48b6-bf3b-8597910b1adf\") " pod="hostpath-provisioner/csi-hostpathplugin-zvpfm" Jan 26 11:19:59 crc kubenswrapper[4867]: I0126 11:19:59.218116 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/53dbe0f8-7ef4-4a92-b25a-3d052c747202-auth-proxy-config\") pod \"machine-config-operator-74547568cd-lrrdf\" (UID: \"53dbe0f8-7ef4-4a92-b25a-3d052c747202\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-lrrdf" Jan 26 11:19:59 crc kubenswrapper[4867]: I0126 11:19:59.220381 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/f7d11034-ad81-48b6-bf3b-8597910b1adf-plugins-dir\") pod \"csi-hostpathplugin-zvpfm\" (UID: \"f7d11034-ad81-48b6-bf3b-8597910b1adf\") " pod="hostpath-provisioner/csi-hostpathplugin-zvpfm" Jan 26 11:19:59 crc kubenswrapper[4867]: I0126 11:19:59.220439 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/f7d11034-ad81-48b6-bf3b-8597910b1adf-socket-dir\") pod \"csi-hostpathplugin-zvpfm\" (UID: \"f7d11034-ad81-48b6-bf3b-8597910b1adf\") " pod="hostpath-provisioner/csi-hostpathplugin-zvpfm" Jan 26 11:19:59 crc kubenswrapper[4867]: I0126 11:19:59.221016 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4f71f9b3-6264-4e4b-876d-bf61a930a9e5-config-volume\") pod \"dns-default-8rqgh\" (UID: 
\"4f71f9b3-6264-4e4b-876d-bf61a930a9e5\") " pod="openshift-dns/dns-default-8rqgh" Jan 26 11:19:59 crc kubenswrapper[4867]: I0126 11:19:59.222727 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/64c6d7e3-5fb6-4242-b616-2628ca519c8e-config-volume\") pod \"collect-profiles-29490435-gd8xn\" (UID: \"64c6d7e3-5fb6-4242-b616-2628ca519c8e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490435-gd8xn" Jan 26 11:19:59 crc kubenswrapper[4867]: I0126 11:19:59.223781 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-9jc4l" Jan 26 11:19:59 crc kubenswrapper[4867]: I0126 11:19:59.224868 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4c7998be-4b54-46dc-9791-045c502be976-config\") pod \"service-ca-operator-777779d784-f7zk4\" (UID: \"4c7998be-4b54-46dc-9791-045c502be976\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-f7zk4" Jan 26 11:19:59 crc kubenswrapper[4867]: I0126 11:19:59.231734 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/afe23588-98b0-47bf-9092-849d7e2e5f98-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-l4cqk\" (UID: \"afe23588-98b0-47bf-9092-849d7e2e5f98\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-l4cqk" Jan 26 11:19:59 crc kubenswrapper[4867]: I0126 11:19:59.232479 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/0b3d9b9e-a34f-417b-9b20-8b3565e7da51-tmpfs\") pod \"packageserver-d55dfcdfc-rztlw\" (UID: \"0b3d9b9e-a34f-417b-9b20-8b3565e7da51\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-rztlw" Jan 26 11:19:59 crc kubenswrapper[4867]: I0126 
11:19:59.233614 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/6ab404f5-5b14-49d4-80f4-2a84895d0a2f-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-hrqxh\" (UID: \"6ab404f5-5b14-49d4-80f4-2a84895d0a2f\") " pod="openshift-marketplace/marketplace-operator-79b997595-hrqxh" Jan 26 11:19:59 crc kubenswrapper[4867]: I0126 11:19:59.235582 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/0c800e71-0744-45b6-9c5a-f0c3dd9e6adc-cert\") pod \"ingress-canary-nmb9m\" (UID: \"0c800e71-0744-45b6-9c5a-f0c3dd9e6adc\") " pod="openshift-ingress-canary/ingress-canary-nmb9m" Jan 26 11:19:59 crc kubenswrapper[4867]: E0126 11:19:59.235972 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 11:19:59.73594825 +0000 UTC m=+149.434523160 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-skdxp" (UID: "b3348ed5-3007-4ff3-b77d-ecb758f238df") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 11:19:59 crc kubenswrapper[4867]: I0126 11:19:59.236732 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/f7d11034-ad81-48b6-bf3b-8597910b1adf-mountpoint-dir\") pod \"csi-hostpathplugin-zvpfm\" (UID: \"f7d11034-ad81-48b6-bf3b-8597910b1adf\") " pod="hostpath-provisioner/csi-hostpathplugin-zvpfm" Jan 26 11:19:59 crc kubenswrapper[4867]: I0126 11:19:59.236895 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/53dbe0f8-7ef4-4a92-b25a-3d052c747202-images\") pod \"machine-config-operator-74547568cd-lrrdf\" (UID: \"53dbe0f8-7ef4-4a92-b25a-3d052c747202\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-lrrdf" Jan 26 11:19:59 crc kubenswrapper[4867]: I0126 11:19:59.237546 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-jvs97" Jan 26 11:19:59 crc kubenswrapper[4867]: I0126 11:19:59.238150 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-g6hqf" Jan 26 11:19:59 crc kubenswrapper[4867]: I0126 11:19:59.239967 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/f5bb657a-0790-4c81-b7bd-861e297bbaeb-signing-cabundle\") pod \"service-ca-9c57cc56f-ksw9r\" (UID: \"f5bb657a-0790-4c81-b7bd-861e297bbaeb\") " pod="openshift-service-ca/service-ca-9c57cc56f-ksw9r" Jan 26 11:19:59 crc kubenswrapper[4867]: I0126 11:19:59.242666 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/702e97d5-258a-4ec8-bc8f-cc700c16f813-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-6vjzt\" (UID: \"702e97d5-258a-4ec8-bc8f-cc700c16f813\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-6vjzt" Jan 26 11:19:59 crc kubenswrapper[4867]: I0126 11:19:59.243274 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/0b3d9b9e-a34f-417b-9b20-8b3565e7da51-apiservice-cert\") pod \"packageserver-d55dfcdfc-rztlw\" (UID: \"0b3d9b9e-a34f-417b-9b20-8b3565e7da51\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-rztlw" Jan 26 11:19:59 crc kubenswrapper[4867]: I0126 11:19:59.247148 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/3c13b8a9-f9d1-409e-9a46-d3cfcfd4d9b0-srv-cert\") pod \"olm-operator-6b444d44fb-xs9ns\" (UID: \"3c13b8a9-f9d1-409e-9a46-d3cfcfd4d9b0\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-xs9ns" Jan 26 11:19:59 crc kubenswrapper[4867]: I0126 11:19:59.248597 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: 
\"kubernetes.io/secret/ca11705a-ad86-4b81-87b6-fba88013e723-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-s6j6d\" (UID: \"ca11705a-ad86-4b81-87b6-fba88013e723\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-s6j6d" Jan 26 11:19:59 crc kubenswrapper[4867]: I0126 11:19:59.249083 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/485f8610-85d7-44ef-9ed2-719f3d409a58-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-bg5n8\" (UID: \"485f8610-85d7-44ef-9ed2-719f3d409a58\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-bg5n8" Jan 26 11:19:59 crc kubenswrapper[4867]: I0126 11:19:59.249489 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/60781074-ebcf-45cb-9a12-9193995071c1-node-bootstrap-token\") pod \"machine-config-server-qhjqn\" (UID: \"60781074-ebcf-45cb-9a12-9193995071c1\") " pod="openshift-machine-config-operator/machine-config-server-qhjqn" Jan 26 11:19:59 crc kubenswrapper[4867]: I0126 11:19:59.250128 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/0b3d9b9e-a34f-417b-9b20-8b3565e7da51-webhook-cert\") pod \"packageserver-d55dfcdfc-rztlw\" (UID: \"0b3d9b9e-a34f-417b-9b20-8b3565e7da51\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-rztlw" Jan 26 11:19:59 crc kubenswrapper[4867]: I0126 11:19:59.250115 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/f5bb657a-0790-4c81-b7bd-861e297bbaeb-signing-key\") pod \"service-ca-9c57cc56f-ksw9r\" (UID: \"f5bb657a-0790-4c81-b7bd-861e297bbaeb\") " pod="openshift-service-ca/service-ca-9c57cc56f-ksw9r" Jan 26 11:19:59 crc kubenswrapper[4867]: I0126 11:19:59.250857 4867 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/afe23588-98b0-47bf-9092-849d7e2e5f98-proxy-tls\") pod \"machine-config-controller-84d6567774-l4cqk\" (UID: \"afe23588-98b0-47bf-9092-849d7e2e5f98\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-l4cqk" Jan 26 11:19:59 crc kubenswrapper[4867]: I0126 11:19:59.252834 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/64c6d7e3-5fb6-4242-b616-2628ca519c8e-secret-volume\") pod \"collect-profiles-29490435-gd8xn\" (UID: \"64c6d7e3-5fb6-4242-b616-2628ca519c8e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490435-gd8xn" Jan 26 11:19:59 crc kubenswrapper[4867]: I0126 11:19:59.253480 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/8b27feea-5afc-4d14-969e-cbbe2047025e-profile-collector-cert\") pod \"catalog-operator-68c6474976-b2qk2\" (UID: \"8b27feea-5afc-4d14-969e-cbbe2047025e\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-b2qk2" Jan 26 11:19:59 crc kubenswrapper[4867]: I0126 11:19:59.254237 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/4f71f9b3-6264-4e4b-876d-bf61a930a9e5-metrics-tls\") pod \"dns-default-8rqgh\" (UID: \"4f71f9b3-6264-4e4b-876d-bf61a930a9e5\") " pod="openshift-dns/dns-default-8rqgh" Jan 26 11:19:59 crc kubenswrapper[4867]: I0126 11:19:59.254633 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/53c0e980-1e5a-44d8-a5fd-d29fd63cfce7-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-bs62h\" (UID: \"53c0e980-1e5a-44d8-a5fd-d29fd63cfce7\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-bs62h" Jan 26 11:19:59 crc kubenswrapper[4867]: 
I0126 11:19:59.255206 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/3c13b8a9-f9d1-409e-9a46-d3cfcfd4d9b0-profile-collector-cert\") pod \"olm-operator-6b444d44fb-xs9ns\" (UID: \"3c13b8a9-f9d1-409e-9a46-d3cfcfd4d9b0\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-xs9ns" Jan 26 11:19:59 crc kubenswrapper[4867]: I0126 11:19:59.257810 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/8b27feea-5afc-4d14-969e-cbbe2047025e-srv-cert\") pod \"catalog-operator-68c6474976-b2qk2\" (UID: \"8b27feea-5afc-4d14-969e-cbbe2047025e\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-b2qk2" Jan 26 11:19:59 crc kubenswrapper[4867]: I0126 11:19:59.259073 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4c7998be-4b54-46dc-9791-045c502be976-serving-cert\") pod \"service-ca-operator-777779d784-f7zk4\" (UID: \"4c7998be-4b54-46dc-9791-045c502be976\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-f7zk4" Jan 26 11:19:59 crc kubenswrapper[4867]: I0126 11:19:59.259840 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/6ab404f5-5b14-49d4-80f4-2a84895d0a2f-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-hrqxh\" (UID: \"6ab404f5-5b14-49d4-80f4-2a84895d0a2f\") " pod="openshift-marketplace/marketplace-operator-79b997595-hrqxh" Jan 26 11:19:59 crc kubenswrapper[4867]: I0126 11:19:59.262038 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-st62k\" (UniqueName: \"kubernetes.io/projected/f3f1f482-470d-4521-b4ab-76bdd0e795d0-kube-api-access-st62k\") pod \"console-operator-58897d9998-jczt5\" (UID: \"f3f1f482-470d-4521-b4ab-76bdd0e795d0\") " 
pod="openshift-console-operator/console-operator-58897d9998-jczt5" Jan 26 11:19:59 crc kubenswrapper[4867]: I0126 11:19:59.265614 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8jgn9\" (UniqueName: \"kubernetes.io/projected/24814471-72bd-4b41-9615-49f2f7115d9f-kube-api-access-8jgn9\") pod \"etcd-operator-b45778765-gn8gp\" (UID: \"24814471-72bd-4b41-9615-49f2f7115d9f\") " pod="openshift-etcd-operator/etcd-operator-b45778765-gn8gp" Jan 26 11:19:59 crc kubenswrapper[4867]: I0126 11:19:59.275576 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/secret/60781074-ebcf-45cb-9a12-9193995071c1-certs\") pod \"machine-config-server-qhjqn\" (UID: \"60781074-ebcf-45cb-9a12-9193995071c1\") " pod="openshift-machine-config-operator/machine-config-server-qhjqn" Jan 26 11:19:59 crc kubenswrapper[4867]: I0126 11:19:59.275599 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/53dbe0f8-7ef4-4a92-b25a-3d052c747202-proxy-tls\") pod \"machine-config-operator-74547568cd-lrrdf\" (UID: \"53dbe0f8-7ef4-4a92-b25a-3d052c747202\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-lrrdf" Jan 26 11:19:59 crc kubenswrapper[4867]: I0126 11:19:59.287526 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c4a78b64-f941-4ffa-bb41-2b035c2fbdee-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-m22zg\" (UID: \"c4a78b64-f941-4ffa-bb41-2b035c2fbdee\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-m22zg" Jan 26 11:19:59 crc kubenswrapper[4867]: I0126 11:19:59.291598 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-p7cdz" Jan 26 11:19:59 crc kubenswrapper[4867]: I0126 11:19:59.294246 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/2131da56-d7d3-4b0d-b134-86c8dbcad2a6-bound-sa-token\") pod \"ingress-operator-5b745b69d9-pgkmf\" (UID: \"2131da56-d7d3-4b0d-b134-86c8dbcad2a6\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-pgkmf" Jan 26 11:19:59 crc kubenswrapper[4867]: I0126 11:19:59.307567 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-m22zg" Jan 26 11:19:59 crc kubenswrapper[4867]: I0126 11:19:59.314117 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 11:19:59 crc kubenswrapper[4867]: E0126 11:19:59.314642 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 11:19:59.814625241 +0000 UTC m=+149.513200151 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 11:19:59 crc kubenswrapper[4867]: I0126 11:19:59.323697 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-gn8gp" Jan 26 11:19:59 crc kubenswrapper[4867]: I0126 11:19:59.330819 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-bg5n8" Jan 26 11:19:59 crc kubenswrapper[4867]: I0126 11:19:59.331324 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/b3348ed5-3007-4ff3-b77d-ecb758f238df-bound-sa-token\") pod \"image-registry-697d97f7c8-skdxp\" (UID: \"b3348ed5-3007-4ff3-b77d-ecb758f238df\") " pod="openshift-image-registry/image-registry-697d97f7c8-skdxp" Jan 26 11:19:59 crc kubenswrapper[4867]: I0126 11:19:59.343814 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vtjfb\" (UniqueName: \"kubernetes.io/projected/ea07ea8f-1510-4609-949b-83a3aed3ddee-kube-api-access-vtjfb\") pod \"router-default-5444994796-dmt7q\" (UID: \"ea07ea8f-1510-4609-949b-83a3aed3ddee\") " pod="openshift-ingress/router-default-5444994796-dmt7q" Jan 26 11:19:59 crc kubenswrapper[4867]: I0126 11:19:59.372446 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vfbjg\" (UniqueName: \"kubernetes.io/projected/6a27bc25-3df1-4dd2-a51d-de8e2bb5070e-kube-api-access-vfbjg\") pod \"downloads-7954f5f757-ltvwb\" (UID: 
\"6a27bc25-3df1-4dd2-a51d-de8e2bb5070e\") " pod="openshift-console/downloads-7954f5f757-ltvwb" Jan 26 11:19:59 crc kubenswrapper[4867]: I0126 11:19:59.388644 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fr8xh\" (UniqueName: \"kubernetes.io/projected/b0ef109c-57cb-46c0-958b-fe33b8cdae0b-kube-api-access-fr8xh\") pod \"kube-storage-version-migrator-operator-b67b599dd-6bj4j\" (UID: \"b0ef109c-57cb-46c0-958b-fe33b8cdae0b\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-6bj4j" Jan 26 11:19:59 crc kubenswrapper[4867]: I0126 11:19:59.389883 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-dmt7q" Jan 26 11:19:59 crc kubenswrapper[4867]: I0126 11:19:59.390586 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-9r5x7"] Jan 26 11:19:59 crc kubenswrapper[4867]: I0126 11:19:59.395023 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-m9tw6"] Jan 26 11:19:59 crc kubenswrapper[4867]: I0126 11:19:59.414372 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-544gt\" (UniqueName: \"kubernetes.io/projected/b3348ed5-3007-4ff3-b77d-ecb758f238df-kube-api-access-544gt\") pod \"image-registry-697d97f7c8-skdxp\" (UID: \"b3348ed5-3007-4ff3-b77d-ecb758f238df\") " pod="openshift-image-registry/image-registry-697d97f7c8-skdxp" Jan 26 11:19:59 crc kubenswrapper[4867]: I0126 11:19:59.416353 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-skdxp\" (UID: \"b3348ed5-3007-4ff3-b77d-ecb758f238df\") " 
pod="openshift-image-registry/image-registry-697d97f7c8-skdxp" Jan 26 11:19:59 crc kubenswrapper[4867]: E0126 11:19:59.416947 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 11:19:59.916924816 +0000 UTC m=+149.615499726 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-skdxp" (UID: "b3348ed5-3007-4ff3-b77d-ecb758f238df") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 11:19:59 crc kubenswrapper[4867]: I0126 11:19:59.417168 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bvsmh\" (UniqueName: \"kubernetes.io/projected/2131da56-d7d3-4b0d-b134-86c8dbcad2a6-kube-api-access-bvsmh\") pod \"ingress-operator-5b745b69d9-pgkmf\" (UID: \"2131da56-d7d3-4b0d-b134-86c8dbcad2a6\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-pgkmf" Jan 26 11:19:59 crc kubenswrapper[4867]: I0126 11:19:59.447278 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"a717d0bd9ac3e501a95812d4739adf3d3d730f84ad3a78559d2ab9c501b6f50f"} Jan 26 11:19:59 crc kubenswrapper[4867]: I0126 11:19:59.459203 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"8143469ba817379ffb6a223751eb81c26b705aab0e638eba76812bbe49a49a3b"} Jan 26 11:19:59 crc 
kubenswrapper[4867]: I0126 11:19:59.459297 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"9fdac275d8720f3f1476f85ea8bb9fe156e8134629b4ae2eff377a8e0b0be1f2"} Jan 26 11:19:59 crc kubenswrapper[4867]: I0126 11:19:59.463991 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-jczt5" Jan 26 11:19:59 crc kubenswrapper[4867]: I0126 11:19:59.486207 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"ea85058950f25686c7f3cc08ec8d70bc9153faec10f77ccd360c1859fd7a6b80"} Jan 26 11:19:59 crc kubenswrapper[4867]: I0126 11:19:59.486299 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"7ced96fabd0bf4e37bb7b1fd31d959868a2d4bcd0140b0f41f57d71d1cd740ea"} Jan 26 11:19:59 crc kubenswrapper[4867]: I0126 11:19:59.487660 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mwsdb\" (UniqueName: \"kubernetes.io/projected/53c0e980-1e5a-44d8-a5fd-d29fd63cfce7-kube-api-access-mwsdb\") pod \"package-server-manager-789f6589d5-bs62h\" (UID: \"53c0e980-1e5a-44d8-a5fd-d29fd63cfce7\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-bs62h" Jan 26 11:19:59 crc kubenswrapper[4867]: I0126 11:19:59.487975 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-58prs\" (UniqueName: \"kubernetes.io/projected/ca11705a-ad86-4b81-87b6-fba88013e723-kube-api-access-58prs\") pod \"multus-admission-controller-857f4d67dd-s6j6d\" (UID: 
\"ca11705a-ad86-4b81-87b6-fba88013e723\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-s6j6d" Jan 26 11:19:59 crc kubenswrapper[4867]: I0126 11:19:59.490154 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-xzrd8" event={"ID":"0880ba0d-8774-4012-ae45-24997c78c5ca","Type":"ContainerStarted","Data":"1ec804c70af6f23ea07ad302e30c3d5c43d11f2fa014c55a25fecb56d7bcbdf7"} Jan 26 11:19:59 crc kubenswrapper[4867]: I0126 11:19:59.490240 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-xzrd8" event={"ID":"0880ba0d-8774-4012-ae45-24997c78c5ca","Type":"ContainerStarted","Data":"3d51d1f068af3d5fe5626e1bb88f40fc07ab51105ea290541107ba685a0d99bb"} Jan 26 11:19:59 crc kubenswrapper[4867]: I0126 11:19:59.496173 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-ptqs7" event={"ID":"3b2fcd86-878c-4bce-a720-460a61585e50","Type":"ContainerStarted","Data":"ec841b92f479c20b4fab72f9e722410e904e2924c42502acf6cf4582acfd372c"} Jan 26 11:19:59 crc kubenswrapper[4867]: I0126 11:19:59.496264 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-ptqs7" event={"ID":"3b2fcd86-878c-4bce-a720-460a61585e50","Type":"ContainerStarted","Data":"da5d731219260b87f004ceec6da8ba23a082231f14b707a2b382239cfa59539d"} Jan 26 11:19:59 crc kubenswrapper[4867]: I0126 11:19:59.496276 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-ptqs7" event={"ID":"3b2fcd86-878c-4bce-a720-460a61585e50","Type":"ContainerStarted","Data":"ba524b9a27e7dd6888bab20bc6bbaa9155962d5c3e8b862a60247e1a1fd8cda5"} Jan 26 11:19:59 crc kubenswrapper[4867]: I0126 11:19:59.496433 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-cqcvq\" (UniqueName: \"kubernetes.io/projected/702e97d5-258a-4ec8-bc8f-cc700c16f813-kube-api-access-cqcvq\") pod \"control-plane-machine-set-operator-78cbb6b69f-6vjzt\" (UID: \"702e97d5-258a-4ec8-bc8f-cc700c16f813\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-6vjzt" Jan 26 11:19:59 crc kubenswrapper[4867]: I0126 11:19:59.498548 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-pb5rg" event={"ID":"b207fdfd-306c-4494-8c1f-560dd155cd7a","Type":"ContainerStarted","Data":"7b326d6e7c21a4db336d4707179bfe63d8415221c9525ae38bb044dd211c9507"} Jan 26 11:19:59 crc kubenswrapper[4867]: I0126 11:19:59.505213 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-vtd7b" event={"ID":"ec332d74-71c9-4401-8dfa-8674dc431b82","Type":"ContainerStarted","Data":"bc0fba31e2ff789bab15cb418f29331ff35aa6fcb05059fcf5b73f1790b9b26a"} Jan 26 11:19:59 crc kubenswrapper[4867]: I0126 11:19:59.517627 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 11:19:59 crc kubenswrapper[4867]: E0126 11:19:59.518395 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 11:20:00.018367907 +0000 UTC m=+149.716942817 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 11:19:59 crc kubenswrapper[4867]: I0126 11:19:59.520113 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-nxdt6"] Jan 26 11:19:59 crc kubenswrapper[4867]: I0126 11:19:59.522124 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mdff2\" (UniqueName: \"kubernetes.io/projected/0b3d9b9e-a34f-417b-9b20-8b3565e7da51-kube-api-access-mdff2\") pod \"packageserver-d55dfcdfc-rztlw\" (UID: \"0b3d9b9e-a34f-417b-9b20-8b3565e7da51\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-rztlw" Jan 26 11:19:59 crc kubenswrapper[4867]: I0126 11:19:59.525172 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-6vjzt" Jan 26 11:19:59 crc kubenswrapper[4867]: I0126 11:19:59.542470 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p4bp2\" (UniqueName: \"kubernetes.io/projected/64c6d7e3-5fb6-4242-b616-2628ca519c8e-kube-api-access-p4bp2\") pod \"collect-profiles-29490435-gd8xn\" (UID: \"64c6d7e3-5fb6-4242-b616-2628ca519c8e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490435-gd8xn" Jan 26 11:19:59 crc kubenswrapper[4867]: I0126 11:19:59.550317 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-n9jb5"] Jan 26 11:19:59 crc kubenswrapper[4867]: I0126 11:19:59.550710 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/downloads-7954f5f757-ltvwb" Jan 26 11:19:59 crc kubenswrapper[4867]: I0126 11:19:59.553004 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-dc94j"] Jan 26 11:19:59 crc kubenswrapper[4867]: I0126 11:19:59.572777 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ghkmm\" (UniqueName: \"kubernetes.io/projected/60781074-ebcf-45cb-9a12-9193995071c1-kube-api-access-ghkmm\") pod \"machine-config-server-qhjqn\" (UID: \"60781074-ebcf-45cb-9a12-9193995071c1\") " pod="openshift-machine-config-operator/machine-config-server-qhjqn" Jan 26 11:19:59 crc kubenswrapper[4867]: I0126 11:19:59.578141 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lckxk\" (UniqueName: \"kubernetes.io/projected/4f71f9b3-6264-4e4b-876d-bf61a930a9e5-kube-api-access-lckxk\") pod \"dns-default-8rqgh\" (UID: \"4f71f9b3-6264-4e4b-876d-bf61a930a9e5\") " pod="openshift-dns/dns-default-8rqgh" Jan 26 11:19:59 crc kubenswrapper[4867]: I0126 11:19:59.586712 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-pgkmf" Jan 26 11:19:59 crc kubenswrapper[4867]: W0126 11:19:59.595715 4867 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda721247b_3436_4bb4_bc5c_ab4e94db0b41.slice/crio-673ab4295917e01a19206667bf9dd0ba6fdff1e07bf922b7ed9174b5086d078d WatchSource:0}: Error finding container 673ab4295917e01a19206667bf9dd0ba6fdff1e07bf922b7ed9174b5086d078d: Status 404 returned error can't find the container with id 673ab4295917e01a19206667bf9dd0ba6fdff1e07bf922b7ed9174b5086d078d Jan 26 11:19:59 crc kubenswrapper[4867]: I0126 11:19:59.605492 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jxzbp\" (UniqueName: \"kubernetes.io/projected/8b27feea-5afc-4d14-969e-cbbe2047025e-kube-api-access-jxzbp\") pod \"catalog-operator-68c6474976-b2qk2\" (UID: \"8b27feea-5afc-4d14-969e-cbbe2047025e\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-b2qk2" Jan 26 11:19:59 crc kubenswrapper[4867]: I0126 11:19:59.619822 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-skdxp\" (UID: \"b3348ed5-3007-4ff3-b77d-ecb758f238df\") " pod="openshift-image-registry/image-registry-697d97f7c8-skdxp" Jan 26 11:19:59 crc kubenswrapper[4867]: E0126 11:19:59.621698 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 11:20:00.121674049 +0000 UTC m=+149.820249149 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-skdxp" (UID: "b3348ed5-3007-4ff3-b77d-ecb758f238df") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 11:19:59 crc kubenswrapper[4867]: I0126 11:19:59.627453 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l4t4v\" (UniqueName: \"kubernetes.io/projected/69a2cb6a-ca89-45ea-a985-ce216707b50e-kube-api-access-l4t4v\") pod \"migrator-59844c95c7-lg6cl\" (UID: \"69a2cb6a-ca89-45ea-a985-ce216707b50e\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-lg6cl" Jan 26 11:19:59 crc kubenswrapper[4867]: I0126 11:19:59.639173 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bhd6b\" (UniqueName: \"kubernetes.io/projected/f7d11034-ad81-48b6-bf3b-8597910b1adf-kube-api-access-bhd6b\") pod \"csi-hostpathplugin-zvpfm\" (UID: \"f7d11034-ad81-48b6-bf3b-8597910b1adf\") " pod="hostpath-provisioner/csi-hostpathplugin-zvpfm" Jan 26 11:19:59 crc kubenswrapper[4867]: I0126 11:19:59.658667 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-6bj4j" Jan 26 11:19:59 crc kubenswrapper[4867]: W0126 11:19:59.680776 4867 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda4874120_574e_4f70_a7d9_5c6c91e41f41.slice/crio-2b9cc97c2ede7b5148de359bcc71682239f929af6dd39cbdc16bc0684cad41c0 WatchSource:0}: Error finding container 2b9cc97c2ede7b5148de359bcc71682239f929af6dd39cbdc16bc0684cad41c0: Status 404 returned error can't find the container with id 2b9cc97c2ede7b5148de359bcc71682239f929af6dd39cbdc16bc0684cad41c0 Jan 26 11:19:59 crc kubenswrapper[4867]: I0126 11:19:59.684094 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4cfmv\" (UniqueName: \"kubernetes.io/projected/4c7998be-4b54-46dc-9791-045c502be976-kube-api-access-4cfmv\") pod \"service-ca-operator-777779d784-f7zk4\" (UID: \"4c7998be-4b54-46dc-9791-045c502be976\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-f7zk4" Jan 26 11:19:59 crc kubenswrapper[4867]: I0126 11:19:59.685146 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dn8l4\" (UniqueName: \"kubernetes.io/projected/53dbe0f8-7ef4-4a92-b25a-3d052c747202-kube-api-access-dn8l4\") pod \"machine-config-operator-74547568cd-lrrdf\" (UID: \"53dbe0f8-7ef4-4a92-b25a-3d052c747202\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-lrrdf" Jan 26 11:19:59 crc kubenswrapper[4867]: I0126 11:19:59.704361 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4frfw\" (UniqueName: \"kubernetes.io/projected/0c800e71-0744-45b6-9c5a-f0c3dd9e6adc-kube-api-access-4frfw\") pod \"ingress-canary-nmb9m\" (UID: \"0c800e71-0744-45b6-9c5a-f0c3dd9e6adc\") " pod="openshift-ingress-canary/ingress-canary-nmb9m" Jan 26 11:19:59 crc kubenswrapper[4867]: I0126 
11:19:59.707626 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-lg6cl" Jan 26 11:19:59 crc kubenswrapper[4867]: I0126 11:19:59.717467 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-s6j6d" Jan 26 11:19:59 crc kubenswrapper[4867]: I0126 11:19:59.720750 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-87b94\" (UniqueName: \"kubernetes.io/projected/6ab404f5-5b14-49d4-80f4-2a84895d0a2f-kube-api-access-87b94\") pod \"marketplace-operator-79b997595-hrqxh\" (UID: \"6ab404f5-5b14-49d4-80f4-2a84895d0a2f\") " pod="openshift-marketplace/marketplace-operator-79b997595-hrqxh" Jan 26 11:19:59 crc kubenswrapper[4867]: I0126 11:19:59.720939 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 11:19:59 crc kubenswrapper[4867]: E0126 11:19:59.721131 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 11:20:00.221105666 +0000 UTC m=+149.919680576 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 11:19:59 crc kubenswrapper[4867]: I0126 11:19:59.721610 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-skdxp\" (UID: \"b3348ed5-3007-4ff3-b77d-ecb758f238df\") " pod="openshift-image-registry/image-registry-697d97f7c8-skdxp" Jan 26 11:19:59 crc kubenswrapper[4867]: E0126 11:19:59.722343 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 11:20:00.222325349 +0000 UTC m=+149.920900259 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-skdxp" (UID: "b3348ed5-3007-4ff3-b77d-ecb758f238df") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 11:19:59 crc kubenswrapper[4867]: I0126 11:19:59.737956 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-rztlw" Jan 26 11:19:59 crc kubenswrapper[4867]: I0126 11:19:59.740403 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rqblx\" (UniqueName: \"kubernetes.io/projected/afe23588-98b0-47bf-9092-849d7e2e5f98-kube-api-access-rqblx\") pod \"machine-config-controller-84d6567774-l4cqk\" (UID: \"afe23588-98b0-47bf-9092-849d7e2e5f98\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-l4cqk" Jan 26 11:19:59 crc kubenswrapper[4867]: I0126 11:19:59.745186 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-hrqxh" Jan 26 11:19:59 crc kubenswrapper[4867]: I0126 11:19:59.752661 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-lrrdf" Jan 26 11:19:59 crc kubenswrapper[4867]: I0126 11:19:59.761285 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-b2qk2" Jan 26 11:19:59 crc kubenswrapper[4867]: I0126 11:19:59.763517 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hm6hf\" (UniqueName: \"kubernetes.io/projected/3c13b8a9-f9d1-409e-9a46-d3cfcfd4d9b0-kube-api-access-hm6hf\") pod \"olm-operator-6b444d44fb-xs9ns\" (UID: \"3c13b8a9-f9d1-409e-9a46-d3cfcfd4d9b0\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-xs9ns" Jan 26 11:19:59 crc kubenswrapper[4867]: I0126 11:19:59.770629 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29490435-gd8xn" Jan 26 11:19:59 crc kubenswrapper[4867]: I0126 11:19:59.778536 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-bs62h" Jan 26 11:19:59 crc kubenswrapper[4867]: I0126 11:19:59.783396 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mqs98\" (UniqueName: \"kubernetes.io/projected/f5bb657a-0790-4c81-b7bd-861e297bbaeb-kube-api-access-mqs98\") pod \"service-ca-9c57cc56f-ksw9r\" (UID: \"f5bb657a-0790-4c81-b7bd-861e297bbaeb\") " pod="openshift-service-ca/service-ca-9c57cc56f-ksw9r" Jan 26 11:19:59 crc kubenswrapper[4867]: I0126 11:19:59.787817 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-xs9ns" Jan 26 11:19:59 crc kubenswrapper[4867]: I0126 11:19:59.798425 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-f7zk4" Jan 26 11:19:59 crc kubenswrapper[4867]: I0126 11:19:59.811903 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-qhjqn" Jan 26 11:19:59 crc kubenswrapper[4867]: I0126 11:19:59.821207 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-8rqgh" Jan 26 11:19:59 crc kubenswrapper[4867]: I0126 11:19:59.823572 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 11:19:59 crc kubenswrapper[4867]: E0126 11:19:59.823928 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-01-26 11:20:00.323907354 +0000 UTC m=+150.022482254 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 11:19:59 crc kubenswrapper[4867]: I0126 11:19:59.830340 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-nmb9m" Jan 26 11:19:59 crc kubenswrapper[4867]: I0126 11:19:59.858709 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-zvpfm" Jan 26 11:19:59 crc kubenswrapper[4867]: I0126 11:19:59.929553 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-skdxp\" (UID: \"b3348ed5-3007-4ff3-b77d-ecb758f238df\") " pod="openshift-image-registry/image-registry-697d97f7c8-skdxp" Jan 26 11:19:59 crc kubenswrapper[4867]: E0126 11:19:59.930767 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 11:20:00.430750321 +0000 UTC m=+150.129325231 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-skdxp" (UID: "b3348ed5-3007-4ff3-b77d-ecb758f238df") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 11:20:00 crc kubenswrapper[4867]: I0126 11:20:00.000704 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-ksw9r" Jan 26 11:20:00 crc kubenswrapper[4867]: I0126 11:20:00.022524 4867 csr.go:261] certificate signing request csr-62xzc is approved, waiting to be issued Jan 26 11:20:00 crc kubenswrapper[4867]: I0126 11:20:00.026707 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-l4cqk" Jan 26 11:20:00 crc kubenswrapper[4867]: I0126 11:20:00.034148 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 11:20:00 crc kubenswrapper[4867]: E0126 11:20:00.034764 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 11:20:00.53473448 +0000 UTC m=+150.233309390 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 11:20:00 crc kubenswrapper[4867]: I0126 11:20:00.038733 4867 csr.go:257] certificate signing request csr-62xzc is issued Jan 26 11:20:00 crc kubenswrapper[4867]: I0126 11:20:00.146415 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-skdxp\" (UID: \"b3348ed5-3007-4ff3-b77d-ecb758f238df\") " pod="openshift-image-registry/image-registry-697d97f7c8-skdxp" Jan 26 11:20:00 crc kubenswrapper[4867]: E0126 11:20:00.146790 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 11:20:00.646772996 +0000 UTC m=+150.345347906 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-skdxp" (UID: "b3348ed5-3007-4ff3-b77d-ecb758f238df") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 11:20:00 crc kubenswrapper[4867]: I0126 11:20:00.165449 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-4hjn2"] Jan 26 11:20:00 crc kubenswrapper[4867]: I0126 11:20:00.251896 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 11:20:00 crc kubenswrapper[4867]: E0126 11:20:00.252308 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 11:20:00.752258276 +0000 UTC m=+150.450833186 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 11:20:00 crc kubenswrapper[4867]: I0126 11:20:00.253544 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-skdxp\" (UID: \"b3348ed5-3007-4ff3-b77d-ecb758f238df\") " pod="openshift-image-registry/image-registry-697d97f7c8-skdxp" Jan 26 11:20:00 crc kubenswrapper[4867]: E0126 11:20:00.253979 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 11:20:00.753960202 +0000 UTC m=+150.452535112 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-skdxp" (UID: "b3348ed5-3007-4ff3-b77d-ecb758f238df") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 11:20:00 crc kubenswrapper[4867]: I0126 11:20:00.355072 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 11:20:00 crc kubenswrapper[4867]: E0126 11:20:00.355609 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 11:20:00.855585398 +0000 UTC m=+150.554160308 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 11:20:00 crc kubenswrapper[4867]: I0126 11:20:00.460194 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-skdxp\" (UID: \"b3348ed5-3007-4ff3-b77d-ecb758f238df\") " pod="openshift-image-registry/image-registry-697d97f7c8-skdxp" Jan 26 11:20:00 crc kubenswrapper[4867]: E0126 11:20:00.462125 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 11:20:00.962105186 +0000 UTC m=+150.660680106 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-skdxp" (UID: "b3348ed5-3007-4ff3-b77d-ecb758f238df") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 11:20:00 crc kubenswrapper[4867]: I0126 11:20:00.554387 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"d28a0e9ebb584f37de79e7b55fd1a704aa5a2c0dff1297be91c42c5508e238a8"} Jan 26 11:20:00 crc kubenswrapper[4867]: I0126 11:20:00.554452 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 11:20:00 crc kubenswrapper[4867]: I0126 11:20:00.562303 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 11:20:00 crc kubenswrapper[4867]: E0126 11:20:00.562776 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 11:20:01.062741286 +0000 UTC m=+150.761316316 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 11:20:00 crc kubenswrapper[4867]: I0126 11:20:00.610657 4867 generic.go:334] "Generic (PLEG): container finished" podID="95f962fb-c0fe-4583-8d7f-cac4f22110e9" containerID="eaa83f31cc8dfeb865034c537539bc7e3409fcf5684133fe75306693eec8669f" exitCode=0 Jan 26 11:20:00 crc kubenswrapper[4867]: I0126 11:20:00.615809 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-pb5rg" event={"ID":"b207fdfd-306c-4494-8c1f-560dd155cd7a","Type":"ContainerStarted","Data":"153bd3fd9a3634d08d0806af467b88894b0cbd7426ea7d8ca81314b390c49f99"} Jan 26 11:20:00 crc kubenswrapper[4867]: I0126 11:20:00.615863 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-m9tw6" Jan 26 11:20:00 crc kubenswrapper[4867]: I0126 11:20:00.615876 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-pb5rg" event={"ID":"b207fdfd-306c-4494-8c1f-560dd155cd7a","Type":"ContainerStarted","Data":"fd09367d5fc2db64e9111c073370d096c7b9c36d9a1a1f09b258038d8399c0eb"} Jan 26 11:20:00 crc kubenswrapper[4867]: I0126 11:20:00.615895 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-dc94j" event={"ID":"a721247b-3436-4bb4-bc5c-ab4e94db0b41","Type":"ContainerStarted","Data":"673ab4295917e01a19206667bf9dd0ba6fdff1e07bf922b7ed9174b5086d078d"} Jan 26 11:20:00 crc kubenswrapper[4867]: I0126 11:20:00.615923 4867 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-nxdt6" event={"ID":"a4874120-574e-4f70-a7d9-5c6c91e41f41","Type":"ContainerStarted","Data":"2b9cc97c2ede7b5148de359bcc71682239f929af6dd39cbdc16bc0684cad41c0"} Jan 26 11:20:00 crc kubenswrapper[4867]: I0126 11:20:00.615937 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-n9jb5" event={"ID":"a91b5a18-2743-473f-8116-5fb1e348d05c","Type":"ContainerStarted","Data":"c18ae9af67d0cc42c62ef574e6a2e36b13d8eb0f61cdd8bdda55e082663a33a4"} Jan 26 11:20:00 crc kubenswrapper[4867]: I0126 11:20:00.615952 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-m9tw6" event={"ID":"64cfae17-8e43-4fd9-8f7c-2f4996b6351c","Type":"ContainerStarted","Data":"55a0a6bf58bee853be5883f07a341ad31aa1a6badbcbf82f853130372ec1794a"} Jan 26 11:20:00 crc kubenswrapper[4867]: I0126 11:20:00.615963 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-m9tw6" event={"ID":"64cfae17-8e43-4fd9-8f7c-2f4996b6351c","Type":"ContainerStarted","Data":"293bd1fc7a7e05043daefa98e31930f31b4a55e8d602108287a79654c0f3f34d"} Jan 26 11:20:00 crc kubenswrapper[4867]: I0126 11:20:00.617969 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-vtd7b" event={"ID":"ec332d74-71c9-4401-8dfa-8674dc431b82","Type":"ContainerStarted","Data":"c94ffc04b87e93264989ee430615395a9469cf580424a372bb29a82f68fc5192"} Jan 26 11:20:00 crc kubenswrapper[4867]: I0126 11:20:00.618003 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-dmt7q" event={"ID":"ea07ea8f-1510-4609-949b-83a3aed3ddee","Type":"ContainerStarted","Data":"cc625510be4dcddbc0f14ea86ff6a64256f365517fd86f79a9234cea459c499e"} Jan 26 11:20:00 crc 
kubenswrapper[4867]: I0126 11:20:00.618022 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-dmt7q" event={"ID":"ea07ea8f-1510-4609-949b-83a3aed3ddee","Type":"ContainerStarted","Data":"a8bc6dba6d934a5f1425e4a82f5a030ed9f5ee7a9f56d9285ec29d5814a5d642"} Jan 26 11:20:00 crc kubenswrapper[4867]: I0126 11:20:00.618035 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-9r5x7" event={"ID":"95f962fb-c0fe-4583-8d7f-cac4f22110e9","Type":"ContainerDied","Data":"eaa83f31cc8dfeb865034c537539bc7e3409fcf5684133fe75306693eec8669f"} Jan 26 11:20:00 crc kubenswrapper[4867]: I0126 11:20:00.618049 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-9r5x7" event={"ID":"95f962fb-c0fe-4583-8d7f-cac4f22110e9","Type":"ContainerStarted","Data":"b5f6cae95a0cd7b0434462aa4355c9a3395a18cf9b8f8677e97b2ca5db93f935"} Jan 26 11:20:00 crc kubenswrapper[4867]: I0126 11:20:00.618060 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-4hjn2" event={"ID":"12eb0c01-c4f3-489f-87dd-bbc03f111814","Type":"ContainerStarted","Data":"eaac5fb7c0f14683dcc53979d278fe30072c6a3fde497bb6c4ead0ac71eb7de1"} Jan 26 11:20:00 crc kubenswrapper[4867]: I0126 11:20:00.633597 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-ptqs7" podStartSLOduration=129.633577136 podStartE2EDuration="2m9.633577136s" podCreationTimestamp="2026-01-26 11:17:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 11:20:00.631702597 +0000 UTC m=+150.330277507" watchObservedRunningTime="2026-01-26 11:20:00.633577136 +0000 UTC m=+150.332152046" Jan 26 11:20:00 crc kubenswrapper[4867]: I0126 11:20:00.667099 4867 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-skdxp\" (UID: \"b3348ed5-3007-4ff3-b77d-ecb758f238df\") " pod="openshift-image-registry/image-registry-697d97f7c8-skdxp" Jan 26 11:20:00 crc kubenswrapper[4867]: E0126 11:20:00.668490 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 11:20:01.168476823 +0000 UTC m=+150.867051733 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-skdxp" (UID: "b3348ed5-3007-4ff3-b77d-ecb758f238df") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 11:20:00 crc kubenswrapper[4867]: I0126 11:20:00.681582 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-6vjzt"] Jan 26 11:20:00 crc kubenswrapper[4867]: I0126 11:20:00.711713 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-9jc4l"] Jan 26 11:20:00 crc kubenswrapper[4867]: I0126 11:20:00.711763 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-jvs97"] Jan 26 11:20:00 crc kubenswrapper[4867]: I0126 11:20:00.715786 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-gn8gp"] Jan 26 11:20:00 crc kubenswrapper[4867]: I0126 
11:20:00.722274 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-p7cdz"] Jan 26 11:20:00 crc kubenswrapper[4867]: W0126 11:20:00.740624 4867 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod24814471_72bd_4b41_9615_49f2f7115d9f.slice/crio-9c650ff919d92a408f13dd143bb3b64149911b2bac88ee14dc6231e0a857eaae WatchSource:0}: Error finding container 9c650ff919d92a408f13dd143bb3b64149911b2bac88ee14dc6231e0a857eaae: Status 404 returned error can't find the container with id 9c650ff919d92a408f13dd143bb3b64149911b2bac88ee14dc6231e0a857eaae Jan 26 11:20:00 crc kubenswrapper[4867]: I0126 11:20:00.769461 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 11:20:00 crc kubenswrapper[4867]: E0126 11:20:00.770288 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 11:20:01.270238863 +0000 UTC m=+150.968813783 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 11:20:00 crc kubenswrapper[4867]: I0126 11:20:00.771122 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-skdxp\" (UID: \"b3348ed5-3007-4ff3-b77d-ecb758f238df\") " pod="openshift-image-registry/image-registry-697d97f7c8-skdxp" Jan 26 11:20:00 crc kubenswrapper[4867]: E0126 11:20:00.788843 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 11:20:01.288814122 +0000 UTC m=+150.987389032 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-skdxp" (UID: "b3348ed5-3007-4ff3-b77d-ecb758f238df") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 11:20:00 crc kubenswrapper[4867]: I0126 11:20:00.854207 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-fg7cn"] Jan 26 11:20:00 crc kubenswrapper[4867]: I0126 11:20:00.882477 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 11:20:00 crc kubenswrapper[4867]: I0126 11:20:00.885155 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-g6hqf"] Jan 26 11:20:00 crc kubenswrapper[4867]: I0126 11:20:00.889478 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-bmqm4"] Jan 26 11:20:00 crc kubenswrapper[4867]: W0126 11:20:00.890621 4867 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6670fa93_70e2_4047_b449_1bf939336210.slice/crio-3dee0500a17649729de299830f2db44eba3eceb8359430ab5d497f75d0895f48 WatchSource:0}: Error finding container 3dee0500a17649729de299830f2db44eba3eceb8359430ab5d497f75d0895f48: Status 404 returned error can't find the container with id 
3dee0500a17649729de299830f2db44eba3eceb8359430ab5d497f75d0895f48 Jan 26 11:20:00 crc kubenswrapper[4867]: E0126 11:20:00.893670 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 11:20:01.393635483 +0000 UTC m=+151.092210393 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 11:20:00 crc kubenswrapper[4867]: I0126 11:20:00.899632 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-m22zg"] Jan 26 11:20:00 crc kubenswrapper[4867]: I0126 11:20:00.954531 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-m9tw6" Jan 26 11:20:00 crc kubenswrapper[4867]: I0126 11:20:00.994467 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-skdxp\" (UID: \"b3348ed5-3007-4ff3-b77d-ecb758f238df\") " pod="openshift-image-registry/image-registry-697d97f7c8-skdxp" Jan 26 11:20:00 crc kubenswrapper[4867]: E0126 11:20:00.994838 4867 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 11:20:01.494824158 +0000 UTC m=+151.193399068 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-skdxp" (UID: "b3348ed5-3007-4ff3-b77d-ecb758f238df") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 11:20:01 crc kubenswrapper[4867]: I0126 11:20:01.049651 4867 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2027-01-26 11:15:00 +0000 UTC, rotation deadline is 2026-10-15 00:08:41.642307431 +0000 UTC Jan 26 11:20:01 crc kubenswrapper[4867]: I0126 11:20:01.049718 4867 certificate_manager.go:356] kubernetes.io/kubelet-serving: Waiting 6276h48m40.59259402s for next certificate rotation Jan 26 11:20:01 crc kubenswrapper[4867]: I0126 11:20:01.096594 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 11:20:01 crc kubenswrapper[4867]: E0126 11:20:01.097073 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 11:20:01.597045081 +0000 UTC m=+151.295620001 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 11:20:01 crc kubenswrapper[4867]: I0126 11:20:01.124817 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-6bj4j"] Jan 26 11:20:01 crc kubenswrapper[4867]: I0126 11:20:01.124880 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-ltvwb"] Jan 26 11:20:01 crc kubenswrapper[4867]: I0126 11:20:01.127014 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-pgkmf"] Jan 26 11:20:01 crc kubenswrapper[4867]: I0126 11:20:01.133007 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-bg5n8"] Jan 26 11:20:01 crc kubenswrapper[4867]: I0126 11:20:01.136277 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-jczt5"] Jan 26 11:20:01 crc kubenswrapper[4867]: W0126 11:20:01.170912 4867 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf3f1f482_470d_4521_b4ab_76bdd0e795d0.slice/crio-885f4c9359d68934900290d7c5ddc4692b1b659729807207cdad31ad994aa490 WatchSource:0}: Error finding container 885f4c9359d68934900290d7c5ddc4692b1b659729807207cdad31ad994aa490: Status 404 returned error can't find the container with id 885f4c9359d68934900290d7c5ddc4692b1b659729807207cdad31ad994aa490 Jan 26 11:20:01 crc kubenswrapper[4867]: 
I0126 11:20:01.205341 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-skdxp\" (UID: \"b3348ed5-3007-4ff3-b77d-ecb758f238df\") " pod="openshift-image-registry/image-registry-697d97f7c8-skdxp" Jan 26 11:20:01 crc kubenswrapper[4867]: E0126 11:20:01.205804 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 11:20:01.705782607 +0000 UTC m=+151.404357517 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-skdxp" (UID: "b3348ed5-3007-4ff3-b77d-ecb758f238df") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 11:20:01 crc kubenswrapper[4867]: I0126 11:20:01.246012 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-lg6cl"] Jan 26 11:20:01 crc kubenswrapper[4867]: I0126 11:20:01.246133 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-xzrd8" podStartSLOduration=130.24611223 podStartE2EDuration="2m10.24611223s" podCreationTimestamp="2026-01-26 11:17:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 11:20:01.240437508 +0000 UTC m=+150.939012418" watchObservedRunningTime="2026-01-26 11:20:01.24611223 
+0000 UTC m=+150.944687140" Jan 26 11:20:01 crc kubenswrapper[4867]: W0126 11:20:01.248742 4867 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6a27bc25_3df1_4dd2_a51d_de8e2bb5070e.slice/crio-c5a72a3525e579de6d454783f04860bd19f4f747a45cd33a2c05acbe0bd06310 WatchSource:0}: Error finding container c5a72a3525e579de6d454783f04860bd19f4f747a45cd33a2c05acbe0bd06310: Status 404 returned error can't find the container with id c5a72a3525e579de6d454783f04860bd19f4f747a45cd33a2c05acbe0bd06310 Jan 26 11:20:01 crc kubenswrapper[4867]: I0126 11:20:01.287385 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-vtd7b" podStartSLOduration=130.287355147 podStartE2EDuration="2m10.287355147s" podCreationTimestamp="2026-01-26 11:17:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 11:20:01.283378249 +0000 UTC m=+150.981953169" watchObservedRunningTime="2026-01-26 11:20:01.287355147 +0000 UTC m=+150.985930057" Jan 26 11:20:01 crc kubenswrapper[4867]: W0126 11:20:01.288051 4867 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod69a2cb6a_ca89_45ea_a985_ce216707b50e.slice/crio-01b4b05f74912e5dfb523d23cffac3d5b0177d0d8aa84e98acdfa2ccf5bb5720 WatchSource:0}: Error finding container 01b4b05f74912e5dfb523d23cffac3d5b0177d0d8aa84e98acdfa2ccf5bb5720: Status 404 returned error can't find the container with id 01b4b05f74912e5dfb523d23cffac3d5b0177d0d8aa84e98acdfa2ccf5bb5720 Jan 26 11:20:01 crc kubenswrapper[4867]: I0126 11:20:01.307027 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 11:20:01 crc kubenswrapper[4867]: E0126 11:20:01.307415 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 11:20:01.807345963 +0000 UTC m=+151.505920883 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 11:20:01 crc kubenswrapper[4867]: I0126 11:20:01.308960 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-skdxp\" (UID: \"b3348ed5-3007-4ff3-b77d-ecb758f238df\") " pod="openshift-image-registry/image-registry-697d97f7c8-skdxp" Jan 26 11:20:01 crc kubenswrapper[4867]: E0126 11:20:01.309001 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 11:20:01.808973526 +0000 UTC m=+151.507548436 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-skdxp" (UID: "b3348ed5-3007-4ff3-b77d-ecb758f238df") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 11:20:01 crc kubenswrapper[4867]: I0126 11:20:01.417040 4867 patch_prober.go:28] interesting pod/router-default-5444994796-dmt7q container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 26 11:20:01 crc kubenswrapper[4867]: [-]has-synced failed: reason withheld Jan 26 11:20:01 crc kubenswrapper[4867]: [+]process-running ok Jan 26 11:20:01 crc kubenswrapper[4867]: healthz check failed Jan 26 11:20:01 crc kubenswrapper[4867]: I0126 11:20:01.417141 4867 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-dmt7q" podUID="ea07ea8f-1510-4609-949b-83a3aed3ddee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 26 11:20:01 crc kubenswrapper[4867]: I0126 11:20:01.418351 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 11:20:01 crc kubenswrapper[4867]: E0126 11:20:01.425301 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-01-26 11:20:01.922207754 +0000 UTC m=+151.620782664 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 11:20:01 crc kubenswrapper[4867]: I0126 11:20:01.431919 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-5444994796-dmt7q" Jan 26 11:20:01 crc kubenswrapper[4867]: I0126 11:20:01.431965 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-8rqgh"] Jan 26 11:20:01 crc kubenswrapper[4867]: I0126 11:20:01.456203 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-b2qk2"] Jan 26 11:20:01 crc kubenswrapper[4867]: I0126 11:20:01.458864 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-xs9ns"] Jan 26 11:20:01 crc kubenswrapper[4867]: I0126 11:20:01.461232 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-rztlw"] Jan 26 11:20:01 crc kubenswrapper[4867]: I0126 11:20:01.464594 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-s6j6d"] Jan 26 11:20:01 crc kubenswrapper[4867]: I0126 11:20:01.468960 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-lrrdf"] Jan 26 11:20:01 crc kubenswrapper[4867]: I0126 11:20:01.470752 4867 pod_startup_latency_tracker.go:104] "Observed pod 
startup duration" pod="openshift-machine-api/machine-api-operator-5694c8668f-pb5rg" podStartSLOduration=129.470722236 podStartE2EDuration="2m9.470722236s" podCreationTimestamp="2026-01-26 11:17:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 11:20:01.463275575 +0000 UTC m=+151.161850495" watchObservedRunningTime="2026-01-26 11:20:01.470722236 +0000 UTC m=+151.169297146" Jan 26 11:20:01 crc kubenswrapper[4867]: I0126 11:20:01.520180 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-skdxp\" (UID: \"b3348ed5-3007-4ff3-b77d-ecb758f238df\") " pod="openshift-image-registry/image-registry-697d97f7c8-skdxp" Jan 26 11:20:01 crc kubenswrapper[4867]: E0126 11:20:01.520588 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 11:20:02.020573573 +0000 UTC m=+151.719148483 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-skdxp" (UID: "b3348ed5-3007-4ff3-b77d-ecb758f238df") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 11:20:01 crc kubenswrapper[4867]: I0126 11:20:01.527461 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-f7zk4"] Jan 26 11:20:01 crc kubenswrapper[4867]: I0126 11:20:01.621993 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 11:20:01 crc kubenswrapper[4867]: I0126 11:20:01.622093 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-bmqm4" event={"ID":"50f0dbec-6ed9-47e1-8b7a-4e4a2e1475b4","Type":"ContainerStarted","Data":"4c72ffcec8654c5611bec22497ef9068ba63dcd07bc1fb5c48913985a2692977"} Jan 26 11:20:01 crc kubenswrapper[4867]: E0126 11:20:01.622825 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 11:20:02.122802985 +0000 UTC m=+151.821377895 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 11:20:01 crc kubenswrapper[4867]: I0126 11:20:01.623179 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-pgkmf" event={"ID":"2131da56-d7d3-4b0d-b134-86c8dbcad2a6","Type":"ContainerStarted","Data":"770efa096a9455507891bfec3aa1fabdee2235234f252a8db87bf47eb5bc60de"} Jan 26 11:20:01 crc kubenswrapper[4867]: I0126 11:20:01.623910 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-lg6cl" event={"ID":"69a2cb6a-ca89-45ea-a985-ce216707b50e","Type":"ContainerStarted","Data":"01b4b05f74912e5dfb523d23cffac3d5b0177d0d8aa84e98acdfa2ccf5bb5720"} Jan 26 11:20:01 crc kubenswrapper[4867]: I0126 11:20:01.624785 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-6bj4j" event={"ID":"b0ef109c-57cb-46c0-958b-fe33b8cdae0b","Type":"ContainerStarted","Data":"6073bb9b140db5093e99bd95ffa1cccbca93a9ed654d625d615aa91976061eee"} Jan 26 11:20:01 crc kubenswrapper[4867]: I0126 11:20:01.625558 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-6vjzt" event={"ID":"702e97d5-258a-4ec8-bc8f-cc700c16f813","Type":"ContainerStarted","Data":"2a9b54fbeb941244f5b14787e4d712e34c91a66281ed426f5d6fc5482561f49f"} Jan 26 11:20:01 crc kubenswrapper[4867]: I0126 11:20:01.626714 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-m22zg" event={"ID":"c4a78b64-f941-4ffa-bb41-2b035c2fbdee","Type":"ContainerStarted","Data":"28b6f3742563ce1e60fb1d34659b3732629538badb2b0c6a7e9e6d540a4f7f7a"} Jan 26 11:20:01 crc kubenswrapper[4867]: I0126 11:20:01.629332 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-jczt5" event={"ID":"f3f1f482-470d-4521-b4ab-76bdd0e795d0","Type":"ContainerStarted","Data":"885f4c9359d68934900290d7c5ddc4692b1b659729807207cdad31ad994aa490"} Jan 26 11:20:01 crc kubenswrapper[4867]: I0126 11:20:01.630547 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-g6hqf" event={"ID":"81b0efbf-3d4c-4f0f-b2bd-84b5af701c2e","Type":"ContainerStarted","Data":"542031df802159159647eac884a2ba6b8110d29ca56ce7aba5ab3ef1df634fcb"} Jan 26 11:20:01 crc kubenswrapper[4867]: I0126 11:20:01.631389 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-bg5n8" event={"ID":"485f8610-85d7-44ef-9ed2-719f3d409a58","Type":"ContainerStarted","Data":"7670e194038b624d934c1a8ed9077918ffa9d8ab1a374a0203601571362c8f9f"} Jan 26 11:20:01 crc kubenswrapper[4867]: I0126 11:20:01.632316 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-jvs97" event={"ID":"6203c5b2-2d8f-46c5-a31c-59190d111d7d","Type":"ContainerStarted","Data":"f669c662e790544b078ddb8686e34d48dc6276fabf58ab31d2502699f2dfc6bc"} Jan 26 11:20:01 crc kubenswrapper[4867]: I0126 11:20:01.633279 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-fg7cn" event={"ID":"27074e02-cda1-4d86-bef7-69aafc47ad94","Type":"ContainerStarted","Data":"54b1f23218469eacc3fe69cee5124e964e2212db2af01c78f2a88b0c27ff6c15"} Jan 26 
11:20:01 crc kubenswrapper[4867]: I0126 11:20:01.634870 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-n9jb5" event={"ID":"a91b5a18-2743-473f-8116-5fb1e348d05c","Type":"ContainerStarted","Data":"c7845838c24acaade17cb50361911deb057f972e499e1b98631dd4b1b197f346"} Jan 26 11:20:01 crc kubenswrapper[4867]: I0126 11:20:01.636989 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-9jc4l" event={"ID":"6670fa93-70e2-4047-b449-1bf939336210","Type":"ContainerStarted","Data":"3dee0500a17649729de299830f2db44eba3eceb8359430ab5d497f75d0895f48"} Jan 26 11:20:01 crc kubenswrapper[4867]: I0126 11:20:01.637653 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-ltvwb" event={"ID":"6a27bc25-3df1-4dd2-a51d-de8e2bb5070e","Type":"ContainerStarted","Data":"c5a72a3525e579de6d454783f04860bd19f4f747a45cd33a2c05acbe0bd06310"} Jan 26 11:20:01 crc kubenswrapper[4867]: I0126 11:20:01.638321 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-qhjqn" event={"ID":"60781074-ebcf-45cb-9a12-9193995071c1","Type":"ContainerStarted","Data":"5e27f31764cbd97e7fbba1a941893fbd99a2329da917dca8c8dff7d80b6dae03"} Jan 26 11:20:01 crc kubenswrapper[4867]: I0126 11:20:01.639055 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-gn8gp" event={"ID":"24814471-72bd-4b41-9615-49f2f7115d9f","Type":"ContainerStarted","Data":"9c650ff919d92a408f13dd143bb3b64149911b2bac88ee14dc6231e0a857eaae"} Jan 26 11:20:01 crc kubenswrapper[4867]: I0126 11:20:01.672911 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress/router-default-5444994796-dmt7q" podStartSLOduration=130.672887199 podStartE2EDuration="2m10.672887199s" podCreationTimestamp="2026-01-26 11:17:51 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 11:20:01.669499559 +0000 UTC m=+151.368074469" watchObservedRunningTime="2026-01-26 11:20:01.672887199 +0000 UTC m=+151.371462109" Jan 26 11:20:01 crc kubenswrapper[4867]: I0126 11:20:01.723499 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-skdxp\" (UID: \"b3348ed5-3007-4ff3-b77d-ecb758f238df\") " pod="openshift-image-registry/image-registry-697d97f7c8-skdxp" Jan 26 11:20:01 crc kubenswrapper[4867]: E0126 11:20:01.723942 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 11:20:02.223922878 +0000 UTC m=+151.922497788 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-skdxp" (UID: "b3348ed5-3007-4ff3-b77d-ecb758f238df") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 11:20:01 crc kubenswrapper[4867]: I0126 11:20:01.825098 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 11:20:01 crc kubenswrapper[4867]: E0126 11:20:01.825402 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 11:20:02.32534604 +0000 UTC m=+152.023920950 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 11:20:01 crc kubenswrapper[4867]: I0126 11:20:01.826861 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-skdxp\" (UID: \"b3348ed5-3007-4ff3-b77d-ecb758f238df\") " pod="openshift-image-registry/image-registry-697d97f7c8-skdxp" Jan 26 11:20:01 crc kubenswrapper[4867]: E0126 11:20:01.828447 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 11:20:02.328412242 +0000 UTC m=+152.026987362 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-skdxp" (UID: "b3348ed5-3007-4ff3-b77d-ecb758f238df") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 11:20:01 crc kubenswrapper[4867]: I0126 11:20:01.830199 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-m9tw6" podStartSLOduration=129.830170439 podStartE2EDuration="2m9.830170439s" podCreationTimestamp="2026-01-26 11:17:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 11:20:01.828180716 +0000 UTC m=+151.526755616" watchObservedRunningTime="2026-01-26 11:20:01.830170439 +0000 UTC m=+151.528745349" Jan 26 11:20:01 crc kubenswrapper[4867]: I0126 11:20:01.929433 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 11:20:01 crc kubenswrapper[4867]: E0126 11:20:01.929699 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 11:20:02.429660678 +0000 UTC m=+152.128235618 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 11:20:01 crc kubenswrapper[4867]: I0126 11:20:01.930565 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-skdxp\" (UID: \"b3348ed5-3007-4ff3-b77d-ecb758f238df\") " pod="openshift-image-registry/image-registry-697d97f7c8-skdxp" Jan 26 11:20:01 crc kubenswrapper[4867]: E0126 11:20:01.931163 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 11:20:02.431144178 +0000 UTC m=+152.129719088 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-skdxp" (UID: "b3348ed5-3007-4ff3-b77d-ecb758f238df") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 11:20:02 crc kubenswrapper[4867]: I0126 11:20:02.031731 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 11:20:02 crc kubenswrapper[4867]: E0126 11:20:02.032257 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 11:20:02.532216269 +0000 UTC m=+152.230791179 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 11:20:02 crc kubenswrapper[4867]: I0126 11:20:02.133132 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-skdxp\" (UID: \"b3348ed5-3007-4ff3-b77d-ecb758f238df\") " pod="openshift-image-registry/image-registry-697d97f7c8-skdxp" Jan 26 11:20:02 crc kubenswrapper[4867]: E0126 11:20:02.133536 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 11:20:02.633520598 +0000 UTC m=+152.332095508 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-skdxp" (UID: "b3348ed5-3007-4ff3-b77d-ecb758f238df") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 11:20:02 crc kubenswrapper[4867]: I0126 11:20:02.234827 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 11:20:02 crc kubenswrapper[4867]: E0126 11:20:02.235780 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 11:20:02.73574618 +0000 UTC m=+152.434321120 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 11:20:02 crc kubenswrapper[4867]: I0126 11:20:02.236107 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-skdxp\" (UID: \"b3348ed5-3007-4ff3-b77d-ecb758f238df\") " pod="openshift-image-registry/image-registry-697d97f7c8-skdxp" Jan 26 11:20:02 crc kubenswrapper[4867]: E0126 11:20:02.236746 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 11:20:02.736714475 +0000 UTC m=+152.435289415 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-skdxp" (UID: "b3348ed5-3007-4ff3-b77d-ecb758f238df") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 11:20:02 crc kubenswrapper[4867]: I0126 11:20:02.337566 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 11:20:02 crc kubenswrapper[4867]: E0126 11:20:02.337880 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 11:20:02.837827359 +0000 UTC m=+152.536402279 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 11:20:02 crc kubenswrapper[4867]: I0126 11:20:02.337997 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-skdxp\" (UID: \"b3348ed5-3007-4ff3-b77d-ecb758f238df\") " pod="openshift-image-registry/image-registry-697d97f7c8-skdxp" Jan 26 11:20:02 crc kubenswrapper[4867]: E0126 11:20:02.338563 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 11:20:02.838543738 +0000 UTC m=+152.537118678 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-skdxp" (UID: "b3348ed5-3007-4ff3-b77d-ecb758f238df") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 11:20:02 crc kubenswrapper[4867]: I0126 11:20:02.397329 4867 patch_prober.go:28] interesting pod/router-default-5444994796-dmt7q container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 26 11:20:02 crc kubenswrapper[4867]: [-]has-synced failed: reason withheld Jan 26 11:20:02 crc kubenswrapper[4867]: [+]process-running ok Jan 26 11:20:02 crc kubenswrapper[4867]: healthz check failed Jan 26 11:20:02 crc kubenswrapper[4867]: I0126 11:20:02.397444 4867 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-dmt7q" podUID="ea07ea8f-1510-4609-949b-83a3aed3ddee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 26 11:20:02 crc kubenswrapper[4867]: I0126 11:20:02.439125 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 11:20:02 crc kubenswrapper[4867]: E0126 11:20:02.439580 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-01-26 11:20:02.939520317 +0000 UTC m=+152.638095237 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 11:20:02 crc kubenswrapper[4867]: I0126 11:20:02.439968 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-skdxp\" (UID: \"b3348ed5-3007-4ff3-b77d-ecb758f238df\") " pod="openshift-image-registry/image-registry-697d97f7c8-skdxp" Jan 26 11:20:02 crc kubenswrapper[4867]: E0126 11:20:02.440539 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 11:20:02.940515773 +0000 UTC m=+152.639090683 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-skdxp" (UID: "b3348ed5-3007-4ff3-b77d-ecb758f238df") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 11:20:02 crc kubenswrapper[4867]: I0126 11:20:02.541815 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 11:20:02 crc kubenswrapper[4867]: E0126 11:20:02.542064 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 11:20:03.042027657 +0000 UTC m=+152.740602567 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 11:20:02 crc kubenswrapper[4867]: I0126 11:20:02.542353 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-skdxp\" (UID: \"b3348ed5-3007-4ff3-b77d-ecb758f238df\") " pod="openshift-image-registry/image-registry-697d97f7c8-skdxp" Jan 26 11:20:02 crc kubenswrapper[4867]: E0126 11:20:02.542788 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 11:20:03.042777737 +0000 UTC m=+152.741352647 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-skdxp" (UID: "b3348ed5-3007-4ff3-b77d-ecb758f238df") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 11:20:02 crc kubenswrapper[4867]: I0126 11:20:02.643419 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 11:20:02 crc kubenswrapper[4867]: E0126 11:20:02.643668 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 11:20:03.143624483 +0000 UTC m=+152.842199393 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 11:20:02 crc kubenswrapper[4867]: I0126 11:20:02.643846 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-skdxp\" (UID: \"b3348ed5-3007-4ff3-b77d-ecb758f238df\") " pod="openshift-image-registry/image-registry-697d97f7c8-skdxp" Jan 26 11:20:02 crc kubenswrapper[4867]: E0126 11:20:02.644354 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 11:20:03.144345942 +0000 UTC m=+152.842920852 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-skdxp" (UID: "b3348ed5-3007-4ff3-b77d-ecb758f238df") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 11:20:02 crc kubenswrapper[4867]: I0126 11:20:02.750307 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 11:20:02 crc kubenswrapper[4867]: E0126 11:20:02.751575 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 11:20:03.251553868 +0000 UTC m=+152.950128778 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 11:20:02 crc kubenswrapper[4867]: I0126 11:20:02.756804 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-zvpfm"] Jan 26 11:20:02 crc kubenswrapper[4867]: I0126 11:20:02.756869 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-l4cqk"] Jan 26 11:20:02 crc kubenswrapper[4867]: I0126 11:20:02.793810 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-ksw9r"] Jan 26 11:20:02 crc kubenswrapper[4867]: I0126 11:20:02.816672 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-nmb9m"] Jan 26 11:20:02 crc kubenswrapper[4867]: I0126 11:20:02.829126 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29490435-gd8xn"] Jan 26 11:20:02 crc kubenswrapper[4867]: I0126 11:20:02.833589 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-hrqxh"] Jan 26 11:20:02 crc kubenswrapper[4867]: I0126 11:20:02.836195 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-bs62h"] Jan 26 11:20:02 crc kubenswrapper[4867]: I0126 11:20:02.852458 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-skdxp\" (UID: \"b3348ed5-3007-4ff3-b77d-ecb758f238df\") " pod="openshift-image-registry/image-registry-697d97f7c8-skdxp" Jan 26 11:20:02 crc kubenswrapper[4867]: E0126 11:20:02.852851 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 11:20:03.352836185 +0000 UTC m=+153.051411095 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-skdxp" (UID: "b3348ed5-3007-4ff3-b77d-ecb758f238df") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 11:20:02 crc kubenswrapper[4867]: I0126 11:20:02.953953 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 11:20:02 crc kubenswrapper[4867]: E0126 11:20:02.954364 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 11:20:03.454346319 +0000 UTC m=+153.152921219 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 11:20:03 crc kubenswrapper[4867]: W0126 11:20:03.030652 4867 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod64c6d7e3_5fb6_4242_b616_2628ca519c8e.slice/crio-4f3f66556ef8fb33aceb669fe25d280f7c339e4e070a22906ea33e5c940051d9 WatchSource:0}: Error finding container 4f3f66556ef8fb33aceb669fe25d280f7c339e4e070a22906ea33e5c940051d9: Status 404 returned error can't find the container with id 4f3f66556ef8fb33aceb669fe25d280f7c339e4e070a22906ea33e5c940051d9 Jan 26 11:20:03 crc kubenswrapper[4867]: I0126 11:20:03.059875 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-skdxp\" (UID: \"b3348ed5-3007-4ff3-b77d-ecb758f238df\") " pod="openshift-image-registry/image-registry-697d97f7c8-skdxp" Jan 26 11:20:03 crc kubenswrapper[4867]: E0126 11:20:03.060313 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 11:20:03.560297491 +0000 UTC m=+153.258872401 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-skdxp" (UID: "b3348ed5-3007-4ff3-b77d-ecb758f238df") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 11:20:03 crc kubenswrapper[4867]: I0126 11:20:03.163985 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 11:20:03 crc kubenswrapper[4867]: E0126 11:20:03.164118 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 11:20:03.664091776 +0000 UTC m=+153.362666686 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 11:20:03 crc kubenswrapper[4867]: I0126 11:20:03.164565 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-skdxp\" (UID: \"b3348ed5-3007-4ff3-b77d-ecb758f238df\") " pod="openshift-image-registry/image-registry-697d97f7c8-skdxp" Jan 26 11:20:03 crc kubenswrapper[4867]: E0126 11:20:03.164983 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 11:20:03.66497147 +0000 UTC m=+153.363546380 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-skdxp" (UID: "b3348ed5-3007-4ff3-b77d-ecb758f238df") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 11:20:03 crc kubenswrapper[4867]: I0126 11:20:03.268549 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 11:20:03 crc kubenswrapper[4867]: E0126 11:20:03.268848 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 11:20:03.768829486 +0000 UTC m=+153.467404396 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 11:20:03 crc kubenswrapper[4867]: I0126 11:20:03.369646 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-skdxp\" (UID: \"b3348ed5-3007-4ff3-b77d-ecb758f238df\") " pod="openshift-image-registry/image-registry-697d97f7c8-skdxp" Jan 26 11:20:03 crc kubenswrapper[4867]: E0126 11:20:03.370107 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 11:20:03.870082402 +0000 UTC m=+153.568657312 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-skdxp" (UID: "b3348ed5-3007-4ff3-b77d-ecb758f238df") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 11:20:03 crc kubenswrapper[4867]: I0126 11:20:03.405024 4867 patch_prober.go:28] interesting pod/router-default-5444994796-dmt7q container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 26 11:20:03 crc kubenswrapper[4867]: [-]has-synced failed: reason withheld Jan 26 11:20:03 crc kubenswrapper[4867]: [+]process-running ok Jan 26 11:20:03 crc kubenswrapper[4867]: healthz check failed Jan 26 11:20:03 crc kubenswrapper[4867]: I0126 11:20:03.405123 4867 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-dmt7q" podUID="ea07ea8f-1510-4609-949b-83a3aed3ddee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 26 11:20:03 crc kubenswrapper[4867]: E0126 11:20:03.471324 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 11:20:03.971279037 +0000 UTC m=+153.669853957 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 11:20:03 crc kubenswrapper[4867]: I0126 11:20:03.471049 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 11:20:03 crc kubenswrapper[4867]: I0126 11:20:03.472674 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-skdxp\" (UID: \"b3348ed5-3007-4ff3-b77d-ecb758f238df\") " pod="openshift-image-registry/image-registry-697d97f7c8-skdxp" Jan 26 11:20:03 crc kubenswrapper[4867]: E0126 11:20:03.473340 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 11:20:03.973328502 +0000 UTC m=+153.671903582 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-skdxp" (UID: "b3348ed5-3007-4ff3-b77d-ecb758f238df") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 11:20:03 crc kubenswrapper[4867]: I0126 11:20:03.575647 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 11:20:03 crc kubenswrapper[4867]: E0126 11:20:03.576014 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 11:20:04.075980747 +0000 UTC m=+153.774555647 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 11:20:03 crc kubenswrapper[4867]: I0126 11:20:03.576064 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-skdxp\" (UID: \"b3348ed5-3007-4ff3-b77d-ecb758f238df\") " pod="openshift-image-registry/image-registry-697d97f7c8-skdxp" Jan 26 11:20:03 crc kubenswrapper[4867]: E0126 11:20:03.577439 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 11:20:04.077427875 +0000 UTC m=+153.776002785 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-skdxp" (UID: "b3348ed5-3007-4ff3-b77d-ecb758f238df") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 11:20:03 crc kubenswrapper[4867]: I0126 11:20:03.673663 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-8rqgh" event={"ID":"4f71f9b3-6264-4e4b-876d-bf61a930a9e5","Type":"ContainerStarted","Data":"717d496e4be9a001983e56ecbb936b6dd3f6cc220ec7bbdb3594cdc8c3961c1e"} Jan 26 11:20:03 crc kubenswrapper[4867]: I0126 11:20:03.675144 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-lrrdf" event={"ID":"53dbe0f8-7ef4-4a92-b25a-3d052c747202","Type":"ContainerStarted","Data":"2bc68adee03179735ca021f9e1e42ff0641348d64c41bc1632fca097863b6b37"} Jan 26 11:20:03 crc kubenswrapper[4867]: I0126 11:20:03.678186 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 11:20:03 crc kubenswrapper[4867]: I0126 11:20:03.701686 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-9jc4l" event={"ID":"6670fa93-70e2-4047-b449-1bf939336210","Type":"ContainerStarted","Data":"f4e414b6a8d8800939da7c8e93908abd837618d05e40109cc91a71f9a6a53344"} Jan 26 11:20:03 crc kubenswrapper[4867]: I0126 11:20:03.703428 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-controller-manager/controller-manager-879f6c89f-9jc4l" Jan 26 11:20:03 crc kubenswrapper[4867]: I0126 11:20:03.705742 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-bs62h" event={"ID":"53c0e980-1e5a-44d8-a5fd-d29fd63cfce7","Type":"ContainerStarted","Data":"a753da8b4b00decc727b14c8aef80a82364ea21a75c5e67210c22d4276dd05cb"} Jan 26 11:20:03 crc kubenswrapper[4867]: E0126 11:20:03.678685 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 11:20:04.17863571 +0000 UTC m=+153.877210630 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 11:20:03 crc kubenswrapper[4867]: I0126 11:20:03.736813 4867 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-9jc4l container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.5:8443/healthz\": dial tcp 10.217.0.5:8443: connect: connection refused" start-of-body= Jan 26 11:20:03 crc kubenswrapper[4867]: I0126 11:20:03.736925 4867 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-9jc4l" podUID="6670fa93-70e2-4047-b449-1bf939336210" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.5:8443/healthz\": dial tcp 10.217.0.5:8443: 
connect: connection refused" Jan 26 11:20:03 crc kubenswrapper[4867]: I0126 11:20:03.774816 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-xs9ns" event={"ID":"3c13b8a9-f9d1-409e-9a46-d3cfcfd4d9b0","Type":"ContainerStarted","Data":"31911d9b48072b4db4141cba4c6392a7a620d0713a8fccb2d8d65ca959a052b9"} Jan 26 11:20:03 crc kubenswrapper[4867]: I0126 11:20:03.780685 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-skdxp\" (UID: \"b3348ed5-3007-4ff3-b77d-ecb758f238df\") " pod="openshift-image-registry/image-registry-697d97f7c8-skdxp" Jan 26 11:20:03 crc kubenswrapper[4867]: E0126 11:20:03.783061 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 11:20:04.283046411 +0000 UTC m=+153.981621521 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-skdxp" (UID: "b3348ed5-3007-4ff3-b77d-ecb758f238df") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 11:20:03 crc kubenswrapper[4867]: I0126 11:20:03.791848 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-pgkmf" event={"ID":"2131da56-d7d3-4b0d-b134-86c8dbcad2a6","Type":"ContainerStarted","Data":"e2d171bbc28f269a9f754e9b5fe386fb42c10b74d7b85470e70510df48c7cbc0"} Jan 26 11:20:03 crc kubenswrapper[4867]: I0126 11:20:03.809976 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-hrqxh" event={"ID":"6ab404f5-5b14-49d4-80f4-2a84895d0a2f","Type":"ContainerStarted","Data":"2826e4fe61c3d7fe19de295670118775762f2e6d5ccf1e5b369ff1944f2d251b"} Jan 26 11:20:03 crc kubenswrapper[4867]: I0126 11:20:03.823651 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-f7zk4" event={"ID":"4c7998be-4b54-46dc-9791-045c502be976","Type":"ContainerStarted","Data":"ee804ca90cb9bbb782b1e12708d502c2811fb4e65cd93a0ac83cc659d85ac14b"} Jan 26 11:20:03 crc kubenswrapper[4867]: I0126 11:20:03.828210 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-4hjn2" event={"ID":"12eb0c01-c4f3-489f-87dd-bbc03f111814","Type":"ContainerStarted","Data":"3c5397983815ae5376981c91f1cb09a4f4467f1b064a0dd9de1546ad0c2a559b"} Jan 26 11:20:03 crc kubenswrapper[4867]: I0126 11:20:03.835964 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-rztlw" 
event={"ID":"0b3d9b9e-a34f-417b-9b20-8b3565e7da51","Type":"ContainerStarted","Data":"0ab690450dc9e129f7ab26fd2e7fd27369c86ef859178289eaa8e50d88d5f918"} Jan 26 11:20:03 crc kubenswrapper[4867]: I0126 11:20:03.861799 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-zvpfm" event={"ID":"f7d11034-ad81-48b6-bf3b-8597910b1adf","Type":"ContainerStarted","Data":"97e93c83fedf889e601c86c6ce00bc74fd1f407691debe52d5a3c62ab5585f9b"} Jan 26 11:20:03 crc kubenswrapper[4867]: I0126 11:20:03.869378 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-jvs97" event={"ID":"6203c5b2-2d8f-46c5-a31c-59190d111d7d","Type":"ContainerDied","Data":"976685aaf3ae481391b1c7fccd996c16457592cb446b6ee931ad136d54bc4e56"} Jan 26 11:20:03 crc kubenswrapper[4867]: I0126 11:20:03.870750 4867 generic.go:334] "Generic (PLEG): container finished" podID="6203c5b2-2d8f-46c5-a31c-59190d111d7d" containerID="976685aaf3ae481391b1c7fccd996c16457592cb446b6ee931ad136d54bc4e56" exitCode=0 Jan 26 11:20:03 crc kubenswrapper[4867]: I0126 11:20:03.882242 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 11:20:03 crc kubenswrapper[4867]: E0126 11:20:03.882875 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 11:20:04.382859339 +0000 UTC m=+154.081434249 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 11:20:03 crc kubenswrapper[4867]: I0126 11:20:03.887432 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-nmb9m" event={"ID":"0c800e71-0744-45b6-9c5a-f0c3dd9e6adc","Type":"ContainerStarted","Data":"ab8044d992f1fbccef249efc991bb6154761b372ae7e41c5a9c43d8e4e564a84"} Jan 26 11:20:03 crc kubenswrapper[4867]: I0126 11:20:03.896467 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-l4cqk" event={"ID":"afe23588-98b0-47bf-9092-849d7e2e5f98","Type":"ContainerStarted","Data":"72553a54f4d4a7c349d6d1279eb3dc753a6b9a0822ebda67256e87e5771067a2"} Jan 26 11:20:03 crc kubenswrapper[4867]: I0126 11:20:03.906001 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-879f6c89f-9jc4l" podStartSLOduration=132.905977409 podStartE2EDuration="2m12.905977409s" podCreationTimestamp="2026-01-26 11:17:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 11:20:03.756737086 +0000 UTC m=+153.455311996" watchObservedRunningTime="2026-01-26 11:20:03.905977409 +0000 UTC m=+153.604552319" Jan 26 11:20:03 crc kubenswrapper[4867]: I0126 11:20:03.913316 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-m22zg" 
event={"ID":"c4a78b64-f941-4ffa-bb41-2b035c2fbdee","Type":"ContainerStarted","Data":"9d390ec8091bb8fad16c2a917b47aa7a67ba0e98893fb774056ec596783b71ca"} Jan 26 11:20:03 crc kubenswrapper[4867]: I0126 11:20:03.919316 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-b2qk2" event={"ID":"8b27feea-5afc-4d14-969e-cbbe2047025e","Type":"ContainerStarted","Data":"d71a01584fb720ff90110a8f4d5c46974b57189839b98726539b1dda7ddf7024"} Jan 26 11:20:03 crc kubenswrapper[4867]: I0126 11:20:03.920271 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-ksw9r" event={"ID":"f5bb657a-0790-4c81-b7bd-861e297bbaeb","Type":"ContainerStarted","Data":"a10d1d841a632995036db5af409486e3b6392555092bca67df3886aceb49b17e"} Jan 26 11:20:03 crc kubenswrapper[4867]: I0126 11:20:03.920980 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-s6j6d" event={"ID":"ca11705a-ad86-4b81-87b6-fba88013e723","Type":"ContainerStarted","Data":"eb71ebeccf3f3948bd34b1589c227dd0081bef36ad96e873771a9f699c6c1756"} Jan 26 11:20:03 crc kubenswrapper[4867]: I0126 11:20:03.922091 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-fg7cn" event={"ID":"27074e02-cda1-4d86-bef7-69aafc47ad94","Type":"ContainerStarted","Data":"29cbcc8e76e698469ed383e9f97d8342422e830e833f43a0b1a8d8a4e489e7a6"} Jan 26 11:20:03 crc kubenswrapper[4867]: I0126 11:20:03.925697 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-dc94j" event={"ID":"a721247b-3436-4bb4-bc5c-ab4e94db0b41","Type":"ContainerStarted","Data":"4b9b8df891414fa75c12aaeab647daa3c346d333f5ace563708249f9392cf0e9"} Jan 26 11:20:03 crc kubenswrapper[4867]: I0126 11:20:03.938056 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-operator-lifecycle-manager/collect-profiles-29490435-gd8xn" event={"ID":"64c6d7e3-5fb6-4242-b616-2628ca519c8e","Type":"ContainerStarted","Data":"4f3f66556ef8fb33aceb669fe25d280f7c339e4e070a22906ea33e5c940051d9"} Jan 26 11:20:03 crc kubenswrapper[4867]: I0126 11:20:03.948783 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-9r5x7" event={"ID":"95f962fb-c0fe-4583-8d7f-cac4f22110e9","Type":"ContainerStarted","Data":"564727ac625461e7d90cd974ab2da2ec831edffc49be3e7a2d0d63c1255adfe2"} Jan 26 11:20:03 crc kubenswrapper[4867]: I0126 11:20:03.948947 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-7777fb866f-9r5x7" Jan 26 11:20:03 crc kubenswrapper[4867]: I0126 11:20:03.974505 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-fg7cn" podStartSLOduration=131.974476827 podStartE2EDuration="2m11.974476827s" podCreationTimestamp="2026-01-26 11:17:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 11:20:03.948172921 +0000 UTC m=+153.646747831" watchObservedRunningTime="2026-01-26 11:20:03.974476827 +0000 UTC m=+153.673051737" Jan 26 11:20:03 crc kubenswrapper[4867]: I0126 11:20:03.974709 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-gn8gp" event={"ID":"24814471-72bd-4b41-9615-49f2f7115d9f","Type":"ContainerStarted","Data":"a71bd6c930c1c6ce3b971626648cc60358bb9681416842606a5e7e7f98ccb5c5"} Jan 26 11:20:03 crc kubenswrapper[4867]: I0126 11:20:03.984732 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-skdxp\" (UID: \"b3348ed5-3007-4ff3-b77d-ecb758f238df\") " pod="openshift-image-registry/image-registry-697d97f7c8-skdxp" Jan 26 11:20:03 crc kubenswrapper[4867]: E0126 11:20:03.986830 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 11:20:04.486813748 +0000 UTC m=+154.185388648 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-skdxp" (UID: "b3348ed5-3007-4ff3-b77d-ecb758f238df") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 11:20:03 crc kubenswrapper[4867]: I0126 11:20:03.991695 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-6vjzt" event={"ID":"702e97d5-258a-4ec8-bc8f-cc700c16f813","Type":"ContainerStarted","Data":"c78980ccf3f7e3380ac577d51e00d14e9956d2e3c5542fe31f77c34cf67a547c"} Jan 26 11:20:03 crc kubenswrapper[4867]: I0126 11:20:03.994747 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-p7cdz" event={"ID":"4f064632-2f38-4059-b361-aa528f19ddeb","Type":"ContainerStarted","Data":"602b0aaf56a3feba9e7d4a2821ad9722c37746240d37b9e62d7233e23296e3f6"} Jan 26 11:20:03 crc kubenswrapper[4867]: I0126 11:20:03.996392 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-qhjqn" 
event={"ID":"60781074-ebcf-45cb-9a12-9193995071c1","Type":"ContainerStarted","Data":"689ed3e0fe08cce08c651679ab06daebe32ed5e4bc341a1fce0ca43684fb812d"} Jan 26 11:20:04 crc kubenswrapper[4867]: I0126 11:20:04.002816 4867 generic.go:334] "Generic (PLEG): container finished" podID="a4874120-574e-4f70-a7d9-5c6c91e41f41" containerID="400b5c70cf572d6b8a622265a36b4216fde821807820a6d3dfcd8ff9a0ee45a6" exitCode=0 Jan 26 11:20:04 crc kubenswrapper[4867]: I0126 11:20:04.002946 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-nxdt6" event={"ID":"a4874120-574e-4f70-a7d9-5c6c91e41f41","Type":"ContainerDied","Data":"400b5c70cf572d6b8a622265a36b4216fde821807820a6d3dfcd8ff9a0ee45a6"} Jan 26 11:20:04 crc kubenswrapper[4867]: I0126 11:20:04.003238 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-config-operator/openshift-config-operator-7777fb866f-9r5x7" podStartSLOduration=133.003195908 podStartE2EDuration="2m13.003195908s" podCreationTimestamp="2026-01-26 11:17:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 11:20:04.002273193 +0000 UTC m=+153.700848103" watchObservedRunningTime="2026-01-26 11:20:04.003195908 +0000 UTC m=+153.701770818" Jan 26 11:20:04 crc kubenswrapper[4867]: I0126 11:20:04.005188 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-f9d7485db-dc94j" podStartSLOduration=133.005171581 podStartE2EDuration="2m13.005171581s" podCreationTimestamp="2026-01-26 11:17:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 11:20:03.973902192 +0000 UTC m=+153.672477102" watchObservedRunningTime="2026-01-26 11:20:04.005171581 +0000 UTC m=+153.703746491" Jan 26 11:20:04 crc kubenswrapper[4867]: I0126 11:20:04.026306 4867 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd-operator/etcd-operator-b45778765-gn8gp" podStartSLOduration=133.026282607 podStartE2EDuration="2m13.026282607s" podCreationTimestamp="2026-01-26 11:17:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 11:20:04.024611983 +0000 UTC m=+153.723186893" watchObservedRunningTime="2026-01-26 11:20:04.026282607 +0000 UTC m=+153.724857517" Jan 26 11:20:04 crc kubenswrapper[4867]: I0126 11:20:04.046047 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-server-qhjqn" podStartSLOduration=8.046023216 podStartE2EDuration="8.046023216s" podCreationTimestamp="2026-01-26 11:19:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 11:20:04.045970165 +0000 UTC m=+153.744545085" watchObservedRunningTime="2026-01-26 11:20:04.046023216 +0000 UTC m=+153.744598126" Jan 26 11:20:04 crc kubenswrapper[4867]: I0126 11:20:04.083202 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-6vjzt" podStartSLOduration=132.083181174 podStartE2EDuration="2m12.083181174s" podCreationTimestamp="2026-01-26 11:17:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 11:20:04.081706034 +0000 UTC m=+153.780280954" watchObservedRunningTime="2026-01-26 11:20:04.083181174 +0000 UTC m=+153.781756084" Jan 26 11:20:04 crc kubenswrapper[4867]: I0126 11:20:04.087356 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 11:20:04 crc kubenswrapper[4867]: E0126 11:20:04.087746 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 11:20:04.587731205 +0000 UTC m=+154.286306105 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 11:20:04 crc kubenswrapper[4867]: I0126 11:20:04.155096 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-558db77b4-n9jb5" podStartSLOduration=133.155065012 podStartE2EDuration="2m13.155065012s" podCreationTimestamp="2026-01-26 11:17:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 11:20:04.110416654 +0000 UTC m=+153.808991564" watchObservedRunningTime="2026-01-26 11:20:04.155065012 +0000 UTC m=+153.853639922" Jan 26 11:20:04 crc kubenswrapper[4867]: I0126 11:20:04.191513 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-skdxp\" (UID: 
\"b3348ed5-3007-4ff3-b77d-ecb758f238df\") " pod="openshift-image-registry/image-registry-697d97f7c8-skdxp" Jan 26 11:20:04 crc kubenswrapper[4867]: E0126 11:20:04.199401 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 11:20:04.699378071 +0000 UTC m=+154.397952981 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-skdxp" (UID: "b3348ed5-3007-4ff3-b77d-ecb758f238df") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 11:20:04 crc kubenswrapper[4867]: I0126 11:20:04.300840 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 11:20:04 crc kubenswrapper[4867]: E0126 11:20:04.301025 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 11:20:04.800995407 +0000 UTC m=+154.499570317 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 11:20:04 crc kubenswrapper[4867]: I0126 11:20:04.301796 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-skdxp\" (UID: \"b3348ed5-3007-4ff3-b77d-ecb758f238df\") " pod="openshift-image-registry/image-registry-697d97f7c8-skdxp" Jan 26 11:20:04 crc kubenswrapper[4867]: E0126 11:20:04.302345 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 11:20:04.802334173 +0000 UTC m=+154.500909083 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-skdxp" (UID: "b3348ed5-3007-4ff3-b77d-ecb758f238df") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 11:20:04 crc kubenswrapper[4867]: I0126 11:20:04.408983 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 11:20:04 crc kubenswrapper[4867]: I0126 11:20:04.409492 4867 patch_prober.go:28] interesting pod/router-default-5444994796-dmt7q container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 26 11:20:04 crc kubenswrapper[4867]: [-]has-synced failed: reason withheld Jan 26 11:20:04 crc kubenswrapper[4867]: [+]process-running ok Jan 26 11:20:04 crc kubenswrapper[4867]: healthz check failed Jan 26 11:20:04 crc kubenswrapper[4867]: I0126 11:20:04.409545 4867 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-dmt7q" podUID="ea07ea8f-1510-4609-949b-83a3aed3ddee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 26 11:20:04 crc kubenswrapper[4867]: E0126 11:20:04.410064 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-01-26 11:20:04.910041452 +0000 UTC m=+154.608616362 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 11:20:04 crc kubenswrapper[4867]: I0126 11:20:04.410448 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-skdxp\" (UID: \"b3348ed5-3007-4ff3-b77d-ecb758f238df\") " pod="openshift-image-registry/image-registry-697d97f7c8-skdxp" Jan 26 11:20:04 crc kubenswrapper[4867]: E0126 11:20:04.410950 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 11:20:04.910941937 +0000 UTC m=+154.609516847 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-skdxp" (UID: "b3348ed5-3007-4ff3-b77d-ecb758f238df") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 11:20:04 crc kubenswrapper[4867]: I0126 11:20:04.515250 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 11:20:04 crc kubenswrapper[4867]: E0126 11:20:04.516262 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 11:20:05.016219661 +0000 UTC m=+154.714794571 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 11:20:04 crc kubenswrapper[4867]: I0126 11:20:04.619382 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-skdxp\" (UID: \"b3348ed5-3007-4ff3-b77d-ecb758f238df\") " pod="openshift-image-registry/image-registry-697d97f7c8-skdxp" Jan 26 11:20:04 crc kubenswrapper[4867]: E0126 11:20:04.625416 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 11:20:05.12539719 +0000 UTC m=+154.823972100 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-skdxp" (UID: "b3348ed5-3007-4ff3-b77d-ecb758f238df") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 11:20:04 crc kubenswrapper[4867]: I0126 11:20:04.725122 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 11:20:04 crc kubenswrapper[4867]: E0126 11:20:04.725443 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 11:20:05.225400813 +0000 UTC m=+154.923975713 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 11:20:04 crc kubenswrapper[4867]: I0126 11:20:04.725783 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-skdxp\" (UID: \"b3348ed5-3007-4ff3-b77d-ecb758f238df\") " pod="openshift-image-registry/image-registry-697d97f7c8-skdxp" Jan 26 11:20:04 crc kubenswrapper[4867]: E0126 11:20:04.726109 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 11:20:05.226092322 +0000 UTC m=+154.924667232 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-skdxp" (UID: "b3348ed5-3007-4ff3-b77d-ecb758f238df") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 11:20:04 crc kubenswrapper[4867]: I0126 11:20:04.827050 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 11:20:04 crc kubenswrapper[4867]: E0126 11:20:04.827537 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 11:20:05.327520273 +0000 UTC m=+155.026095183 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 11:20:04 crc kubenswrapper[4867]: I0126 11:20:04.929304 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-skdxp\" (UID: \"b3348ed5-3007-4ff3-b77d-ecb758f238df\") " pod="openshift-image-registry/image-registry-697d97f7c8-skdxp" Jan 26 11:20:04 crc kubenswrapper[4867]: E0126 11:20:04.929760 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 11:20:05.429743796 +0000 UTC m=+155.128318706 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-skdxp" (UID: "b3348ed5-3007-4ff3-b77d-ecb758f238df") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 11:20:05 crc kubenswrapper[4867]: I0126 11:20:05.031915 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-s6j6d" event={"ID":"ca11705a-ad86-4b81-87b6-fba88013e723","Type":"ContainerStarted","Data":"986064c2486bdf5483bebfe9f2dce347c0b25ef0580da14daca6a537e0802f42"} Jan 26 11:20:05 crc kubenswrapper[4867]: I0126 11:20:05.032999 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 11:20:05 crc kubenswrapper[4867]: E0126 11:20:05.033469 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 11:20:05.533451108 +0000 UTC m=+155.232026018 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 11:20:05 crc kubenswrapper[4867]: I0126 11:20:05.066253 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-8rqgh" event={"ID":"4f71f9b3-6264-4e4b-876d-bf61a930a9e5","Type":"ContainerStarted","Data":"735fb0a431987253903e4b04b928d7b01f03a770fc8462a17fa263dd4137875d"} Jan 26 11:20:05 crc kubenswrapper[4867]: I0126 11:20:05.076266 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-p7cdz" event={"ID":"4f064632-2f38-4059-b361-aa528f19ddeb","Type":"ContainerStarted","Data":"626ae92d0713cd6944a86a7a5663fd2d5a3f59d4dd58216115c0fb0c52c90a70"} Jan 26 11:20:05 crc kubenswrapper[4867]: I0126 11:20:05.098779 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-rztlw" event={"ID":"0b3d9b9e-a34f-417b-9b20-8b3565e7da51","Type":"ContainerStarted","Data":"a9bd0853179839d75eb7fff12a4b23d73f1d9771f225e261e3228eec209d03ae"} Jan 26 11:20:05 crc kubenswrapper[4867]: I0126 11:20:05.099382 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-rztlw" Jan 26 11:20:05 crc kubenswrapper[4867]: I0126 11:20:05.134238 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-skdxp\" 
(UID: \"b3348ed5-3007-4ff3-b77d-ecb758f238df\") " pod="openshift-image-registry/image-registry-697d97f7c8-skdxp" Jan 26 11:20:05 crc kubenswrapper[4867]: E0126 11:20:05.134596 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 11:20:05.634583731 +0000 UTC m=+155.333158641 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-skdxp" (UID: "b3348ed5-3007-4ff3-b77d-ecb758f238df") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 11:20:05 crc kubenswrapper[4867]: I0126 11:20:05.139797 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-rztlw" podStartSLOduration=133.13974916 podStartE2EDuration="2m13.13974916s" podCreationTimestamp="2026-01-26 11:17:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 11:20:05.133511573 +0000 UTC m=+154.832086483" watchObservedRunningTime="2026-01-26 11:20:05.13974916 +0000 UTC m=+154.838324070" Jan 26 11:20:05 crc kubenswrapper[4867]: I0126 11:20:05.152803 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-bs62h" event={"ID":"53c0e980-1e5a-44d8-a5fd-d29fd63cfce7","Type":"ContainerStarted","Data":"e664a0f9e90bd07755a08843d0b940749765f5853f4761454fa6356d4a25f62b"} Jan 26 11:20:05 crc kubenswrapper[4867]: I0126 11:20:05.152879 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-bs62h" event={"ID":"53c0e980-1e5a-44d8-a5fd-d29fd63cfce7","Type":"ContainerStarted","Data":"fb0dca5a96bcdfb371c7e200c5e97a92371fdd50e9bd4d15bd2f20d03eba9c06"} Jan 26 11:20:05 crc kubenswrapper[4867]: I0126 11:20:05.152936 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-bs62h" Jan 26 11:20:05 crc kubenswrapper[4867]: I0126 11:20:05.161405 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-hrqxh" event={"ID":"6ab404f5-5b14-49d4-80f4-2a84895d0a2f","Type":"ContainerStarted","Data":"4deb98f17f433fd2b4b2ffb352d38a21e7a46d7680d0cdcd4e67da663af753b1"} Jan 26 11:20:05 crc kubenswrapper[4867]: I0126 11:20:05.162993 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-hrqxh" Jan 26 11:20:05 crc kubenswrapper[4867]: I0126 11:20:05.166464 4867 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-hrqxh container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.25:8080/healthz\": dial tcp 10.217.0.25:8080: connect: connection refused" start-of-body= Jan 26 11:20:05 crc kubenswrapper[4867]: I0126 11:20:05.166515 4867 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-hrqxh" podUID="6ab404f5-5b14-49d4-80f4-2a84895d0a2f" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.25:8080/healthz\": dial tcp 10.217.0.25:8080: connect: connection refused" Jan 26 11:20:05 crc kubenswrapper[4867]: I0126 11:20:05.169408 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-jvs97" 
event={"ID":"6203c5b2-2d8f-46c5-a31c-59190d111d7d","Type":"ContainerStarted","Data":"bbbdf65404d8848b0dea44295cda085f69d944ab29aa0442890c2227cd218b93"} Jan 26 11:20:05 crc kubenswrapper[4867]: I0126 11:20:05.188003 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-xs9ns" event={"ID":"3c13b8a9-f9d1-409e-9a46-d3cfcfd4d9b0","Type":"ContainerStarted","Data":"009bdfb824c654b951d715c8890749da19a8bfb0ebb71014dcc79640097ad949"} Jan 26 11:20:05 crc kubenswrapper[4867]: I0126 11:20:05.189139 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-xs9ns" Jan 26 11:20:05 crc kubenswrapper[4867]: I0126 11:20:05.192859 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-bs62h" podStartSLOduration=133.192845634 podStartE2EDuration="2m13.192845634s" podCreationTimestamp="2026-01-26 11:17:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 11:20:05.190970144 +0000 UTC m=+154.889545054" watchObservedRunningTime="2026-01-26 11:20:05.192845634 +0000 UTC m=+154.891420544" Jan 26 11:20:05 crc kubenswrapper[4867]: I0126 11:20:05.207571 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-jczt5" event={"ID":"f3f1f482-470d-4521-b4ab-76bdd0e795d0","Type":"ContainerStarted","Data":"fe3fbc94649a0da78b0c1221f34ef3f66635a5d66625cc5a9e813831cb3230ba"} Jan 26 11:20:05 crc kubenswrapper[4867]: I0126 11:20:05.209138 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console-operator/console-operator-58897d9998-jczt5" Jan 26 11:20:05 crc kubenswrapper[4867]: I0126 11:20:05.218154 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-marketplace/marketplace-operator-79b997595-hrqxh" podStartSLOduration=133.218134793 podStartE2EDuration="2m13.218134793s" podCreationTimestamp="2026-01-26 11:17:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 11:20:05.216060477 +0000 UTC m=+154.914635387" watchObservedRunningTime="2026-01-26 11:20:05.218134793 +0000 UTC m=+154.916709703" Jan 26 11:20:05 crc kubenswrapper[4867]: I0126 11:20:05.228346 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-nmb9m" event={"ID":"0c800e71-0744-45b6-9c5a-f0c3dd9e6adc","Type":"ContainerStarted","Data":"65b62555bc7bb45d692d631865efa8e3e3b1fb0629020d97d8bfa4d598319f46"} Jan 26 11:20:05 crc kubenswrapper[4867]: I0126 11:20:05.238280 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 11:20:05 crc kubenswrapper[4867]: E0126 11:20:05.239872 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 11:20:05.739849605 +0000 UTC m=+155.438424565 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 11:20:05 crc kubenswrapper[4867]: I0126 11:20:05.240080 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-lg6cl" event={"ID":"69a2cb6a-ca89-45ea-a985-ce216707b50e","Type":"ContainerStarted","Data":"3f1d64041b81dbe6d5b346a6ecca79c0b61718b2f9b4db458debdfbb99670c13"} Jan 26 11:20:05 crc kubenswrapper[4867]: I0126 11:20:05.240138 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-lg6cl" event={"ID":"69a2cb6a-ca89-45ea-a985-ce216707b50e","Type":"ContainerStarted","Data":"bf06c0a55fcc34f70efb33b50bb653aca94e0a1db747602fdccb5cf2ce149727"} Jan 26 11:20:05 crc kubenswrapper[4867]: I0126 11:20:05.256642 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-xs9ns" podStartSLOduration=133.256616195 podStartE2EDuration="2m13.256616195s" podCreationTimestamp="2026-01-26 11:17:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 11:20:05.256256735 +0000 UTC m=+154.954831645" watchObservedRunningTime="2026-01-26 11:20:05.256616195 +0000 UTC m=+154.955191105" Jan 26 11:20:05 crc kubenswrapper[4867]: I0126 11:20:05.268838 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-ksw9r" 
event={"ID":"f5bb657a-0790-4c81-b7bd-861e297bbaeb","Type":"ContainerStarted","Data":"eb3a7327de7a2e09aac5e06d5dab3d99d7e44502630524c073039f8b244e7391"} Jan 26 11:20:05 crc kubenswrapper[4867]: I0126 11:20:05.286611 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console-operator/console-operator-58897d9998-jczt5" podStartSLOduration=134.286586519 podStartE2EDuration="2m14.286586519s" podCreationTimestamp="2026-01-26 11:17:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 11:20:05.283974439 +0000 UTC m=+154.982549359" watchObservedRunningTime="2026-01-26 11:20:05.286586519 +0000 UTC m=+154.985161429" Jan 26 11:20:05 crc kubenswrapper[4867]: I0126 11:20:05.288818 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-4hjn2" event={"ID":"12eb0c01-c4f3-489f-87dd-bbc03f111814","Type":"ContainerStarted","Data":"0f4bc6bd0ce1b1621acf97fa912595135dc82a09f0d58605c158b3b9def80185"} Jan 26 11:20:05 crc kubenswrapper[4867]: I0126 11:20:05.304549 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-f7zk4" event={"ID":"4c7998be-4b54-46dc-9791-045c502be976","Type":"ContainerStarted","Data":"1d4db703539d7cadf3fe901b6832f75267651e78e50a12e4391275fc18aa7710"} Jan 26 11:20:05 crc kubenswrapper[4867]: I0126 11:20:05.316883 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-lrrdf" event={"ID":"53dbe0f8-7ef4-4a92-b25a-3d052c747202","Type":"ContainerStarted","Data":"97d18219653225bc13aaeb33a6981dd06853e695386e9fdcde1f6c5bfe7b8563"} Jan 26 11:20:05 crc kubenswrapper[4867]: I0126 11:20:05.316930 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-lrrdf" 
event={"ID":"53dbe0f8-7ef4-4a92-b25a-3d052c747202","Type":"ContainerStarted","Data":"2b9493a892bd6f7d05035ce4fc41a3d34b83cb65f01dad319dff9bdc4397a541"} Jan 26 11:20:05 crc kubenswrapper[4867]: I0126 11:20:05.337978 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-xs9ns" Jan 26 11:20:05 crc kubenswrapper[4867]: I0126 11:20:05.342883 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-skdxp\" (UID: \"b3348ed5-3007-4ff3-b77d-ecb758f238df\") " pod="openshift-image-registry/image-registry-697d97f7c8-skdxp" Jan 26 11:20:05 crc kubenswrapper[4867]: E0126 11:20:05.346761 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 11:20:05.846739453 +0000 UTC m=+155.545314363 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-skdxp" (UID: "b3348ed5-3007-4ff3-b77d-ecb758f238df") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 11:20:05 crc kubenswrapper[4867]: I0126 11:20:05.358942 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-lg6cl" podStartSLOduration=133.358915379 podStartE2EDuration="2m13.358915379s" podCreationTimestamp="2026-01-26 11:17:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 11:20:05.356006791 +0000 UTC m=+155.054581701" watchObservedRunningTime="2026-01-26 11:20:05.358915379 +0000 UTC m=+155.057490289" Jan 26 11:20:05 crc kubenswrapper[4867]: I0126 11:20:05.360938 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-nxdt6" event={"ID":"a4874120-574e-4f70-a7d9-5c6c91e41f41","Type":"ContainerStarted","Data":"46b210231d4d3121c183038a2b5de6bcc951b9cd4744f39d409a5a8ef2509eb8"} Jan 26 11:20:05 crc kubenswrapper[4867]: I0126 11:20:05.376968 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-b2qk2" event={"ID":"8b27feea-5afc-4d14-969e-cbbe2047025e","Type":"ContainerStarted","Data":"a97b4559d000e875389f440c1f817065eadedc2185d481a170c0798907567112"} Jan 26 11:20:05 crc kubenswrapper[4867]: I0126 11:20:05.378062 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-b2qk2" Jan 26 11:20:05 crc kubenswrapper[4867]: I0126 
11:20:05.386793 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29490435-gd8xn" event={"ID":"64c6d7e3-5fb6-4242-b616-2628ca519c8e","Type":"ContainerStarted","Data":"172d104e20afa961a17964f863eab3270fbfc0738a332d7f23c11d134a0ffdbe"} Jan 26 11:20:05 crc kubenswrapper[4867]: I0126 11:20:05.395477 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-l4cqk" event={"ID":"afe23588-98b0-47bf-9092-849d7e2e5f98","Type":"ContainerStarted","Data":"5b0c92c975f3dc994d53f7beeea62316b551d6a10bebda2ab8ebdd544071f1e7"} Jan 26 11:20:05 crc kubenswrapper[4867]: I0126 11:20:05.395821 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-l4cqk" event={"ID":"afe23588-98b0-47bf-9092-849d7e2e5f98","Type":"ContainerStarted","Data":"1babdce4e990a4d5507cec94c94af4d2f44e181e042a7b714147c967ef7a8bbc"} Jan 26 11:20:05 crc kubenswrapper[4867]: I0126 11:20:05.397578 4867 patch_prober.go:28] interesting pod/router-default-5444994796-dmt7q container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 26 11:20:05 crc kubenswrapper[4867]: [-]has-synced failed: reason withheld Jan 26 11:20:05 crc kubenswrapper[4867]: [+]process-running ok Jan 26 11:20:05 crc kubenswrapper[4867]: healthz check failed Jan 26 11:20:05 crc kubenswrapper[4867]: I0126 11:20:05.397812 4867 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-dmt7q" podUID="ea07ea8f-1510-4609-949b-83a3aed3ddee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 26 11:20:05 crc kubenswrapper[4867]: I0126 11:20:05.398374 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-b2qk2" Jan 26 11:20:05 crc kubenswrapper[4867]: I0126 11:20:05.446783 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 11:20:05 crc kubenswrapper[4867]: E0126 11:20:05.447128 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 11:20:05.947072914 +0000 UTC m=+155.645647824 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 11:20:05 crc kubenswrapper[4867]: I0126 11:20:05.447675 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-skdxp\" (UID: \"b3348ed5-3007-4ff3-b77d-ecb758f238df\") " pod="openshift-image-registry/image-registry-697d97f7c8-skdxp" Jan 26 11:20:05 crc kubenswrapper[4867]: E0126 11:20:05.449811 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: 
nodeName:}" failed. No retries permitted until 2026-01-26 11:20:05.949795447 +0000 UTC m=+155.648370537 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-skdxp" (UID: "b3348ed5-3007-4ff3-b77d-ecb758f238df") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 11:20:05 crc kubenswrapper[4867]: I0126 11:20:05.449805 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-ltvwb" event={"ID":"6a27bc25-3df1-4dd2-a51d-de8e2bb5070e","Type":"ContainerStarted","Data":"ed1a5d61ba7be1cd04e271c1c9c4e921b09a717128d1122ae7ba7c3c3f4efd1a"} Jan 26 11:20:05 crc kubenswrapper[4867]: I0126 11:20:05.450371 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-7954f5f757-ltvwb" Jan 26 11:20:05 crc kubenswrapper[4867]: I0126 11:20:05.467433 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-canary/ingress-canary-nmb9m" podStartSLOduration=9.46740119 podStartE2EDuration="9.46740119s" podCreationTimestamp="2026-01-26 11:19:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 11:20:05.408638984 +0000 UTC m=+155.107213894" watchObservedRunningTime="2026-01-26 11:20:05.46740119 +0000 UTC m=+155.165976100" Jan 26 11:20:05 crc kubenswrapper[4867]: I0126 11:20:05.479460 4867 patch_prober.go:28] interesting pod/downloads-7954f5f757-ltvwb container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused" start-of-body= Jan 
26 11:20:05 crc kubenswrapper[4867]: I0126 11:20:05.479546 4867 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-ltvwb" podUID="6a27bc25-3df1-4dd2-a51d-de8e2bb5070e" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused" Jan 26 11:20:05 crc kubenswrapper[4867]: I0126 11:20:05.526473 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-g6hqf" event={"ID":"81b0efbf-3d4c-4f0f-b2bd-84b5af701c2e","Type":"ContainerStarted","Data":"466d245b7774ae5c0b7b58594b2097b27d0ef7a236041c4b20bb9654ac0f8ffe"} Jan 26 11:20:05 crc kubenswrapper[4867]: I0126 11:20:05.543975 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-bmqm4" event={"ID":"50f0dbec-6ed9-47e1-8b7a-4e4a2e1475b4","Type":"ContainerStarted","Data":"80119436080d2c83cd58b806e9ab045a46c6b5512bae43e4bea788a6b4709df1"} Jan 26 11:20:05 crc kubenswrapper[4867]: I0126 11:20:05.548814 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 11:20:05 crc kubenswrapper[4867]: E0126 11:20:05.549620 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 11:20:06.049604275 +0000 UTC m=+155.748179185 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 11:20:05 crc kubenswrapper[4867]: I0126 11:20:05.565337 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns-operator/dns-operator-744455d44c-4hjn2" podStartSLOduration=134.565311527 podStartE2EDuration="2m14.565311527s" podCreationTimestamp="2026-01-26 11:17:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 11:20:05.542817874 +0000 UTC m=+155.241392784" watchObservedRunningTime="2026-01-26 11:20:05.565311527 +0000 UTC m=+155.263886437" Jan 26 11:20:05 crc kubenswrapper[4867]: I0126 11:20:05.566347 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-nxdt6" podStartSLOduration=133.566341844 podStartE2EDuration="2m13.566341844s" podCreationTimestamp="2026-01-26 11:17:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 11:20:05.469507576 +0000 UTC m=+155.168082496" watchObservedRunningTime="2026-01-26 11:20:05.566341844 +0000 UTC m=+155.264916754" Jan 26 11:20:05 crc kubenswrapper[4867]: I0126 11:20:05.590637 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-pgkmf" event={"ID":"2131da56-d7d3-4b0d-b134-86c8dbcad2a6","Type":"ContainerStarted","Data":"af2794b7991dec306d6a11ace7e4a63a5765d79f39dade89ab9e51ad5dffe5f7"} Jan 26 11:20:05 crc 
kubenswrapper[4867]: I0126 11:20:05.619644 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-lrrdf" podStartSLOduration=133.619620614 podStartE2EDuration="2m13.619620614s" podCreationTimestamp="2026-01-26 11:17:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 11:20:05.611349922 +0000 UTC m=+155.309924832" watchObservedRunningTime="2026-01-26 11:20:05.619620614 +0000 UTC m=+155.318195524" Jan 26 11:20:05 crc kubenswrapper[4867]: I0126 11:20:05.636813 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-bg5n8" event={"ID":"485f8610-85d7-44ef-9ed2-719f3d409a58","Type":"ContainerStarted","Data":"2f04b73f9d1e42a9e175604f42ba71f0b1430f4dc10f98dd20438a4cf000ed5d"} Jan 26 11:20:05 crc kubenswrapper[4867]: I0126 11:20:05.645353 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-6bj4j" event={"ID":"b0ef109c-57cb-46c0-958b-fe33b8cdae0b","Type":"ContainerStarted","Data":"f38f4e2c0f5ced4c7f4ad828c032eb41ba6944271abcb1a6597db1f70b0c86ad"} Jan 26 11:20:05 crc kubenswrapper[4867]: I0126 11:20:05.656910 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-skdxp\" (UID: \"b3348ed5-3007-4ff3-b77d-ecb758f238df\") " pod="openshift-image-registry/image-registry-697d97f7c8-skdxp" Jan 26 11:20:05 crc kubenswrapper[4867]: E0126 11:20:05.666306 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: 
nodeName:}" failed. No retries permitted until 2026-01-26 11:20:06.166285406 +0000 UTC m=+155.864860316 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-skdxp" (UID: "b3348ed5-3007-4ff3-b77d-ecb758f238df") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 11:20:05 crc kubenswrapper[4867]: I0126 11:20:05.668822 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-879f6c89f-9jc4l" Jan 26 11:20:05 crc kubenswrapper[4867]: I0126 11:20:05.690119 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca/service-ca-9c57cc56f-ksw9r" podStartSLOduration=133.690082864 podStartE2EDuration="2m13.690082864s" podCreationTimestamp="2026-01-26 11:17:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 11:20:05.666173493 +0000 UTC m=+155.364748413" watchObservedRunningTime="2026-01-26 11:20:05.690082864 +0000 UTC m=+155.388657774" Jan 26 11:20:05 crc kubenswrapper[4867]: I0126 11:20:05.699979 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-7777fb866f-9r5x7" Jan 26 11:20:05 crc kubenswrapper[4867]: I0126 11:20:05.766846 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 
11:20:05 crc kubenswrapper[4867]: E0126 11:20:05.776172 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 11:20:06.276143193 +0000 UTC m=+155.974718103 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 11:20:05 crc kubenswrapper[4867]: I0126 11:20:05.826714 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-l4cqk" podStartSLOduration=133.826681339 podStartE2EDuration="2m13.826681339s" podCreationTimestamp="2026-01-26 11:17:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 11:20:05.824718946 +0000 UTC m=+155.523293856" watchObservedRunningTime="2026-01-26 11:20:05.826681339 +0000 UTC m=+155.525256249" Jan 26 11:20:05 crc kubenswrapper[4867]: I0126 11:20:05.827932 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-b2qk2" podStartSLOduration=133.827924312 podStartE2EDuration="2m13.827924312s" podCreationTimestamp="2026-01-26 11:17:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 11:20:05.7611228 +0000 UTC m=+155.459697720" 
watchObservedRunningTime="2026-01-26 11:20:05.827924312 +0000 UTC m=+155.526499222" Jan 26 11:20:05 crc kubenswrapper[4867]: I0126 11:20:05.880518 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-skdxp\" (UID: \"b3348ed5-3007-4ff3-b77d-ecb758f238df\") " pod="openshift-image-registry/image-registry-697d97f7c8-skdxp" Jan 26 11:20:05 crc kubenswrapper[4867]: E0126 11:20:05.880947 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 11:20:06.380930854 +0000 UTC m=+156.079505764 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-skdxp" (UID: "b3348ed5-3007-4ff3-b77d-ecb758f238df") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 11:20:05 crc kubenswrapper[4867]: I0126 11:20:05.885340 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29490435-gd8xn" podStartSLOduration=134.885309432 podStartE2EDuration="2m14.885309432s" podCreationTimestamp="2026-01-26 11:17:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 11:20:05.884616084 +0000 UTC m=+155.583190994" watchObservedRunningTime="2026-01-26 11:20:05.885309432 +0000 UTC m=+155.583884342" Jan 26 11:20:05 crc kubenswrapper[4867]: 
I0126 11:20:05.982509 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 11:20:05 crc kubenswrapper[4867]: E0126 11:20:05.982943 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 11:20:06.482924241 +0000 UTC m=+156.181499151 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 11:20:05 crc kubenswrapper[4867]: I0126 11:20:05.984692 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca-operator/service-ca-operator-777779d784-f7zk4" podStartSLOduration=133.984678687 podStartE2EDuration="2m13.984678687s" podCreationTimestamp="2026-01-26 11:17:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 11:20:05.983506476 +0000 UTC m=+155.682081386" watchObservedRunningTime="2026-01-26 11:20:05.984678687 +0000 UTC m=+155.683253597" Jan 26 11:20:06 crc kubenswrapper[4867]: I0126 11:20:06.038954 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-bg5n8" podStartSLOduration=134.038908063 podStartE2EDuration="2m14.038908063s" podCreationTimestamp="2026-01-26 11:17:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 11:20:06.03770069 +0000 UTC m=+155.736275610" watchObservedRunningTime="2026-01-26 11:20:06.038908063 +0000 UTC m=+155.737482983" Jan 26 11:20:06 crc kubenswrapper[4867]: I0126 11:20:06.087318 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-skdxp\" (UID: \"b3348ed5-3007-4ff3-b77d-ecb758f238df\") " pod="openshift-image-registry/image-registry-697d97f7c8-skdxp" Jan 26 11:20:06 crc kubenswrapper[4867]: E0126 11:20:06.087900 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 11:20:06.587849695 +0000 UTC m=+156.286424615 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-skdxp" (UID: "b3348ed5-3007-4ff3-b77d-ecb758f238df") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 11:20:06 crc kubenswrapper[4867]: I0126 11:20:06.090333 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication-operator/authentication-operator-69f744f599-bmqm4" podStartSLOduration=135.090302421 podStartE2EDuration="2m15.090302421s" podCreationTimestamp="2026-01-26 11:17:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 11:20:06.088366709 +0000 UTC m=+155.786941649" watchObservedRunningTime="2026-01-26 11:20:06.090302421 +0000 UTC m=+155.788877331" Jan 26 11:20:06 crc kubenswrapper[4867]: I0126 11:20:06.099614 4867 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-rztlw container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.36:5443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 26 11:20:06 crc kubenswrapper[4867]: I0126 11:20:06.099679 4867 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-rztlw" podUID="0b3d9b9e-a34f-417b-9b20-8b3565e7da51" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.36:5443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 26 11:20:06 crc kubenswrapper[4867]: I0126 11:20:06.168466 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-console/downloads-7954f5f757-ltvwb" podStartSLOduration=135.168447778 podStartE2EDuration="2m15.168447778s" podCreationTimestamp="2026-01-26 11:17:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 11:20:06.165688204 +0000 UTC m=+155.864263114" watchObservedRunningTime="2026-01-26 11:20:06.168447778 +0000 UTC m=+155.867022688" Jan 26 11:20:06 crc kubenswrapper[4867]: I0126 11:20:06.188324 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 11:20:06 crc kubenswrapper[4867]: E0126 11:20:06.189349 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 11:20:06.689244375 +0000 UTC m=+156.387819295 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 11:20:06 crc kubenswrapper[4867]: I0126 11:20:06.210365 4867 patch_prober.go:28] interesting pod/console-operator-58897d9998-jczt5 container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.18:8443/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 26 11:20:06 crc kubenswrapper[4867]: I0126 11:20:06.210466 4867 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-jczt5" podUID="f3f1f482-470d-4521-b4ab-76bdd0e795d0" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.18:8443/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 26 11:20:06 crc kubenswrapper[4867]: I0126 11:20:06.220603 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-6bj4j" podStartSLOduration=134.220587367 podStartE2EDuration="2m14.220587367s" podCreationTimestamp="2026-01-26 11:17:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 11:20:06.218759358 +0000 UTC m=+155.917334268" watchObservedRunningTime="2026-01-26 11:20:06.220587367 +0000 UTC m=+155.919162277" Jan 26 11:20:06 crc kubenswrapper[4867]: I0126 11:20:06.290852 4867 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-skdxp\" (UID: \"b3348ed5-3007-4ff3-b77d-ecb758f238df\") " pod="openshift-image-registry/image-registry-697d97f7c8-skdxp" Jan 26 11:20:06 crc kubenswrapper[4867]: E0126 11:20:06.291328 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 11:20:06.791313154 +0000 UTC m=+156.489888064 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-skdxp" (UID: "b3348ed5-3007-4ff3-b77d-ecb758f238df") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 11:20:06 crc kubenswrapper[4867]: I0126 11:20:06.294680 4867 patch_prober.go:28] interesting pod/machine-config-daemon-g6cth container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 11:20:06 crc kubenswrapper[4867]: I0126 11:20:06.294767 4867 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-g6cth" podUID="115cad9f-057f-4e63-b408-8fa7a358a191" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 11:20:06 crc kubenswrapper[4867]: I0126 11:20:06.357632 4867 pod_startup_latency_tracker.go:104] 
"Observed pod startup duration" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-pgkmf" podStartSLOduration=135.357599402 podStartE2EDuration="2m15.357599402s" podCreationTimestamp="2026-01-26 11:17:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 11:20:06.357221262 +0000 UTC m=+156.055796172" watchObservedRunningTime="2026-01-26 11:20:06.357599402 +0000 UTC m=+156.056174312" Jan 26 11:20:06 crc kubenswrapper[4867]: I0126 11:20:06.392949 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 11:20:06 crc kubenswrapper[4867]: E0126 11:20:06.393365 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 11:20:06.893345831 +0000 UTC m=+156.591920741 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 11:20:06 crc kubenswrapper[4867]: I0126 11:20:06.407107 4867 patch_prober.go:28] interesting pod/router-default-5444994796-dmt7q container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 26 11:20:06 crc kubenswrapper[4867]: [-]has-synced failed: reason withheld Jan 26 11:20:06 crc kubenswrapper[4867]: [+]process-running ok Jan 26 11:20:06 crc kubenswrapper[4867]: healthz check failed Jan 26 11:20:06 crc kubenswrapper[4867]: I0126 11:20:06.407201 4867 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-dmt7q" podUID="ea07ea8f-1510-4609-949b-83a3aed3ddee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 26 11:20:06 crc kubenswrapper[4867]: I0126 11:20:06.499313 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-skdxp\" (UID: \"b3348ed5-3007-4ff3-b77d-ecb758f238df\") " pod="openshift-image-registry/image-registry-697d97f7c8-skdxp" Jan 26 11:20:06 crc kubenswrapper[4867]: E0126 11:20:06.499823 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2026-01-26 11:20:06.999803937 +0000 UTC m=+156.698378847 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-skdxp" (UID: "b3348ed5-3007-4ff3-b77d-ecb758f238df") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 11:20:06 crc kubenswrapper[4867]: I0126 11:20:06.524903 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-g6hqf" podStartSLOduration=135.52487561 podStartE2EDuration="2m15.52487561s" podCreationTimestamp="2026-01-26 11:17:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 11:20:06.523619397 +0000 UTC m=+156.222194307" watchObservedRunningTime="2026-01-26 11:20:06.52487561 +0000 UTC m=+156.223450520" Jan 26 11:20:06 crc kubenswrapper[4867]: I0126 11:20:06.581787 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-m22zg" podStartSLOduration=134.581764847 podStartE2EDuration="2m14.581764847s" podCreationTimestamp="2026-01-26 11:17:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 11:20:06.580718888 +0000 UTC m=+156.279293798" watchObservedRunningTime="2026-01-26 11:20:06.581764847 +0000 UTC m=+156.280339787" Jan 26 11:20:06 crc kubenswrapper[4867]: I0126 11:20:06.600058 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 11:20:06 crc kubenswrapper[4867]: E0126 11:20:06.600264 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 11:20:07.100204631 +0000 UTC m=+156.798779541 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 11:20:06 crc kubenswrapper[4867]: I0126 11:20:06.600577 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-skdxp\" (UID: \"b3348ed5-3007-4ff3-b77d-ecb758f238df\") " pod="openshift-image-registry/image-registry-697d97f7c8-skdxp" Jan 26 11:20:06 crc kubenswrapper[4867]: E0126 11:20:06.601135 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 11:20:07.101117625 +0000 UTC m=+156.799692535 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-skdxp" (UID: "b3348ed5-3007-4ff3-b77d-ecb758f238df") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 11:20:06 crc kubenswrapper[4867]: I0126 11:20:06.701815 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 11:20:06 crc kubenswrapper[4867]: E0126 11:20:06.702448 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 11:20:07.202417414 +0000 UTC m=+156.900992344 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 11:20:06 crc kubenswrapper[4867]: I0126 11:20:06.722632 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-zvpfm" event={"ID":"f7d11034-ad81-48b6-bf3b-8597910b1adf","Type":"ContainerStarted","Data":"6725448e19304f363074e12c14dfdceeaa66569dc3d4ea4cff2ef242d2048685"} Jan 26 11:20:06 crc kubenswrapper[4867]: I0126 11:20:06.741737 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-s6j6d" event={"ID":"ca11705a-ad86-4b81-87b6-fba88013e723","Type":"ContainerStarted","Data":"86c6454080e0cd96667b3c7bbe02889427b1006820cd8e400fd6507842b870b5"} Jan 26 11:20:06 crc kubenswrapper[4867]: I0126 11:20:06.767336 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-jvs97" event={"ID":"6203c5b2-2d8f-46c5-a31c-59190d111d7d","Type":"ContainerStarted","Data":"f38b310a1fdd2b51eab38fd9530f810857648f2975dd6c83882c8054ac7dd772"} Jan 26 11:20:06 crc kubenswrapper[4867]: I0126 11:20:06.785116 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-p7cdz" event={"ID":"4f064632-2f38-4059-b361-aa528f19ddeb","Type":"ContainerStarted","Data":"09e265fd23d43890320cf00a3d195ef5b1403dc84b258005966e7fbbf8744202"} Jan 26 11:20:06 crc kubenswrapper[4867]: I0126 11:20:06.793963 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-8rqgh" 
event={"ID":"4f71f9b3-6264-4e4b-876d-bf61a930a9e5","Type":"ContainerStarted","Data":"703580b5dcba2f15f927d0ed77debc585e58bbd9607d2bbaf1a86b8657e7ee23"} Jan 26 11:20:06 crc kubenswrapper[4867]: I0126 11:20:06.794025 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-dns/dns-default-8rqgh" Jan 26 11:20:06 crc kubenswrapper[4867]: I0126 11:20:06.795499 4867 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-hrqxh container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.25:8080/healthz\": dial tcp 10.217.0.25:8080: connect: connection refused" start-of-body= Jan 26 11:20:06 crc kubenswrapper[4867]: I0126 11:20:06.795606 4867 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-hrqxh" podUID="6ab404f5-5b14-49d4-80f4-2a84895d0a2f" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.25:8080/healthz\": dial tcp 10.217.0.25:8080: connect: connection refused" Jan 26 11:20:06 crc kubenswrapper[4867]: I0126 11:20:06.801790 4867 patch_prober.go:28] interesting pod/downloads-7954f5f757-ltvwb container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused" start-of-body= Jan 26 11:20:06 crc kubenswrapper[4867]: I0126 11:20:06.801851 4867 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-ltvwb" podUID="6a27bc25-3df1-4dd2-a51d-de8e2bb5070e" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused" Jan 26 11:20:06 crc kubenswrapper[4867]: I0126 11:20:06.805397 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-skdxp\" (UID: \"b3348ed5-3007-4ff3-b77d-ecb758f238df\") " pod="openshift-image-registry/image-registry-697d97f7c8-skdxp" Jan 26 11:20:06 crc kubenswrapper[4867]: E0126 11:20:06.828155 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 11:20:07.328131166 +0000 UTC m=+157.026706066 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-skdxp" (UID: "b3348ed5-3007-4ff3-b77d-ecb758f238df") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 11:20:06 crc kubenswrapper[4867]: I0126 11:20:06.833420 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console-operator/console-operator-58897d9998-jczt5" Jan 26 11:20:06 crc kubenswrapper[4867]: I0126 11:20:06.838588 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-rztlw" Jan 26 11:20:06 crc kubenswrapper[4867]: I0126 11:20:06.894983 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-admission-controller-857f4d67dd-s6j6d" podStartSLOduration=134.894945988 podStartE2EDuration="2m14.894945988s" podCreationTimestamp="2026-01-26 11:17:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 11:20:06.80143037 +0000 UTC m=+156.500005280" 
watchObservedRunningTime="2026-01-26 11:20:06.894945988 +0000 UTC m=+156.593520898" Jan 26 11:20:06 crc kubenswrapper[4867]: I0126 11:20:06.899513 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver/apiserver-76f77b778f-jvs97" podStartSLOduration=135.89945946 podStartE2EDuration="2m15.89945946s" podCreationTimestamp="2026-01-26 11:17:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 11:20:06.888432894 +0000 UTC m=+156.587007804" watchObservedRunningTime="2026-01-26 11:20:06.89945946 +0000 UTC m=+156.598034380" Jan 26 11:20:06 crc kubenswrapper[4867]: I0126 11:20:06.917017 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 11:20:06 crc kubenswrapper[4867]: E0126 11:20:06.921665 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 11:20:07.421633315 +0000 UTC m=+157.120208225 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 11:20:07 crc kubenswrapper[4867]: I0126 11:20:07.013665 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/dns-default-8rqgh" podStartSLOduration=11.013636273 podStartE2EDuration="11.013636273s" podCreationTimestamp="2026-01-26 11:19:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 11:20:06.964764471 +0000 UTC m=+156.663339391" watchObservedRunningTime="2026-01-26 11:20:07.013636273 +0000 UTC m=+156.712211183" Jan 26 11:20:07 crc kubenswrapper[4867]: I0126 11:20:07.026177 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-skdxp\" (UID: \"b3348ed5-3007-4ff3-b77d-ecb758f238df\") " pod="openshift-image-registry/image-registry-697d97f7c8-skdxp" Jan 26 11:20:07 crc kubenswrapper[4867]: E0126 11:20:07.026894 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 11:20:07.526880088 +0000 UTC m=+157.225454998 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-skdxp" (UID: "b3348ed5-3007-4ff3-b77d-ecb758f238df") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 11:20:07 crc kubenswrapper[4867]: I0126 11:20:07.081069 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-p7cdz" podStartSLOduration=136.081045592 podStartE2EDuration="2m16.081045592s" podCreationTimestamp="2026-01-26 11:17:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 11:20:07.016300035 +0000 UTC m=+156.714874945" watchObservedRunningTime="2026-01-26 11:20:07.081045592 +0000 UTC m=+156.779620502" Jan 26 11:20:07 crc kubenswrapper[4867]: I0126 11:20:07.128912 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 11:20:07 crc kubenswrapper[4867]: E0126 11:20:07.129699 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 11:20:07.629677756 +0000 UTC m=+157.328252666 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 11:20:07 crc kubenswrapper[4867]: I0126 11:20:07.217658 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-ndd6w"] Jan 26 11:20:07 crc kubenswrapper[4867]: I0126 11:20:07.219055 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-ndd6w" Jan 26 11:20:07 crc kubenswrapper[4867]: I0126 11:20:07.220896 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Jan 26 11:20:07 crc kubenswrapper[4867]: I0126 11:20:07.231507 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-skdxp\" (UID: \"b3348ed5-3007-4ff3-b77d-ecb758f238df\") " pod="openshift-image-registry/image-registry-697d97f7c8-skdxp" Jan 26 11:20:07 crc kubenswrapper[4867]: E0126 11:20:07.231958 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 11:20:07.73194308 +0000 UTC m=+157.430517990 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-skdxp" (UID: "b3348ed5-3007-4ff3-b77d-ecb758f238df") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 11:20:07 crc kubenswrapper[4867]: I0126 11:20:07.245430 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-ndd6w"] Jan 26 11:20:07 crc kubenswrapper[4867]: I0126 11:20:07.332658 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 11:20:07 crc kubenswrapper[4867]: E0126 11:20:07.332954 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 11:20:07.832907929 +0000 UTC m=+157.531482839 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 11:20:07 crc kubenswrapper[4867]: I0126 11:20:07.333264 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/19cef52a-3ef4-4b1e-a52e-0ba6e01e49b5-catalog-content\") pod \"certified-operators-ndd6w\" (UID: \"19cef52a-3ef4-4b1e-a52e-0ba6e01e49b5\") " pod="openshift-marketplace/certified-operators-ndd6w" Jan 26 11:20:07 crc kubenswrapper[4867]: I0126 11:20:07.333690 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2ktvd\" (UniqueName: \"kubernetes.io/projected/19cef52a-3ef4-4b1e-a52e-0ba6e01e49b5-kube-api-access-2ktvd\") pod \"certified-operators-ndd6w\" (UID: \"19cef52a-3ef4-4b1e-a52e-0ba6e01e49b5\") " pod="openshift-marketplace/certified-operators-ndd6w" Jan 26 11:20:07 crc kubenswrapper[4867]: I0126 11:20:07.333850 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-skdxp\" (UID: \"b3348ed5-3007-4ff3-b77d-ecb758f238df\") " pod="openshift-image-registry/image-registry-697d97f7c8-skdxp" Jan 26 11:20:07 crc kubenswrapper[4867]: I0126 11:20:07.333945 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/19cef52a-3ef4-4b1e-a52e-0ba6e01e49b5-utilities\") pod \"certified-operators-ndd6w\" (UID: \"19cef52a-3ef4-4b1e-a52e-0ba6e01e49b5\") " pod="openshift-marketplace/certified-operators-ndd6w" Jan 26 11:20:07 crc kubenswrapper[4867]: E0126 11:20:07.334302 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 11:20:07.834292635 +0000 UTC m=+157.532867615 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-skdxp" (UID: "b3348ed5-3007-4ff3-b77d-ecb758f238df") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 11:20:07 crc kubenswrapper[4867]: I0126 11:20:07.394925 4867 patch_prober.go:28] interesting pod/router-default-5444994796-dmt7q container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 26 11:20:07 crc kubenswrapper[4867]: [-]has-synced failed: reason withheld Jan 26 11:20:07 crc kubenswrapper[4867]: [+]process-running ok Jan 26 11:20:07 crc kubenswrapper[4867]: healthz check failed Jan 26 11:20:07 crc kubenswrapper[4867]: I0126 11:20:07.395542 4867 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-dmt7q" podUID="ea07ea8f-1510-4609-949b-83a3aed3ddee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 26 11:20:07 crc kubenswrapper[4867]: I0126 11:20:07.405969 4867 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-marketplace/community-operators-gcljn"] Jan 26 11:20:07 crc kubenswrapper[4867]: I0126 11:20:07.407076 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-gcljn" Jan 26 11:20:07 crc kubenswrapper[4867]: I0126 11:20:07.411364 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Jan 26 11:20:07 crc kubenswrapper[4867]: I0126 11:20:07.435821 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 11:20:07 crc kubenswrapper[4867]: I0126 11:20:07.436372 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-gcljn"] Jan 26 11:20:07 crc kubenswrapper[4867]: E0126 11:20:07.436554 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 11:20:07.936527879 +0000 UTC m=+157.635102789 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 11:20:07 crc kubenswrapper[4867]: I0126 11:20:07.436591 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2ktvd\" (UniqueName: \"kubernetes.io/projected/19cef52a-3ef4-4b1e-a52e-0ba6e01e49b5-kube-api-access-2ktvd\") pod \"certified-operators-ndd6w\" (UID: \"19cef52a-3ef4-4b1e-a52e-0ba6e01e49b5\") " pod="openshift-marketplace/certified-operators-ndd6w" Jan 26 11:20:07 crc kubenswrapper[4867]: I0126 11:20:07.436722 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-skdxp\" (UID: \"b3348ed5-3007-4ff3-b77d-ecb758f238df\") " pod="openshift-image-registry/image-registry-697d97f7c8-skdxp" Jan 26 11:20:07 crc kubenswrapper[4867]: I0126 11:20:07.437037 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/19cef52a-3ef4-4b1e-a52e-0ba6e01e49b5-utilities\") pod \"certified-operators-ndd6w\" (UID: \"19cef52a-3ef4-4b1e-a52e-0ba6e01e49b5\") " pod="openshift-marketplace/certified-operators-ndd6w" Jan 26 11:20:07 crc kubenswrapper[4867]: E0126 11:20:07.437138 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2026-01-26 11:20:07.937115154 +0000 UTC m=+157.635690054 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-skdxp" (UID: "b3348ed5-3007-4ff3-b77d-ecb758f238df") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 11:20:07 crc kubenswrapper[4867]: I0126 11:20:07.437205 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/19cef52a-3ef4-4b1e-a52e-0ba6e01e49b5-catalog-content\") pod \"certified-operators-ndd6w\" (UID: \"19cef52a-3ef4-4b1e-a52e-0ba6e01e49b5\") " pod="openshift-marketplace/certified-operators-ndd6w" Jan 26 11:20:07 crc kubenswrapper[4867]: I0126 11:20:07.437725 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/19cef52a-3ef4-4b1e-a52e-0ba6e01e49b5-utilities\") pod \"certified-operators-ndd6w\" (UID: \"19cef52a-3ef4-4b1e-a52e-0ba6e01e49b5\") " pod="openshift-marketplace/certified-operators-ndd6w" Jan 26 11:20:07 crc kubenswrapper[4867]: I0126 11:20:07.437811 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/19cef52a-3ef4-4b1e-a52e-0ba6e01e49b5-catalog-content\") pod \"certified-operators-ndd6w\" (UID: \"19cef52a-3ef4-4b1e-a52e-0ba6e01e49b5\") " pod="openshift-marketplace/certified-operators-ndd6w" Jan 26 11:20:07 crc kubenswrapper[4867]: I0126 11:20:07.487492 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2ktvd\" (UniqueName: \"kubernetes.io/projected/19cef52a-3ef4-4b1e-a52e-0ba6e01e49b5-kube-api-access-2ktvd\") pod 
\"certified-operators-ndd6w\" (UID: \"19cef52a-3ef4-4b1e-a52e-0ba6e01e49b5\") " pod="openshift-marketplace/certified-operators-ndd6w" Jan 26 11:20:07 crc kubenswrapper[4867]: I0126 11:20:07.537711 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-ndd6w" Jan 26 11:20:07 crc kubenswrapper[4867]: I0126 11:20:07.538200 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 11:20:07 crc kubenswrapper[4867]: E0126 11:20:07.538349 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 11:20:08.038325399 +0000 UTC m=+157.736900319 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 11:20:07 crc kubenswrapper[4867]: I0126 11:20:07.539735 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/aeb3191f-7e7a-4d94-b913-4f78b379f3e9-utilities\") pod \"community-operators-gcljn\" (UID: \"aeb3191f-7e7a-4d94-b913-4f78b379f3e9\") " pod="openshift-marketplace/community-operators-gcljn" Jan 26 11:20:07 crc kubenswrapper[4867]: I0126 11:20:07.539849 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/aeb3191f-7e7a-4d94-b913-4f78b379f3e9-catalog-content\") pod \"community-operators-gcljn\" (UID: \"aeb3191f-7e7a-4d94-b913-4f78b379f3e9\") " pod="openshift-marketplace/community-operators-gcljn" Jan 26 11:20:07 crc kubenswrapper[4867]: I0126 11:20:07.539878 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-skdxp\" (UID: \"b3348ed5-3007-4ff3-b77d-ecb758f238df\") " pod="openshift-image-registry/image-registry-697d97f7c8-skdxp" Jan 26 11:20:07 crc kubenswrapper[4867]: I0126 11:20:07.539927 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k4699\" (UniqueName: 
\"kubernetes.io/projected/aeb3191f-7e7a-4d94-b913-4f78b379f3e9-kube-api-access-k4699\") pod \"community-operators-gcljn\" (UID: \"aeb3191f-7e7a-4d94-b913-4f78b379f3e9\") " pod="openshift-marketplace/community-operators-gcljn" Jan 26 11:20:07 crc kubenswrapper[4867]: E0126 11:20:07.540364 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 11:20:08.040353994 +0000 UTC m=+157.738928914 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-skdxp" (UID: "b3348ed5-3007-4ff3-b77d-ecb758f238df") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 11:20:07 crc kubenswrapper[4867]: I0126 11:20:07.632755 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-4lhrp"] Jan 26 11:20:07 crc kubenswrapper[4867]: I0126 11:20:07.633891 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-4lhrp" Jan 26 11:20:07 crc kubenswrapper[4867]: I0126 11:20:07.640301 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 11:20:07 crc kubenswrapper[4867]: E0126 11:20:07.640519 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 11:20:08.14047956 +0000 UTC m=+157.839054460 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 11:20:07 crc kubenswrapper[4867]: I0126 11:20:07.641010 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rjghk\" (UniqueName: \"kubernetes.io/projected/8e8b11fb-b146-4307-b94e-515815b10c58-kube-api-access-rjghk\") pod \"certified-operators-4lhrp\" (UID: \"8e8b11fb-b146-4307-b94e-515815b10c58\") " pod="openshift-marketplace/certified-operators-4lhrp" Jan 26 11:20:07 crc kubenswrapper[4867]: I0126 11:20:07.641147 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/aeb3191f-7e7a-4d94-b913-4f78b379f3e9-catalog-content\") pod \"community-operators-gcljn\" (UID: \"aeb3191f-7e7a-4d94-b913-4f78b379f3e9\") " pod="openshift-marketplace/community-operators-gcljn" Jan 26 11:20:07 crc kubenswrapper[4867]: I0126 11:20:07.641289 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-skdxp\" (UID: \"b3348ed5-3007-4ff3-b77d-ecb758f238df\") " pod="openshift-image-registry/image-registry-697d97f7c8-skdxp" Jan 26 11:20:07 crc kubenswrapper[4867]: E0126 11:20:07.641694 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 11:20:08.141685942 +0000 UTC m=+157.840260852 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-skdxp" (UID: "b3348ed5-3007-4ff3-b77d-ecb758f238df") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 11:20:07 crc kubenswrapper[4867]: I0126 11:20:07.641934 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k4699\" (UniqueName: \"kubernetes.io/projected/aeb3191f-7e7a-4d94-b913-4f78b379f3e9-kube-api-access-k4699\") pod \"community-operators-gcljn\" (UID: \"aeb3191f-7e7a-4d94-b913-4f78b379f3e9\") " pod="openshift-marketplace/community-operators-gcljn" Jan 26 11:20:07 crc kubenswrapper[4867]: I0126 11:20:07.642513 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8e8b11fb-b146-4307-b94e-515815b10c58-catalog-content\") pod \"certified-operators-4lhrp\" (UID: \"8e8b11fb-b146-4307-b94e-515815b10c58\") " pod="openshift-marketplace/certified-operators-4lhrp" Jan 26 11:20:07 crc kubenswrapper[4867]: I0126 11:20:07.642657 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/aeb3191f-7e7a-4d94-b913-4f78b379f3e9-utilities\") pod \"community-operators-gcljn\" (UID: \"aeb3191f-7e7a-4d94-b913-4f78b379f3e9\") " pod="openshift-marketplace/community-operators-gcljn" Jan 26 11:20:07 crc kubenswrapper[4867]: I0126 11:20:07.643070 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8e8b11fb-b146-4307-b94e-515815b10c58-utilities\") pod \"certified-operators-4lhrp\" (UID: 
\"8e8b11fb-b146-4307-b94e-515815b10c58\") " pod="openshift-marketplace/certified-operators-4lhrp" Jan 26 11:20:07 crc kubenswrapper[4867]: I0126 11:20:07.642979 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/aeb3191f-7e7a-4d94-b913-4f78b379f3e9-utilities\") pod \"community-operators-gcljn\" (UID: \"aeb3191f-7e7a-4d94-b913-4f78b379f3e9\") " pod="openshift-marketplace/community-operators-gcljn" Jan 26 11:20:07 crc kubenswrapper[4867]: I0126 11:20:07.641783 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/aeb3191f-7e7a-4d94-b913-4f78b379f3e9-catalog-content\") pod \"community-operators-gcljn\" (UID: \"aeb3191f-7e7a-4d94-b913-4f78b379f3e9\") " pod="openshift-marketplace/community-operators-gcljn" Jan 26 11:20:07 crc kubenswrapper[4867]: I0126 11:20:07.666667 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-4lhrp"] Jan 26 11:20:07 crc kubenswrapper[4867]: I0126 11:20:07.679181 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k4699\" (UniqueName: \"kubernetes.io/projected/aeb3191f-7e7a-4d94-b913-4f78b379f3e9-kube-api-access-k4699\") pod \"community-operators-gcljn\" (UID: \"aeb3191f-7e7a-4d94-b913-4f78b379f3e9\") " pod="openshift-marketplace/community-operators-gcljn" Jan 26 11:20:07 crc kubenswrapper[4867]: I0126 11:20:07.722649 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-gcljn" Jan 26 11:20:07 crc kubenswrapper[4867]: I0126 11:20:07.744589 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 11:20:07 crc kubenswrapper[4867]: I0126 11:20:07.744896 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8e8b11fb-b146-4307-b94e-515815b10c58-utilities\") pod \"certified-operators-4lhrp\" (UID: \"8e8b11fb-b146-4307-b94e-515815b10c58\") " pod="openshift-marketplace/certified-operators-4lhrp" Jan 26 11:20:07 crc kubenswrapper[4867]: I0126 11:20:07.744927 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rjghk\" (UniqueName: \"kubernetes.io/projected/8e8b11fb-b146-4307-b94e-515815b10c58-kube-api-access-rjghk\") pod \"certified-operators-4lhrp\" (UID: \"8e8b11fb-b146-4307-b94e-515815b10c58\") " pod="openshift-marketplace/certified-operators-4lhrp" Jan 26 11:20:07 crc kubenswrapper[4867]: I0126 11:20:07.744982 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8e8b11fb-b146-4307-b94e-515815b10c58-catalog-content\") pod \"certified-operators-4lhrp\" (UID: \"8e8b11fb-b146-4307-b94e-515815b10c58\") " pod="openshift-marketplace/certified-operators-4lhrp" Jan 26 11:20:07 crc kubenswrapper[4867]: I0126 11:20:07.745888 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8e8b11fb-b146-4307-b94e-515815b10c58-catalog-content\") pod \"certified-operators-4lhrp\" (UID: \"8e8b11fb-b146-4307-b94e-515815b10c58\") 
" pod="openshift-marketplace/certified-operators-4lhrp" Jan 26 11:20:07 crc kubenswrapper[4867]: E0126 11:20:07.745973 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 11:20:08.24595339 +0000 UTC m=+157.944528290 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 11:20:07 crc kubenswrapper[4867]: I0126 11:20:07.746172 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8e8b11fb-b146-4307-b94e-515815b10c58-utilities\") pod \"certified-operators-4lhrp\" (UID: \"8e8b11fb-b146-4307-b94e-515815b10c58\") " pod="openshift-marketplace/certified-operators-4lhrp" Jan 26 11:20:07 crc kubenswrapper[4867]: I0126 11:20:07.831614 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rjghk\" (UniqueName: \"kubernetes.io/projected/8e8b11fb-b146-4307-b94e-515815b10c58-kube-api-access-rjghk\") pod \"certified-operators-4lhrp\" (UID: \"8e8b11fb-b146-4307-b94e-515815b10c58\") " pod="openshift-marketplace/certified-operators-4lhrp" Jan 26 11:20:07 crc kubenswrapper[4867]: I0126 11:20:07.839796 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-zvpfm" 
event={"ID":"f7d11034-ad81-48b6-bf3b-8597910b1adf","Type":"ContainerStarted","Data":"5e0812f18145ec4c695be7ad325c80851329bf079484d41392faf599b81d52d6"} Jan 26 11:20:07 crc kubenswrapper[4867]: I0126 11:20:07.841411 4867 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-hrqxh container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.25:8080/healthz\": dial tcp 10.217.0.25:8080: connect: connection refused" start-of-body= Jan 26 11:20:07 crc kubenswrapper[4867]: I0126 11:20:07.841793 4867 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-hrqxh" podUID="6ab404f5-5b14-49d4-80f4-2a84895d0a2f" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.25:8080/healthz\": dial tcp 10.217.0.25:8080: connect: connection refused" Jan 26 11:20:07 crc kubenswrapper[4867]: I0126 11:20:07.841835 4867 patch_prober.go:28] interesting pod/downloads-7954f5f757-ltvwb container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused" start-of-body= Jan 26 11:20:07 crc kubenswrapper[4867]: I0126 11:20:07.841908 4867 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-ltvwb" podUID="6a27bc25-3df1-4dd2-a51d-de8e2bb5070e" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused" Jan 26 11:20:07 crc kubenswrapper[4867]: I0126 11:20:07.852715 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-skdxp\" (UID: \"b3348ed5-3007-4ff3-b77d-ecb758f238df\") " 
pod="openshift-image-registry/image-registry-697d97f7c8-skdxp" Jan 26 11:20:07 crc kubenswrapper[4867]: E0126 11:20:07.857861 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 11:20:08.357833291 +0000 UTC m=+158.056408201 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-skdxp" (UID: "b3348ed5-3007-4ff3-b77d-ecb758f238df") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 11:20:07 crc kubenswrapper[4867]: I0126 11:20:07.882798 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-s8hx9"] Jan 26 11:20:07 crc kubenswrapper[4867]: I0126 11:20:07.893339 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-s8hx9" Jan 26 11:20:07 crc kubenswrapper[4867]: I0126 11:20:07.958862 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-4lhrp" Jan 26 11:20:07 crc kubenswrapper[4867]: I0126 11:20:07.962038 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 11:20:07 crc kubenswrapper[4867]: I0126 11:20:07.962365 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/adb6bffd-3a41-480b-85df-1f3489ce7007-utilities\") pod \"community-operators-s8hx9\" (UID: \"adb6bffd-3a41-480b-85df-1f3489ce7007\") " pod="openshift-marketplace/community-operators-s8hx9" Jan 26 11:20:07 crc kubenswrapper[4867]: I0126 11:20:07.962461 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/adb6bffd-3a41-480b-85df-1f3489ce7007-catalog-content\") pod \"community-operators-s8hx9\" (UID: \"adb6bffd-3a41-480b-85df-1f3489ce7007\") " pod="openshift-marketplace/community-operators-s8hx9" Jan 26 11:20:07 crc kubenswrapper[4867]: I0126 11:20:07.962493 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c8lfb\" (UniqueName: \"kubernetes.io/projected/adb6bffd-3a41-480b-85df-1f3489ce7007-kube-api-access-c8lfb\") pod \"community-operators-s8hx9\" (UID: \"adb6bffd-3a41-480b-85df-1f3489ce7007\") " pod="openshift-marketplace/community-operators-s8hx9" Jan 26 11:20:07 crc kubenswrapper[4867]: E0126 11:20:07.963548 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" 
failed. No retries permitted until 2026-01-26 11:20:08.463524967 +0000 UTC m=+158.162099877 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 11:20:07 crc kubenswrapper[4867]: I0126 11:20:07.981511 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-s8hx9"] Jan 26 11:20:08 crc kubenswrapper[4867]: I0126 11:20:08.027740 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-p8ngn" Jan 26 11:20:08 crc kubenswrapper[4867]: I0126 11:20:08.075406 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-skdxp\" (UID: \"b3348ed5-3007-4ff3-b77d-ecb758f238df\") " pod="openshift-image-registry/image-registry-697d97f7c8-skdxp" Jan 26 11:20:08 crc kubenswrapper[4867]: I0126 11:20:08.075533 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/adb6bffd-3a41-480b-85df-1f3489ce7007-utilities\") pod \"community-operators-s8hx9\" (UID: \"adb6bffd-3a41-480b-85df-1f3489ce7007\") " pod="openshift-marketplace/community-operators-s8hx9" Jan 26 11:20:08 crc kubenswrapper[4867]: I0126 11:20:08.075580 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/adb6bffd-3a41-480b-85df-1f3489ce7007-catalog-content\") pod \"community-operators-s8hx9\" (UID: \"adb6bffd-3a41-480b-85df-1f3489ce7007\") " pod="openshift-marketplace/community-operators-s8hx9" Jan 26 11:20:08 crc kubenswrapper[4867]: I0126 11:20:08.075628 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c8lfb\" (UniqueName: \"kubernetes.io/projected/adb6bffd-3a41-480b-85df-1f3489ce7007-kube-api-access-c8lfb\") pod \"community-operators-s8hx9\" (UID: \"adb6bffd-3a41-480b-85df-1f3489ce7007\") " pod="openshift-marketplace/community-operators-s8hx9" Jan 26 11:20:08 crc kubenswrapper[4867]: E0126 11:20:08.076218 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 11:20:08.57619915 +0000 UTC m=+158.274774060 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-skdxp" (UID: "b3348ed5-3007-4ff3-b77d-ecb758f238df") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 11:20:08 crc kubenswrapper[4867]: I0126 11:20:08.076613 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/adb6bffd-3a41-480b-85df-1f3489ce7007-utilities\") pod \"community-operators-s8hx9\" (UID: \"adb6bffd-3a41-480b-85df-1f3489ce7007\") " pod="openshift-marketplace/community-operators-s8hx9" Jan 26 11:20:08 crc kubenswrapper[4867]: I0126 11:20:08.076687 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/adb6bffd-3a41-480b-85df-1f3489ce7007-catalog-content\") pod \"community-operators-s8hx9\" (UID: \"adb6bffd-3a41-480b-85df-1f3489ce7007\") " pod="openshift-marketplace/community-operators-s8hx9" Jan 26 11:20:08 crc kubenswrapper[4867]: I0126 11:20:08.137333 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c8lfb\" (UniqueName: \"kubernetes.io/projected/adb6bffd-3a41-480b-85df-1f3489ce7007-kube-api-access-c8lfb\") pod \"community-operators-s8hx9\" (UID: \"adb6bffd-3a41-480b-85df-1f3489ce7007\") " pod="openshift-marketplace/community-operators-s8hx9" Jan 26 11:20:08 crc kubenswrapper[4867]: I0126 11:20:08.176912 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 11:20:08 crc kubenswrapper[4867]: E0126 11:20:08.177392 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 11:20:08.677376614 +0000 UTC m=+158.375951514 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 11:20:08 crc kubenswrapper[4867]: I0126 11:20:08.242239 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-s8hx9" Jan 26 11:20:08 crc kubenswrapper[4867]: I0126 11:20:08.282636 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-skdxp\" (UID: \"b3348ed5-3007-4ff3-b77d-ecb758f238df\") " pod="openshift-image-registry/image-registry-697d97f7c8-skdxp" Jan 26 11:20:08 crc kubenswrapper[4867]: E0126 11:20:08.283059 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 11:20:08.783041809 +0000 UTC m=+158.481616719 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-skdxp" (UID: "b3348ed5-3007-4ff3-b77d-ecb758f238df") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 11:20:08 crc kubenswrapper[4867]: I0126 11:20:08.392126 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 11:20:08 crc kubenswrapper[4867]: E0126 11:20:08.392624 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 
podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 11:20:08.892598998 +0000 UTC m=+158.591173908 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 11:20:08 crc kubenswrapper[4867]: I0126 11:20:08.422924 4867 patch_prober.go:28] interesting pod/router-default-5444994796-dmt7q container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 26 11:20:08 crc kubenswrapper[4867]: [-]has-synced failed: reason withheld Jan 26 11:20:08 crc kubenswrapper[4867]: [+]process-running ok Jan 26 11:20:08 crc kubenswrapper[4867]: healthz check failed Jan 26 11:20:08 crc kubenswrapper[4867]: I0126 11:20:08.423019 4867 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-dmt7q" podUID="ea07ea8f-1510-4609-949b-83a3aed3ddee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 26 11:20:08 crc kubenswrapper[4867]: I0126 11:20:08.495686 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-skdxp\" (UID: \"b3348ed5-3007-4ff3-b77d-ecb758f238df\") " pod="openshift-image-registry/image-registry-697d97f7c8-skdxp" Jan 26 11:20:08 crc kubenswrapper[4867]: E0126 11:20:08.496578 4867 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 11:20:08.996551327 +0000 UTC m=+158.695126287 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-skdxp" (UID: "b3348ed5-3007-4ff3-b77d-ecb758f238df") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 11:20:08 crc kubenswrapper[4867]: I0126 11:20:08.596994 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 11:20:08 crc kubenswrapper[4867]: E0126 11:20:08.597667 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 11:20:09.097641509 +0000 UTC m=+158.796216419 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 11:20:08 crc kubenswrapper[4867]: I0126 11:20:08.608977 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-ndd6w"] Jan 26 11:20:08 crc kubenswrapper[4867]: W0126 11:20:08.674802 4867 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod19cef52a_3ef4_4b1e_a52e_0ba6e01e49b5.slice/crio-41279b884ccc25176c3e4a7b6e499441ce118c2ec9026ee4fb49fb5ef170c8e3 WatchSource:0}: Error finding container 41279b884ccc25176c3e4a7b6e499441ce118c2ec9026ee4fb49fb5ef170c8e3: Status 404 returned error can't find the container with id 41279b884ccc25176c3e4a7b6e499441ce118c2ec9026ee4fb49fb5ef170c8e3 Jan 26 11:20:08 crc kubenswrapper[4867]: I0126 11:20:08.700460 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-skdxp\" (UID: \"b3348ed5-3007-4ff3-b77d-ecb758f238df\") " pod="openshift-image-registry/image-registry-697d97f7c8-skdxp" Jan 26 11:20:08 crc kubenswrapper[4867]: E0126 11:20:08.700820 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 11:20:09.200808517 +0000 UTC m=+158.899383427 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-skdxp" (UID: "b3348ed5-3007-4ff3-b77d-ecb758f238df") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 11:20:08 crc kubenswrapper[4867]: I0126 11:20:08.784744 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-f9d7485db-dc94j" Jan 26 11:20:08 crc kubenswrapper[4867]: I0126 11:20:08.784801 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-f9d7485db-dc94j" Jan 26 11:20:08 crc kubenswrapper[4867]: I0126 11:20:08.802937 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 11:20:08 crc kubenswrapper[4867]: E0126 11:20:08.803535 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 11:20:09.303512363 +0000 UTC m=+159.002087273 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 11:20:08 crc kubenswrapper[4867]: I0126 11:20:08.803661 4867 patch_prober.go:28] interesting pod/console-f9d7485db-dc94j container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.10:8443/health\": dial tcp 10.217.0.10:8443: connect: connection refused" start-of-body= Jan 26 11:20:08 crc kubenswrapper[4867]: I0126 11:20:08.803705 4867 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-dc94j" podUID="a721247b-3436-4bb4-bc5c-ab4e94db0b41" containerName="console" probeResult="failure" output="Get \"https://10.217.0.10:8443/health\": dial tcp 10.217.0.10:8443: connect: connection refused" Jan 26 11:20:08 crc kubenswrapper[4867]: I0126 11:20:08.854474 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-558db77b4-n9jb5" Jan 26 11:20:08 crc kubenswrapper[4867]: I0126 11:20:08.861788 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ndd6w" event={"ID":"19cef52a-3ef4-4b1e-a52e-0ba6e01e49b5","Type":"ContainerStarted","Data":"41279b884ccc25176c3e4a7b6e499441ce118c2ec9026ee4fb49fb5ef170c8e3"} Jan 26 11:20:08 crc kubenswrapper[4867]: I0126 11:20:08.871502 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-558db77b4-n9jb5" Jan 26 11:20:08 crc kubenswrapper[4867]: I0126 11:20:08.891735 4867 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Jan 26 11:20:08 crc kubenswrapper[4867]: I0126 11:20:08.893413 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 26 11:20:08 crc kubenswrapper[4867]: I0126 11:20:08.897304 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-4lhrp"] Jan 26 11:20:08 crc kubenswrapper[4867]: I0126 11:20:08.902648 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager"/"kube-root-ca.crt" Jan 26 11:20:08 crc kubenswrapper[4867]: I0126 11:20:08.902839 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager"/"installer-sa-dockercfg-kjl2n" Jan 26 11:20:08 crc kubenswrapper[4867]: I0126 11:20:08.904935 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-skdxp\" (UID: \"b3348ed5-3007-4ff3-b77d-ecb758f238df\") " pod="openshift-image-registry/image-registry-697d97f7c8-skdxp" Jan 26 11:20:08 crc kubenswrapper[4867]: E0126 11:20:08.907163 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 11:20:09.407139923 +0000 UTC m=+159.105714833 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-skdxp" (UID: "b3348ed5-3007-4ff3-b77d-ecb758f238df") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 11:20:08 crc kubenswrapper[4867]: I0126 11:20:08.929808 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Jan 26 11:20:08 crc kubenswrapper[4867]: I0126 11:20:08.960015 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-nxdt6" Jan 26 11:20:08 crc kubenswrapper[4867]: I0126 11:20:08.960176 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-nxdt6" Jan 26 11:20:09 crc kubenswrapper[4867]: I0126 11:20:09.009716 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-nxdt6" Jan 26 11:20:09 crc kubenswrapper[4867]: I0126 11:20:09.010634 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 11:20:09 crc kubenswrapper[4867]: I0126 11:20:09.010922 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/4cb6dd76-e4ab-483b-9848-c0892427e67b-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"4cb6dd76-e4ab-483b-9848-c0892427e67b\") " 
pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 26 11:20:09 crc kubenswrapper[4867]: I0126 11:20:09.011034 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4cb6dd76-e4ab-483b-9848-c0892427e67b-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"4cb6dd76-e4ab-483b-9848-c0892427e67b\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 26 11:20:09 crc kubenswrapper[4867]: E0126 11:20:09.011138 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 11:20:09.511122472 +0000 UTC m=+159.209697382 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 11:20:09 crc kubenswrapper[4867]: I0126 11:20:09.012285 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-gcljn"] Jan 26 11:20:09 crc kubenswrapper[4867]: I0126 11:20:09.106034 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-s8hx9"] Jan 26 11:20:09 crc kubenswrapper[4867]: I0126 11:20:09.122972 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/4cb6dd76-e4ab-483b-9848-c0892427e67b-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: 
\"4cb6dd76-e4ab-483b-9848-c0892427e67b\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 26 11:20:09 crc kubenswrapper[4867]: I0126 11:20:09.123099 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-skdxp\" (UID: \"b3348ed5-3007-4ff3-b77d-ecb758f238df\") " pod="openshift-image-registry/image-registry-697d97f7c8-skdxp" Jan 26 11:20:09 crc kubenswrapper[4867]: I0126 11:20:09.123102 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/4cb6dd76-e4ab-483b-9848-c0892427e67b-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"4cb6dd76-e4ab-483b-9848-c0892427e67b\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 26 11:20:09 crc kubenswrapper[4867]: I0126 11:20:09.123135 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4cb6dd76-e4ab-483b-9848-c0892427e67b-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"4cb6dd76-e4ab-483b-9848-c0892427e67b\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 26 11:20:09 crc kubenswrapper[4867]: E0126 11:20:09.123540 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 11:20:09.623524118 +0000 UTC m=+159.322099028 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-skdxp" (UID: "b3348ed5-3007-4ff3-b77d-ecb758f238df") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 11:20:09 crc kubenswrapper[4867]: I0126 11:20:09.152321 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4cb6dd76-e4ab-483b-9848-c0892427e67b-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"4cb6dd76-e4ab-483b-9848-c0892427e67b\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 26 11:20:09 crc kubenswrapper[4867]: I0126 11:20:09.201613 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-nstb5"] Jan 26 11:20:09 crc kubenswrapper[4867]: I0126 11:20:09.202773 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-nstb5" Jan 26 11:20:09 crc kubenswrapper[4867]: I0126 11:20:09.206412 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Jan 26 11:20:09 crc kubenswrapper[4867]: I0126 11:20:09.229125 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 11:20:09 crc kubenswrapper[4867]: E0126 11:20:09.229569 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 11:20:09.729545142 +0000 UTC m=+159.428120052 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 11:20:09 crc kubenswrapper[4867]: I0126 11:20:09.238848 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-apiserver/apiserver-76f77b778f-jvs97" Jan 26 11:20:09 crc kubenswrapper[4867]: I0126 11:20:09.238928 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-apiserver/apiserver-76f77b778f-jvs97" Jan 26 11:20:09 crc kubenswrapper[4867]: I0126 11:20:09.294171 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-nstb5"] Jan 26 11:20:09 crc kubenswrapper[4867]: I0126 11:20:09.329954 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 26 11:20:09 crc kubenswrapper[4867]: I0126 11:20:09.330810 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t6w66\" (UniqueName: \"kubernetes.io/projected/bf513a52-cfc2-49df-be04-4976f7399901-kube-api-access-t6w66\") pod \"redhat-marketplace-nstb5\" (UID: \"bf513a52-cfc2-49df-be04-4976f7399901\") " pod="openshift-marketplace/redhat-marketplace-nstb5" Jan 26 11:20:09 crc kubenswrapper[4867]: I0126 11:20:09.330973 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-skdxp\" (UID: \"b3348ed5-3007-4ff3-b77d-ecb758f238df\") " pod="openshift-image-registry/image-registry-697d97f7c8-skdxp" Jan 26 11:20:09 crc kubenswrapper[4867]: I0126 11:20:09.331090 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bf513a52-cfc2-49df-be04-4976f7399901-utilities\") pod \"redhat-marketplace-nstb5\" (UID: \"bf513a52-cfc2-49df-be04-4976f7399901\") " pod="openshift-marketplace/redhat-marketplace-nstb5" Jan 26 11:20:09 crc kubenswrapper[4867]: I0126 11:20:09.331255 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bf513a52-cfc2-49df-be04-4976f7399901-catalog-content\") pod \"redhat-marketplace-nstb5\" (UID: \"bf513a52-cfc2-49df-be04-4976f7399901\") " pod="openshift-marketplace/redhat-marketplace-nstb5" Jan 26 11:20:09 crc kubenswrapper[4867]: E0126 11:20:09.331757 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 
podName: nodeName:}" failed. No retries permitted until 2026-01-26 11:20:09.831738134 +0000 UTC m=+159.530313044 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-skdxp" (UID: "b3348ed5-3007-4ff3-b77d-ecb758f238df") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 11:20:09 crc kubenswrapper[4867]: I0126 11:20:09.391787 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-5444994796-dmt7q" Jan 26 11:20:09 crc kubenswrapper[4867]: I0126 11:20:09.395897 4867 patch_prober.go:28] interesting pod/router-default-5444994796-dmt7q container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 26 11:20:09 crc kubenswrapper[4867]: [-]has-synced failed: reason withheld Jan 26 11:20:09 crc kubenswrapper[4867]: [+]process-running ok Jan 26 11:20:09 crc kubenswrapper[4867]: healthz check failed Jan 26 11:20:09 crc kubenswrapper[4867]: I0126 11:20:09.395992 4867 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-dmt7q" podUID="ea07ea8f-1510-4609-949b-83a3aed3ddee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 26 11:20:09 crc kubenswrapper[4867]: I0126 11:20:09.433398 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: 
\"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 11:20:09 crc kubenswrapper[4867]: E0126 11:20:09.433670 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 11:20:09.933626128 +0000 UTC m=+159.632201038 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 11:20:09 crc kubenswrapper[4867]: I0126 11:20:09.433740 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bf513a52-cfc2-49df-be04-4976f7399901-utilities\") pod \"redhat-marketplace-nstb5\" (UID: \"bf513a52-cfc2-49df-be04-4976f7399901\") " pod="openshift-marketplace/redhat-marketplace-nstb5" Jan 26 11:20:09 crc kubenswrapper[4867]: I0126 11:20:09.433956 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bf513a52-cfc2-49df-be04-4976f7399901-catalog-content\") pod \"redhat-marketplace-nstb5\" (UID: \"bf513a52-cfc2-49df-be04-4976f7399901\") " pod="openshift-marketplace/redhat-marketplace-nstb5" Jan 26 11:20:09 crc kubenswrapper[4867]: I0126 11:20:09.434033 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t6w66\" (UniqueName: \"kubernetes.io/projected/bf513a52-cfc2-49df-be04-4976f7399901-kube-api-access-t6w66\") pod \"redhat-marketplace-nstb5\" (UID: 
\"bf513a52-cfc2-49df-be04-4976f7399901\") " pod="openshift-marketplace/redhat-marketplace-nstb5" Jan 26 11:20:09 crc kubenswrapper[4867]: I0126 11:20:09.434445 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bf513a52-cfc2-49df-be04-4976f7399901-utilities\") pod \"redhat-marketplace-nstb5\" (UID: \"bf513a52-cfc2-49df-be04-4976f7399901\") " pod="openshift-marketplace/redhat-marketplace-nstb5" Jan 26 11:20:09 crc kubenswrapper[4867]: I0126 11:20:09.434836 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bf513a52-cfc2-49df-be04-4976f7399901-catalog-content\") pod \"redhat-marketplace-nstb5\" (UID: \"bf513a52-cfc2-49df-be04-4976f7399901\") " pod="openshift-marketplace/redhat-marketplace-nstb5" Jan 26 11:20:09 crc kubenswrapper[4867]: I0126 11:20:09.435492 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-skdxp\" (UID: \"b3348ed5-3007-4ff3-b77d-ecb758f238df\") " pod="openshift-image-registry/image-registry-697d97f7c8-skdxp" Jan 26 11:20:09 crc kubenswrapper[4867]: E0126 11:20:09.436170 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 11:20:09.936151064 +0000 UTC m=+159.634725974 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-skdxp" (UID: "b3348ed5-3007-4ff3-b77d-ecb758f238df") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 11:20:09 crc kubenswrapper[4867]: I0126 11:20:09.457305 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t6w66\" (UniqueName: \"kubernetes.io/projected/bf513a52-cfc2-49df-be04-4976f7399901-kube-api-access-t6w66\") pod \"redhat-marketplace-nstb5\" (UID: \"bf513a52-cfc2-49df-be04-4976f7399901\") " pod="openshift-marketplace/redhat-marketplace-nstb5" Jan 26 11:20:09 crc kubenswrapper[4867]: I0126 11:20:09.536994 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 11:20:09 crc kubenswrapper[4867]: E0126 11:20:09.537583 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 11:20:10.037561816 +0000 UTC m=+159.736136726 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 11:20:09 crc kubenswrapper[4867]: I0126 11:20:09.544964 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-nstb5" Jan 26 11:20:09 crc kubenswrapper[4867]: I0126 11:20:09.552092 4867 patch_prober.go:28] interesting pod/downloads-7954f5f757-ltvwb container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused" start-of-body= Jan 26 11:20:09 crc kubenswrapper[4867]: I0126 11:20:09.552138 4867 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-ltvwb" podUID="6a27bc25-3df1-4dd2-a51d-de8e2bb5070e" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused" Jan 26 11:20:09 crc kubenswrapper[4867]: I0126 11:20:09.552345 4867 patch_prober.go:28] interesting pod/downloads-7954f5f757-ltvwb container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused" start-of-body= Jan 26 11:20:09 crc kubenswrapper[4867]: I0126 11:20:09.552454 4867 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-ltvwb" podUID="6a27bc25-3df1-4dd2-a51d-de8e2bb5070e" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.12:8080/\": dial tcp 
10.217.0.12:8080: connect: connection refused" Jan 26 11:20:09 crc kubenswrapper[4867]: I0126 11:20:09.579254 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-84kcf"] Jan 26 11:20:09 crc kubenswrapper[4867]: I0126 11:20:09.580582 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-84kcf" Jan 26 11:20:09 crc kubenswrapper[4867]: I0126 11:20:09.599843 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-84kcf"] Jan 26 11:20:09 crc kubenswrapper[4867]: I0126 11:20:09.638438 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7428579f-3d9c-4910-9e5c-b6694944afce-catalog-content\") pod \"redhat-marketplace-84kcf\" (UID: \"7428579f-3d9c-4910-9e5c-b6694944afce\") " pod="openshift-marketplace/redhat-marketplace-84kcf" Jan 26 11:20:09 crc kubenswrapper[4867]: I0126 11:20:09.638727 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z7w4q\" (UniqueName: \"kubernetes.io/projected/7428579f-3d9c-4910-9e5c-b6694944afce-kube-api-access-z7w4q\") pod \"redhat-marketplace-84kcf\" (UID: \"7428579f-3d9c-4910-9e5c-b6694944afce\") " pod="openshift-marketplace/redhat-marketplace-84kcf" Jan 26 11:20:09 crc kubenswrapper[4867]: I0126 11:20:09.638787 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-skdxp\" (UID: \"b3348ed5-3007-4ff3-b77d-ecb758f238df\") " pod="openshift-image-registry/image-registry-697d97f7c8-skdxp" Jan 26 11:20:09 crc kubenswrapper[4867]: I0126 11:20:09.638825 4867 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7428579f-3d9c-4910-9e5c-b6694944afce-utilities\") pod \"redhat-marketplace-84kcf\" (UID: \"7428579f-3d9c-4910-9e5c-b6694944afce\") " pod="openshift-marketplace/redhat-marketplace-84kcf" Jan 26 11:20:09 crc kubenswrapper[4867]: E0126 11:20:09.639173 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 11:20:10.139158741 +0000 UTC m=+159.837733651 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-skdxp" (UID: "b3348ed5-3007-4ff3-b77d-ecb758f238df") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 11:20:09 crc kubenswrapper[4867]: I0126 11:20:09.739951 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 11:20:09 crc kubenswrapper[4867]: E0126 11:20:09.740186 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 11:20:10.240156241 +0000 UTC m=+159.938731151 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 11:20:09 crc kubenswrapper[4867]: I0126 11:20:09.740370 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7428579f-3d9c-4910-9e5c-b6694944afce-catalog-content\") pod \"redhat-marketplace-84kcf\" (UID: \"7428579f-3d9c-4910-9e5c-b6694944afce\") " pod="openshift-marketplace/redhat-marketplace-84kcf" Jan 26 11:20:09 crc kubenswrapper[4867]: I0126 11:20:09.740415 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z7w4q\" (UniqueName: \"kubernetes.io/projected/7428579f-3d9c-4910-9e5c-b6694944afce-kube-api-access-z7w4q\") pod \"redhat-marketplace-84kcf\" (UID: \"7428579f-3d9c-4910-9e5c-b6694944afce\") " pod="openshift-marketplace/redhat-marketplace-84kcf" Jan 26 11:20:09 crc kubenswrapper[4867]: I0126 11:20:09.740496 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-skdxp\" (UID: \"b3348ed5-3007-4ff3-b77d-ecb758f238df\") " pod="openshift-image-registry/image-registry-697d97f7c8-skdxp" Jan 26 11:20:09 crc kubenswrapper[4867]: I0126 11:20:09.740548 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7428579f-3d9c-4910-9e5c-b6694944afce-utilities\") pod \"redhat-marketplace-84kcf\" (UID: 
\"7428579f-3d9c-4910-9e5c-b6694944afce\") " pod="openshift-marketplace/redhat-marketplace-84kcf" Jan 26 11:20:09 crc kubenswrapper[4867]: I0126 11:20:09.741786 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7428579f-3d9c-4910-9e5c-b6694944afce-utilities\") pod \"redhat-marketplace-84kcf\" (UID: \"7428579f-3d9c-4910-9e5c-b6694944afce\") " pod="openshift-marketplace/redhat-marketplace-84kcf" Jan 26 11:20:09 crc kubenswrapper[4867]: I0126 11:20:09.741873 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7428579f-3d9c-4910-9e5c-b6694944afce-catalog-content\") pod \"redhat-marketplace-84kcf\" (UID: \"7428579f-3d9c-4910-9e5c-b6694944afce\") " pod="openshift-marketplace/redhat-marketplace-84kcf" Jan 26 11:20:09 crc kubenswrapper[4867]: E0126 11:20:09.742132 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 11:20:10.242103933 +0000 UTC m=+159.940679053 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-skdxp" (UID: "b3348ed5-3007-4ff3-b77d-ecb758f238df") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 11:20:09 crc kubenswrapper[4867]: I0126 11:20:09.771468 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z7w4q\" (UniqueName: \"kubernetes.io/projected/7428579f-3d9c-4910-9e5c-b6694944afce-kube-api-access-z7w4q\") pod \"redhat-marketplace-84kcf\" (UID: \"7428579f-3d9c-4910-9e5c-b6694944afce\") " pod="openshift-marketplace/redhat-marketplace-84kcf" Jan 26 11:20:09 crc kubenswrapper[4867]: I0126 11:20:09.774799 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-hrqxh" Jan 26 11:20:09 crc kubenswrapper[4867]: I0126 11:20:09.832586 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Jan 26 11:20:09 crc kubenswrapper[4867]: I0126 11:20:09.844359 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 11:20:09 crc kubenswrapper[4867]: E0126 11:20:09.844857 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-01-26 11:20:10.344831139 +0000 UTC m=+160.043406049 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 11:20:09 crc kubenswrapper[4867]: I0126 11:20:09.845159 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-skdxp\" (UID: \"b3348ed5-3007-4ff3-b77d-ecb758f238df\") " pod="openshift-image-registry/image-registry-697d97f7c8-skdxp" Jan 26 11:20:09 crc kubenswrapper[4867]: E0126 11:20:09.846743 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 11:20:10.34673169 +0000 UTC m=+160.045306600 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-skdxp" (UID: "b3348ed5-3007-4ff3-b77d-ecb758f238df") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 11:20:09 crc kubenswrapper[4867]: I0126 11:20:09.883013 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"4cb6dd76-e4ab-483b-9848-c0892427e67b","Type":"ContainerStarted","Data":"1472a271b6bf8277c7ddd9411a7b7beac8ace343673e5f45d67cb7fd3aac5c58"} Jan 26 11:20:09 crc kubenswrapper[4867]: I0126 11:20:09.884407 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gcljn" event={"ID":"aeb3191f-7e7a-4d94-b913-4f78b379f3e9","Type":"ContainerStarted","Data":"c26e8acc208f5f08699660a2eaeffc6dca23fcbfab229c6ab366faef4c30634d"} Jan 26 11:20:09 crc kubenswrapper[4867]: I0126 11:20:09.886487 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-s8hx9" event={"ID":"adb6bffd-3a41-480b-85df-1f3489ce7007","Type":"ContainerStarted","Data":"a57cbca14f24cd948af68c4e2c280fef8ea173804ae8e6a7b5581cf99bb2b93c"} Jan 26 11:20:09 crc kubenswrapper[4867]: I0126 11:20:09.886527 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-s8hx9" event={"ID":"adb6bffd-3a41-480b-85df-1f3489ce7007","Type":"ContainerStarted","Data":"293e7ec5fdc21304ae837bf6d1dd6c8b144c60c354ab3e71aec649e159b75162"} Jan 26 11:20:09 crc kubenswrapper[4867]: I0126 11:20:09.890353 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-zvpfm" 
event={"ID":"f7d11034-ad81-48b6-bf3b-8597910b1adf","Type":"ContainerStarted","Data":"3773bcd55df50fb3e6002cdbf0803409923c187d132d3052758ac8fe2e0f3604"} Jan 26 11:20:09 crc kubenswrapper[4867]: I0126 11:20:09.925525 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ndd6w" event={"ID":"19cef52a-3ef4-4b1e-a52e-0ba6e01e49b5","Type":"ContainerStarted","Data":"5f4cf4c375a217e18b534e7dee1f20ae60a18095dbc5d513b2231f46b0bc301d"} Jan 26 11:20:09 crc kubenswrapper[4867]: I0126 11:20:09.931059 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4lhrp" event={"ID":"8e8b11fb-b146-4307-b94e-515815b10c58","Type":"ContainerStarted","Data":"3053d35b465551fd8b97b8321f0c6c33484bf381e37f2bf1234820e4df57737f"} Jan 26 11:20:09 crc kubenswrapper[4867]: I0126 11:20:09.931124 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4lhrp" event={"ID":"8e8b11fb-b146-4307-b94e-515815b10c58","Type":"ContainerStarted","Data":"f9c0abfc77ee202859b5e42e1e2159db15a1076e7a3e59e2f3362ef707ff808e"} Jan 26 11:20:09 crc kubenswrapper[4867]: I0126 11:20:09.940301 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-nxdt6" Jan 26 11:20:09 crc kubenswrapper[4867]: I0126 11:20:09.940630 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-84kcf" Jan 26 11:20:09 crc kubenswrapper[4867]: I0126 11:20:09.953453 4867 plugin_watcher.go:194] "Adding socket path or updating timestamp to desired state cache" path="/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock" Jan 26 11:20:09 crc kubenswrapper[4867]: I0126 11:20:09.960909 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 11:20:09 crc kubenswrapper[4867]: E0126 11:20:09.961153 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 11:20:10.461102579 +0000 UTC m=+160.159677489 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 11:20:09 crc kubenswrapper[4867]: I0126 11:20:09.961437 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-skdxp\" (UID: \"b3348ed5-3007-4ff3-b77d-ecb758f238df\") " pod="openshift-image-registry/image-registry-697d97f7c8-skdxp" Jan 26 11:20:09 crc kubenswrapper[4867]: E0126 11:20:09.961913 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 11:20:10.46189716 +0000 UTC m=+160.160472070 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-skdxp" (UID: "b3348ed5-3007-4ff3-b77d-ecb758f238df") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 11:20:10 crc kubenswrapper[4867]: I0126 11:20:10.055154 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-nstb5"] Jan 26 11:20:10 crc kubenswrapper[4867]: I0126 11:20:10.062914 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 11:20:10 crc kubenswrapper[4867]: E0126 11:20:10.063908 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 11:20:10.563848825 +0000 UTC m=+160.262423735 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 11:20:10 crc kubenswrapper[4867]: W0126 11:20:10.080488 4867 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbf513a52_cfc2_49df_be04_4976f7399901.slice/crio-b99c7215fc87ffd1b42d1f5c99696a4a85556ab16606e2d4b325743d58f3f170 WatchSource:0}: Error finding container b99c7215fc87ffd1b42d1f5c99696a4a85556ab16606e2d4b325743d58f3f170: Status 404 returned error can't find the container with id b99c7215fc87ffd1b42d1f5c99696a4a85556ab16606e2d4b325743d58f3f170 Jan 26 11:20:10 crc kubenswrapper[4867]: I0126 11:20:10.165142 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-skdxp\" (UID: \"b3348ed5-3007-4ff3-b77d-ecb758f238df\") " pod="openshift-image-registry/image-registry-697d97f7c8-skdxp" Jan 26 11:20:10 crc kubenswrapper[4867]: E0126 11:20:10.165764 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 11:20:10.665742819 +0000 UTC m=+160.364317729 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-skdxp" (UID: "b3348ed5-3007-4ff3-b77d-ecb758f238df") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 11:20:10 crc kubenswrapper[4867]: I0126 11:20:10.266965 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 11:20:10 crc kubenswrapper[4867]: E0126 11:20:10.267593 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 11:20:10.76756603 +0000 UTC m=+160.466140940 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 11:20:10 crc kubenswrapper[4867]: I0126 11:20:10.369975 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-skdxp\" (UID: \"b3348ed5-3007-4ff3-b77d-ecb758f238df\") " pod="openshift-image-registry/image-registry-697d97f7c8-skdxp" Jan 26 11:20:10 crc kubenswrapper[4867]: E0126 11:20:10.370466 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 11:20:10.870446611 +0000 UTC m=+160.569021521 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-skdxp" (UID: "b3348ed5-3007-4ff3-b77d-ecb758f238df") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 11:20:10 crc kubenswrapper[4867]: I0126 11:20:10.385737 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-84kcf"] Jan 26 11:20:10 crc kubenswrapper[4867]: I0126 11:20:10.405367 4867 patch_prober.go:28] interesting pod/router-default-5444994796-dmt7q container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 26 11:20:10 crc kubenswrapper[4867]: [-]has-synced failed: reason withheld Jan 26 11:20:10 crc kubenswrapper[4867]: [+]process-running ok Jan 26 11:20:10 crc kubenswrapper[4867]: healthz check failed Jan 26 11:20:10 crc kubenswrapper[4867]: I0126 11:20:10.405598 4867 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-dmt7q" podUID="ea07ea8f-1510-4609-949b-83a3aed3ddee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 26 11:20:10 crc kubenswrapper[4867]: I0126 11:20:10.471558 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 11:20:10 crc kubenswrapper[4867]: E0126 11:20:10.471929 4867 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 11:20:10.971910132 +0000 UTC m=+160.670485042 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 11:20:10 crc kubenswrapper[4867]: I0126 11:20:10.472026 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-skdxp\" (UID: \"b3348ed5-3007-4ff3-b77d-ecb758f238df\") " pod="openshift-image-registry/image-registry-697d97f7c8-skdxp" Jan 26 11:20:10 crc kubenswrapper[4867]: E0126 11:20:10.472327 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 11:20:10.972320433 +0000 UTC m=+160.670895343 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-skdxp" (UID: "b3348ed5-3007-4ff3-b77d-ecb758f238df") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 11:20:10 crc kubenswrapper[4867]: I0126 11:20:10.524291 4867 patch_prober.go:28] interesting pod/apiserver-76f77b778f-jvs97 container/openshift-apiserver namespace/openshift-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Jan 26 11:20:10 crc kubenswrapper[4867]: [+]log ok Jan 26 11:20:10 crc kubenswrapper[4867]: [+]etcd ok Jan 26 11:20:10 crc kubenswrapper[4867]: [+]poststarthook/start-apiserver-admission-initializer ok Jan 26 11:20:10 crc kubenswrapper[4867]: [+]poststarthook/generic-apiserver-start-informers ok Jan 26 11:20:10 crc kubenswrapper[4867]: [+]poststarthook/max-in-flight-filter ok Jan 26 11:20:10 crc kubenswrapper[4867]: [+]poststarthook/storage-object-count-tracker-hook ok Jan 26 11:20:10 crc kubenswrapper[4867]: [+]poststarthook/image.openshift.io-apiserver-caches ok Jan 26 11:20:10 crc kubenswrapper[4867]: [-]poststarthook/authorization.openshift.io-bootstrapclusterroles failed: reason withheld Jan 26 11:20:10 crc kubenswrapper[4867]: [-]poststarthook/authorization.openshift.io-ensurenodebootstrap-sa failed: reason withheld Jan 26 11:20:10 crc kubenswrapper[4867]: [+]poststarthook/project.openshift.io-projectcache ok Jan 26 11:20:10 crc kubenswrapper[4867]: [+]poststarthook/project.openshift.io-projectauthorizationcache ok Jan 26 11:20:10 crc kubenswrapper[4867]: [+]poststarthook/openshift.io-startinformers ok Jan 26 11:20:10 crc kubenswrapper[4867]: [+]poststarthook/openshift.io-restmapperupdater ok Jan 26 11:20:10 crc 
kubenswrapper[4867]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok Jan 26 11:20:10 crc kubenswrapper[4867]: livez check failed Jan 26 11:20:10 crc kubenswrapper[4867]: I0126 11:20:10.525010 4867 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-apiserver/apiserver-76f77b778f-jvs97" podUID="6203c5b2-2d8f-46c5-a31c-59190d111d7d" containerName="openshift-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 26 11:20:10 crc kubenswrapper[4867]: I0126 11:20:10.572703 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 11:20:10 crc kubenswrapper[4867]: E0126 11:20:10.572866 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 11:20:11.07284311 +0000 UTC m=+160.771418030 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 11:20:10 crc kubenswrapper[4867]: I0126 11:20:10.573346 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-skdxp\" (UID: \"b3348ed5-3007-4ff3-b77d-ecb758f238df\") " pod="openshift-image-registry/image-registry-697d97f7c8-skdxp" Jan 26 11:20:10 crc kubenswrapper[4867]: E0126 11:20:10.573711 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 11:20:11.073697643 +0000 UTC m=+160.772272553 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-skdxp" (UID: "b3348ed5-3007-4ff3-b77d-ecb758f238df") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 11:20:10 crc kubenswrapper[4867]: I0126 11:20:10.674340 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 11:20:10 crc kubenswrapper[4867]: E0126 11:20:10.674776 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 11:20:11.174749425 +0000 UTC m=+160.873324335 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 11:20:10 crc kubenswrapper[4867]: I0126 11:20:10.774189 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-h6sjf"] Jan 26 11:20:10 crc kubenswrapper[4867]: I0126 11:20:10.775429 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-h6sjf" Jan 26 11:20:10 crc kubenswrapper[4867]: I0126 11:20:10.776042 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-skdxp\" (UID: \"b3348ed5-3007-4ff3-b77d-ecb758f238df\") " pod="openshift-image-registry/image-registry-697d97f7c8-skdxp" Jan 26 11:20:10 crc kubenswrapper[4867]: E0126 11:20:10.776378 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 11:20:11.276362211 +0000 UTC m=+160.974937121 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-skdxp" (UID: "b3348ed5-3007-4ff3-b77d-ecb758f238df") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 11:20:10 crc kubenswrapper[4867]: I0126 11:20:10.777095 4867 reconciler.go:161] "OperationExecutor.RegisterPlugin started" plugin={"SocketPath":"/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock","Timestamp":"2026-01-26T11:20:09.953478424Z","Handler":null,"Name":""} Jan 26 11:20:10 crc kubenswrapper[4867]: I0126 11:20:10.779192 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Jan 26 11:20:10 crc kubenswrapper[4867]: I0126 11:20:10.783775 4867 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: 
kubevirt.io.hostpath-provisioner endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock versions: 1.0.0 Jan 26 11:20:10 crc kubenswrapper[4867]: I0126 11:20:10.783803 4867 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: kubevirt.io.hostpath-provisioner at endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock Jan 26 11:20:10 crc kubenswrapper[4867]: I0126 11:20:10.791948 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-h6sjf"] Jan 26 11:20:10 crc kubenswrapper[4867]: I0126 11:20:10.877480 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 11:20:10 crc kubenswrapper[4867]: I0126 11:20:10.877773 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/331bacc3-9595-492a-9e20-ef8007ccc10a-utilities\") pod \"redhat-operators-h6sjf\" (UID: \"331bacc3-9595-492a-9e20-ef8007ccc10a\") " pod="openshift-marketplace/redhat-operators-h6sjf" Jan 26 11:20:10 crc kubenswrapper[4867]: I0126 11:20:10.877849 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tt67p\" (UniqueName: \"kubernetes.io/projected/331bacc3-9595-492a-9e20-ef8007ccc10a-kube-api-access-tt67p\") pod \"redhat-operators-h6sjf\" (UID: \"331bacc3-9595-492a-9e20-ef8007ccc10a\") " pod="openshift-marketplace/redhat-operators-h6sjf" Jan 26 11:20:10 crc kubenswrapper[4867]: I0126 11:20:10.877883 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/331bacc3-9595-492a-9e20-ef8007ccc10a-catalog-content\") 
pod \"redhat-operators-h6sjf\" (UID: \"331bacc3-9595-492a-9e20-ef8007ccc10a\") " pod="openshift-marketplace/redhat-operators-h6sjf" Jan 26 11:20:10 crc kubenswrapper[4867]: I0126 11:20:10.882210 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue "" Jan 26 11:20:10 crc kubenswrapper[4867]: I0126 11:20:10.941138 4867 generic.go:334] "Generic (PLEG): container finished" podID="64c6d7e3-5fb6-4242-b616-2628ca519c8e" containerID="172d104e20afa961a17964f863eab3270fbfc0738a332d7f23c11d134a0ffdbe" exitCode=0 Jan 26 11:20:10 crc kubenswrapper[4867]: I0126 11:20:10.941233 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29490435-gd8xn" event={"ID":"64c6d7e3-5fb6-4242-b616-2628ca519c8e","Type":"ContainerDied","Data":"172d104e20afa961a17964f863eab3270fbfc0738a332d7f23c11d134a0ffdbe"} Jan 26 11:20:10 crc kubenswrapper[4867]: I0126 11:20:10.944049 4867 generic.go:334] "Generic (PLEG): container finished" podID="adb6bffd-3a41-480b-85df-1f3489ce7007" containerID="a57cbca14f24cd948af68c4e2c280fef8ea173804ae8e6a7b5581cf99bb2b93c" exitCode=0 Jan 26 11:20:10 crc kubenswrapper[4867]: I0126 11:20:10.944161 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-s8hx9" event={"ID":"adb6bffd-3a41-480b-85df-1f3489ce7007","Type":"ContainerDied","Data":"a57cbca14f24cd948af68c4e2c280fef8ea173804ae8e6a7b5581cf99bb2b93c"} Jan 26 11:20:10 crc kubenswrapper[4867]: I0126 11:20:10.946027 4867 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 26 11:20:10 crc 
kubenswrapper[4867]: I0126 11:20:10.947150 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-zvpfm" event={"ID":"f7d11034-ad81-48b6-bf3b-8597910b1adf","Type":"ContainerStarted","Data":"386dfb1584b6b212c54ac846adddbad0b18afce7166b405852d11cf37d1179dd"} Jan 26 11:20:10 crc kubenswrapper[4867]: I0126 11:20:10.951355 4867 generic.go:334] "Generic (PLEG): container finished" podID="19cef52a-3ef4-4b1e-a52e-0ba6e01e49b5" containerID="5f4cf4c375a217e18b534e7dee1f20ae60a18095dbc5d513b2231f46b0bc301d" exitCode=0 Jan 26 11:20:10 crc kubenswrapper[4867]: I0126 11:20:10.951630 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ndd6w" event={"ID":"19cef52a-3ef4-4b1e-a52e-0ba6e01e49b5","Type":"ContainerDied","Data":"5f4cf4c375a217e18b534e7dee1f20ae60a18095dbc5d513b2231f46b0bc301d"} Jan 26 11:20:10 crc kubenswrapper[4867]: I0126 11:20:10.953657 4867 generic.go:334] "Generic (PLEG): container finished" podID="8e8b11fb-b146-4307-b94e-515815b10c58" containerID="3053d35b465551fd8b97b8321f0c6c33484bf381e37f2bf1234820e4df57737f" exitCode=0 Jan 26 11:20:10 crc kubenswrapper[4867]: I0126 11:20:10.953712 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4lhrp" event={"ID":"8e8b11fb-b146-4307-b94e-515815b10c58","Type":"ContainerDied","Data":"3053d35b465551fd8b97b8321f0c6c33484bf381e37f2bf1234820e4df57737f"} Jan 26 11:20:10 crc kubenswrapper[4867]: I0126 11:20:10.961530 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-84kcf" event={"ID":"7428579f-3d9c-4910-9e5c-b6694944afce","Type":"ContainerStarted","Data":"96160a369f46b18d6c27d8cd8354d694d8bf652ceb434aaf1263e83afa381b2b"} Jan 26 11:20:10 crc kubenswrapper[4867]: I0126 11:20:10.961569 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-84kcf" 
event={"ID":"7428579f-3d9c-4910-9e5c-b6694944afce","Type":"ContainerStarted","Data":"07d9ba90be0306d5dd4c2f213ceaaa2d0a5cfe8d41f0bf6fe4c620792a9a1a81"} Jan 26 11:20:10 crc kubenswrapper[4867]: I0126 11:20:10.966567 4867 generic.go:334] "Generic (PLEG): container finished" podID="bf513a52-cfc2-49df-be04-4976f7399901" containerID="3715b8b883e07b4da58c22940d64775022219675ef2406542d6e8bdb2b5ad624" exitCode=0 Jan 26 11:20:10 crc kubenswrapper[4867]: I0126 11:20:10.966650 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nstb5" event={"ID":"bf513a52-cfc2-49df-be04-4976f7399901","Type":"ContainerDied","Data":"3715b8b883e07b4da58c22940d64775022219675ef2406542d6e8bdb2b5ad624"} Jan 26 11:20:10 crc kubenswrapper[4867]: I0126 11:20:10.966684 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nstb5" event={"ID":"bf513a52-cfc2-49df-be04-4976f7399901","Type":"ContainerStarted","Data":"b99c7215fc87ffd1b42d1f5c99696a4a85556ab16606e2d4b325743d58f3f170"} Jan 26 11:20:10 crc kubenswrapper[4867]: I0126 11:20:10.969198 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"4cb6dd76-e4ab-483b-9848-c0892427e67b","Type":"ContainerStarted","Data":"cef49f4b4226d5ffbf40dc6d754173011d1044c521bea76273ca98ac63f7087a"} Jan 26 11:20:10 crc kubenswrapper[4867]: I0126 11:20:10.973517 4867 generic.go:334] "Generic (PLEG): container finished" podID="aeb3191f-7e7a-4d94-b913-4f78b379f3e9" containerID="befd8d762f53551f5b5e4e33da373b7bf64718acb2fd2b021ea7321d878b11ee" exitCode=0 Jan 26 11:20:10 crc kubenswrapper[4867]: I0126 11:20:10.973816 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gcljn" event={"ID":"aeb3191f-7e7a-4d94-b913-4f78b379f3e9","Type":"ContainerDied","Data":"befd8d762f53551f5b5e4e33da373b7bf64718acb2fd2b021ea7321d878b11ee"} Jan 26 11:20:10 crc 
kubenswrapper[4867]: I0126 11:20:10.987026 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/331bacc3-9595-492a-9e20-ef8007ccc10a-utilities\") pod \"redhat-operators-h6sjf\" (UID: \"331bacc3-9595-492a-9e20-ef8007ccc10a\") " pod="openshift-marketplace/redhat-operators-h6sjf" Jan 26 11:20:10 crc kubenswrapper[4867]: I0126 11:20:10.995037 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tt67p\" (UniqueName: \"kubernetes.io/projected/331bacc3-9595-492a-9e20-ef8007ccc10a-kube-api-access-tt67p\") pod \"redhat-operators-h6sjf\" (UID: \"331bacc3-9595-492a-9e20-ef8007ccc10a\") " pod="openshift-marketplace/redhat-operators-h6sjf" Jan 26 11:20:10 crc kubenswrapper[4867]: I0126 11:20:10.995188 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/331bacc3-9595-492a-9e20-ef8007ccc10a-utilities\") pod \"redhat-operators-h6sjf\" (UID: \"331bacc3-9595-492a-9e20-ef8007ccc10a\") " pod="openshift-marketplace/redhat-operators-h6sjf" Jan 26 11:20:10 crc kubenswrapper[4867]: I0126 11:20:10.995302 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/331bacc3-9595-492a-9e20-ef8007ccc10a-catalog-content\") pod \"redhat-operators-h6sjf\" (UID: \"331bacc3-9595-492a-9e20-ef8007ccc10a\") " pod="openshift-marketplace/redhat-operators-h6sjf" Jan 26 11:20:10 crc kubenswrapper[4867]: I0126 11:20:10.995399 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-skdxp\" (UID: \"b3348ed5-3007-4ff3-b77d-ecb758f238df\") " pod="openshift-image-registry/image-registry-697d97f7c8-skdxp" Jan 26 11:20:10 crc 
kubenswrapper[4867]: I0126 11:20:10.996303 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/331bacc3-9595-492a-9e20-ef8007ccc10a-catalog-content\") pod \"redhat-operators-h6sjf\" (UID: \"331bacc3-9595-492a-9e20-ef8007ccc10a\") " pod="openshift-marketplace/redhat-operators-h6sjf" Jan 26 11:20:11 crc kubenswrapper[4867]: I0126 11:20:11.000812 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="hostpath-provisioner/csi-hostpathplugin-zvpfm" podStartSLOduration=15.000375241 podStartE2EDuration="15.000375241s" podCreationTimestamp="2026-01-26 11:19:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 11:20:10.979193632 +0000 UTC m=+160.677768552" watchObservedRunningTime="2026-01-26 11:20:11.000375241 +0000 UTC m=+160.698950171" Jan 26 11:20:11 crc kubenswrapper[4867]: I0126 11:20:11.006827 4867 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Jan 26 11:20:11 crc kubenswrapper[4867]: I0126 11:20:11.006873 4867 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-skdxp\" (UID: \"b3348ed5-3007-4ff3-b77d-ecb758f238df\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount\"" pod="openshift-image-registry/image-registry-697d97f7c8-skdxp" Jan 26 11:20:11 crc kubenswrapper[4867]: I0126 11:20:11.020571 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tt67p\" (UniqueName: \"kubernetes.io/projected/331bacc3-9595-492a-9e20-ef8007ccc10a-kube-api-access-tt67p\") pod \"redhat-operators-h6sjf\" (UID: \"331bacc3-9595-492a-9e20-ef8007ccc10a\") " pod="openshift-marketplace/redhat-operators-h6sjf" Jan 26 11:20:11 crc kubenswrapper[4867]: I0126 11:20:11.057301 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-skdxp\" (UID: \"b3348ed5-3007-4ff3-b77d-ecb758f238df\") " pod="openshift-image-registry/image-registry-697d97f7c8-skdxp" Jan 26 11:20:11 crc kubenswrapper[4867]: I0126 11:20:11.094501 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-h6sjf" Jan 26 11:20:11 crc kubenswrapper[4867]: I0126 11:20:11.152832 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/revision-pruner-9-crc" podStartSLOduration=3.15281247 podStartE2EDuration="3.15281247s" podCreationTimestamp="2026-01-26 11:20:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 11:20:11.152099051 +0000 UTC m=+160.850673961" watchObservedRunningTime="2026-01-26 11:20:11.15281247 +0000 UTC m=+160.851387380" Jan 26 11:20:11 crc kubenswrapper[4867]: I0126 11:20:11.177795 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-mbhb4"] Jan 26 11:20:11 crc kubenswrapper[4867]: I0126 11:20:11.179605 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-mbhb4" Jan 26 11:20:11 crc kubenswrapper[4867]: I0126 11:20:11.193474 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-mbhb4"] Jan 26 11:20:11 crc kubenswrapper[4867]: I0126 11:20:11.301577 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9b92w\" (UniqueName: \"kubernetes.io/projected/4c3ed719-d8a0-4f47-b0f1-9e635825152a-kube-api-access-9b92w\") pod \"redhat-operators-mbhb4\" (UID: \"4c3ed719-d8a0-4f47-b0f1-9e635825152a\") " pod="openshift-marketplace/redhat-operators-mbhb4" Jan 26 11:20:11 crc kubenswrapper[4867]: I0126 11:20:11.301674 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4c3ed719-d8a0-4f47-b0f1-9e635825152a-catalog-content\") pod \"redhat-operators-mbhb4\" (UID: \"4c3ed719-d8a0-4f47-b0f1-9e635825152a\") " 
pod="openshift-marketplace/redhat-operators-mbhb4" Jan 26 11:20:11 crc kubenswrapper[4867]: I0126 11:20:11.302086 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4c3ed719-d8a0-4f47-b0f1-9e635825152a-utilities\") pod \"redhat-operators-mbhb4\" (UID: \"4c3ed719-d8a0-4f47-b0f1-9e635825152a\") " pod="openshift-marketplace/redhat-operators-mbhb4" Jan 26 11:20:11 crc kubenswrapper[4867]: I0126 11:20:11.314910 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-skdxp" Jan 26 11:20:11 crc kubenswrapper[4867]: I0126 11:20:11.395334 4867 patch_prober.go:28] interesting pod/router-default-5444994796-dmt7q container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 26 11:20:11 crc kubenswrapper[4867]: [-]has-synced failed: reason withheld Jan 26 11:20:11 crc kubenswrapper[4867]: [+]process-running ok Jan 26 11:20:11 crc kubenswrapper[4867]: healthz check failed Jan 26 11:20:11 crc kubenswrapper[4867]: I0126 11:20:11.395460 4867 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-dmt7q" podUID="ea07ea8f-1510-4609-949b-83a3aed3ddee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 26 11:20:11 crc kubenswrapper[4867]: I0126 11:20:11.403068 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9b92w\" (UniqueName: \"kubernetes.io/projected/4c3ed719-d8a0-4f47-b0f1-9e635825152a-kube-api-access-9b92w\") pod \"redhat-operators-mbhb4\" (UID: \"4c3ed719-d8a0-4f47-b0f1-9e635825152a\") " pod="openshift-marketplace/redhat-operators-mbhb4" Jan 26 11:20:11 crc kubenswrapper[4867]: I0126 11:20:11.403173 4867 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4c3ed719-d8a0-4f47-b0f1-9e635825152a-catalog-content\") pod \"redhat-operators-mbhb4\" (UID: \"4c3ed719-d8a0-4f47-b0f1-9e635825152a\") " pod="openshift-marketplace/redhat-operators-mbhb4" Jan 26 11:20:11 crc kubenswrapper[4867]: I0126 11:20:11.403251 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4c3ed719-d8a0-4f47-b0f1-9e635825152a-utilities\") pod \"redhat-operators-mbhb4\" (UID: \"4c3ed719-d8a0-4f47-b0f1-9e635825152a\") " pod="openshift-marketplace/redhat-operators-mbhb4" Jan 26 11:20:11 crc kubenswrapper[4867]: I0126 11:20:11.403789 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4c3ed719-d8a0-4f47-b0f1-9e635825152a-catalog-content\") pod \"redhat-operators-mbhb4\" (UID: \"4c3ed719-d8a0-4f47-b0f1-9e635825152a\") " pod="openshift-marketplace/redhat-operators-mbhb4" Jan 26 11:20:11 crc kubenswrapper[4867]: I0126 11:20:11.403813 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4c3ed719-d8a0-4f47-b0f1-9e635825152a-utilities\") pod \"redhat-operators-mbhb4\" (UID: \"4c3ed719-d8a0-4f47-b0f1-9e635825152a\") " pod="openshift-marketplace/redhat-operators-mbhb4" Jan 26 11:20:11 crc kubenswrapper[4867]: I0126 11:20:11.424157 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9b92w\" (UniqueName: \"kubernetes.io/projected/4c3ed719-d8a0-4f47-b0f1-9e635825152a-kube-api-access-9b92w\") pod \"redhat-operators-mbhb4\" (UID: \"4c3ed719-d8a0-4f47-b0f1-9e635825152a\") " pod="openshift-marketplace/redhat-operators-mbhb4" Jan 26 11:20:11 crc kubenswrapper[4867]: I0126 11:20:11.496531 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-mbhb4" Jan 26 11:20:11 crc kubenswrapper[4867]: I0126 11:20:11.758589 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-h6sjf"] Jan 26 11:20:11 crc kubenswrapper[4867]: W0126 11:20:11.826021 4867 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod331bacc3_9595_492a_9e20_ef8007ccc10a.slice/crio-82c0d865102e1d222690a85bdb5ce1ec196b42b62c512ed0dc8f9508052f95fe WatchSource:0}: Error finding container 82c0d865102e1d222690a85bdb5ce1ec196b42b62c512ed0dc8f9508052f95fe: Status 404 returned error can't find the container with id 82c0d865102e1d222690a85bdb5ce1ec196b42b62c512ed0dc8f9508052f95fe Jan 26 11:20:11 crc kubenswrapper[4867]: I0126 11:20:11.977637 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-skdxp"] Jan 26 11:20:12 crc kubenswrapper[4867]: I0126 11:20:12.069425 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-h6sjf" event={"ID":"331bacc3-9595-492a-9e20-ef8007ccc10a","Type":"ContainerStarted","Data":"82c0d865102e1d222690a85bdb5ce1ec196b42b62c512ed0dc8f9508052f95fe"} Jan 26 11:20:12 crc kubenswrapper[4867]: W0126 11:20:12.073903 4867 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb3348ed5_3007_4ff3_b77d_ecb758f238df.slice/crio-c055283b690ddf9009e8d64db314c99237e22f6f60bea0ba4c50fb7d893bffa2 WatchSource:0}: Error finding container c055283b690ddf9009e8d64db314c99237e22f6f60bea0ba4c50fb7d893bffa2: Status 404 returned error can't find the container with id c055283b690ddf9009e8d64db314c99237e22f6f60bea0ba4c50fb7d893bffa2 Jan 26 11:20:12 crc kubenswrapper[4867]: I0126 11:20:12.075110 4867 generic.go:334] "Generic (PLEG): container finished" podID="7428579f-3d9c-4910-9e5c-b6694944afce" 
containerID="96160a369f46b18d6c27d8cd8354d694d8bf652ceb434aaf1263e83afa381b2b" exitCode=0 Jan 26 11:20:12 crc kubenswrapper[4867]: I0126 11:20:12.076040 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-84kcf" event={"ID":"7428579f-3d9c-4910-9e5c-b6694944afce","Type":"ContainerDied","Data":"96160a369f46b18d6c27d8cd8354d694d8bf652ceb434aaf1263e83afa381b2b"} Jan 26 11:20:12 crc kubenswrapper[4867]: I0126 11:20:12.105871 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-mbhb4"] Jan 26 11:20:12 crc kubenswrapper[4867]: I0126 11:20:12.506280 4867 patch_prober.go:28] interesting pod/router-default-5444994796-dmt7q container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 26 11:20:12 crc kubenswrapper[4867]: [-]has-synced failed: reason withheld Jan 26 11:20:12 crc kubenswrapper[4867]: [+]process-running ok Jan 26 11:20:12 crc kubenswrapper[4867]: healthz check failed Jan 26 11:20:12 crc kubenswrapper[4867]: I0126 11:20:12.506797 4867 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-dmt7q" podUID="ea07ea8f-1510-4609-949b-83a3aed3ddee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 26 11:20:12 crc kubenswrapper[4867]: I0126 11:20:12.631547 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f668bae-612b-4b75-9490-919e737c6a3b" path="/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes" Jan 26 11:20:12 crc kubenswrapper[4867]: I0126 11:20:12.924270 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29490435-gd8xn" Jan 26 11:20:13 crc kubenswrapper[4867]: I0126 11:20:13.087975 4867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29490435-gd8xn" Jan 26 11:20:13 crc kubenswrapper[4867]: I0126 11:20:13.088173 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29490435-gd8xn" event={"ID":"64c6d7e3-5fb6-4242-b616-2628ca519c8e","Type":"ContainerDied","Data":"4f3f66556ef8fb33aceb669fe25d280f7c339e4e070a22906ea33e5c940051d9"} Jan 26 11:20:13 crc kubenswrapper[4867]: I0126 11:20:13.089016 4867 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4f3f66556ef8fb33aceb669fe25d280f7c339e4e070a22906ea33e5c940051d9" Jan 26 11:20:13 crc kubenswrapper[4867]: I0126 11:20:13.093995 4867 generic.go:334] "Generic (PLEG): container finished" podID="4c3ed719-d8a0-4f47-b0f1-9e635825152a" containerID="b97361027d79323abac8b8c10feb5b6683ccf4a76aaa6aada100285864f82dae" exitCode=0 Jan 26 11:20:13 crc kubenswrapper[4867]: I0126 11:20:13.094115 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mbhb4" event={"ID":"4c3ed719-d8a0-4f47-b0f1-9e635825152a","Type":"ContainerDied","Data":"b97361027d79323abac8b8c10feb5b6683ccf4a76aaa6aada100285864f82dae"} Jan 26 11:20:13 crc kubenswrapper[4867]: I0126 11:20:13.094158 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mbhb4" event={"ID":"4c3ed719-d8a0-4f47-b0f1-9e635825152a","Type":"ContainerStarted","Data":"08bb4ab708f036e9f405daf729174e8a2d3766e77d65178196176ba7cd984cdf"} Jan 26 11:20:13 crc kubenswrapper[4867]: I0126 11:20:13.096643 4867 generic.go:334] "Generic (PLEG): container finished" podID="331bacc3-9595-492a-9e20-ef8007ccc10a" containerID="79711ed2b7e82452e1b4dc06984970b24f5b20ab3fdd9da15a31090b0d5f3a2d" exitCode=0 Jan 26 11:20:13 crc kubenswrapper[4867]: I0126 11:20:13.096718 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-h6sjf" 
event={"ID":"331bacc3-9595-492a-9e20-ef8007ccc10a","Type":"ContainerDied","Data":"79711ed2b7e82452e1b4dc06984970b24f5b20ab3fdd9da15a31090b0d5f3a2d"} Jan 26 11:20:13 crc kubenswrapper[4867]: I0126 11:20:13.109170 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-skdxp" event={"ID":"b3348ed5-3007-4ff3-b77d-ecb758f238df","Type":"ContainerStarted","Data":"d65fed487f8872774ff9062bdfbd8def8c0c8b8df10dfd3e8160b8df411cdb9b"} Jan 26 11:20:13 crc kubenswrapper[4867]: I0126 11:20:13.109245 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-skdxp" event={"ID":"b3348ed5-3007-4ff3-b77d-ecb758f238df","Type":"ContainerStarted","Data":"c055283b690ddf9009e8d64db314c99237e22f6f60bea0ba4c50fb7d893bffa2"} Jan 26 11:20:13 crc kubenswrapper[4867]: I0126 11:20:13.112241 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-697d97f7c8-skdxp" Jan 26 11:20:13 crc kubenswrapper[4867]: I0126 11:20:13.124630 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/64c6d7e3-5fb6-4242-b616-2628ca519c8e-config-volume\") pod \"64c6d7e3-5fb6-4242-b616-2628ca519c8e\" (UID: \"64c6d7e3-5fb6-4242-b616-2628ca519c8e\") " Jan 26 11:20:13 crc kubenswrapper[4867]: I0126 11:20:13.124768 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p4bp2\" (UniqueName: \"kubernetes.io/projected/64c6d7e3-5fb6-4242-b616-2628ca519c8e-kube-api-access-p4bp2\") pod \"64c6d7e3-5fb6-4242-b616-2628ca519c8e\" (UID: \"64c6d7e3-5fb6-4242-b616-2628ca519c8e\") " Jan 26 11:20:13 crc kubenswrapper[4867]: I0126 11:20:13.124907 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/64c6d7e3-5fb6-4242-b616-2628ca519c8e-secret-volume\") pod 
\"64c6d7e3-5fb6-4242-b616-2628ca519c8e\" (UID: \"64c6d7e3-5fb6-4242-b616-2628ca519c8e\") " Jan 26 11:20:13 crc kubenswrapper[4867]: I0126 11:20:13.127279 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/64c6d7e3-5fb6-4242-b616-2628ca519c8e-config-volume" (OuterVolumeSpecName: "config-volume") pod "64c6d7e3-5fb6-4242-b616-2628ca519c8e" (UID: "64c6d7e3-5fb6-4242-b616-2628ca519c8e"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 11:20:13 crc kubenswrapper[4867]: I0126 11:20:13.136898 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-697d97f7c8-skdxp" podStartSLOduration=142.136875049 podStartE2EDuration="2m22.136875049s" podCreationTimestamp="2026-01-26 11:17:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 11:20:13.135813581 +0000 UTC m=+162.834388491" watchObservedRunningTime="2026-01-26 11:20:13.136875049 +0000 UTC m=+162.835449959" Jan 26 11:20:13 crc kubenswrapper[4867]: I0126 11:20:13.138595 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/64c6d7e3-5fb6-4242-b616-2628ca519c8e-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "64c6d7e3-5fb6-4242-b616-2628ca519c8e" (UID: "64c6d7e3-5fb6-4242-b616-2628ca519c8e"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 11:20:13 crc kubenswrapper[4867]: I0126 11:20:13.161686 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/64c6d7e3-5fb6-4242-b616-2628ca519c8e-kube-api-access-p4bp2" (OuterVolumeSpecName: "kube-api-access-p4bp2") pod "64c6d7e3-5fb6-4242-b616-2628ca519c8e" (UID: "64c6d7e3-5fb6-4242-b616-2628ca519c8e"). InnerVolumeSpecName "kube-api-access-p4bp2". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 11:20:13 crc kubenswrapper[4867]: I0126 11:20:13.226770 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p4bp2\" (UniqueName: \"kubernetes.io/projected/64c6d7e3-5fb6-4242-b616-2628ca519c8e-kube-api-access-p4bp2\") on node \"crc\" DevicePath \"\"" Jan 26 11:20:13 crc kubenswrapper[4867]: I0126 11:20:13.226864 4867 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/64c6d7e3-5fb6-4242-b616-2628ca519c8e-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 26 11:20:13 crc kubenswrapper[4867]: I0126 11:20:13.226880 4867 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/64c6d7e3-5fb6-4242-b616-2628ca519c8e-config-volume\") on node \"crc\" DevicePath \"\"" Jan 26 11:20:13 crc kubenswrapper[4867]: I0126 11:20:13.393658 4867 patch_prober.go:28] interesting pod/router-default-5444994796-dmt7q container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 26 11:20:13 crc kubenswrapper[4867]: [-]has-synced failed: reason withheld Jan 26 11:20:13 crc kubenswrapper[4867]: [+]process-running ok Jan 26 11:20:13 crc kubenswrapper[4867]: healthz check failed Jan 26 11:20:13 crc kubenswrapper[4867]: I0126 11:20:13.393753 4867 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-dmt7q" podUID="ea07ea8f-1510-4609-949b-83a3aed3ddee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 26 11:20:14 crc kubenswrapper[4867]: I0126 11:20:14.138866 4867 generic.go:334] "Generic (PLEG): container finished" podID="4cb6dd76-e4ab-483b-9848-c0892427e67b" containerID="cef49f4b4226d5ffbf40dc6d754173011d1044c521bea76273ca98ac63f7087a" exitCode=0 Jan 26 11:20:14 crc kubenswrapper[4867]: I0126 
11:20:14.139280 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"4cb6dd76-e4ab-483b-9848-c0892427e67b","Type":"ContainerDied","Data":"cef49f4b4226d5ffbf40dc6d754173011d1044c521bea76273ca98ac63f7087a"} Jan 26 11:20:14 crc kubenswrapper[4867]: I0126 11:20:14.244747 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-apiserver/apiserver-76f77b778f-jvs97" Jan 26 11:20:14 crc kubenswrapper[4867]: I0126 11:20:14.249895 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-apiserver/apiserver-76f77b778f-jvs97" Jan 26 11:20:14 crc kubenswrapper[4867]: I0126 11:20:14.403359 4867 patch_prober.go:28] interesting pod/router-default-5444994796-dmt7q container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 26 11:20:14 crc kubenswrapper[4867]: [-]has-synced failed: reason withheld Jan 26 11:20:14 crc kubenswrapper[4867]: [+]process-running ok Jan 26 11:20:14 crc kubenswrapper[4867]: healthz check failed Jan 26 11:20:14 crc kubenswrapper[4867]: I0126 11:20:14.403443 4867 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-dmt7q" podUID="ea07ea8f-1510-4609-949b-83a3aed3ddee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 26 11:20:14 crc kubenswrapper[4867]: I0126 11:20:14.630917 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Jan 26 11:20:14 crc kubenswrapper[4867]: E0126 11:20:14.631145 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="64c6d7e3-5fb6-4242-b616-2628ca519c8e" containerName="collect-profiles" Jan 26 11:20:14 crc kubenswrapper[4867]: I0126 11:20:14.631160 4867 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="64c6d7e3-5fb6-4242-b616-2628ca519c8e" containerName="collect-profiles" Jan 26 11:20:14 crc kubenswrapper[4867]: I0126 11:20:14.631607 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="64c6d7e3-5fb6-4242-b616-2628ca519c8e" containerName="collect-profiles" Jan 26 11:20:14 crc kubenswrapper[4867]: I0126 11:20:14.632101 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 26 11:20:14 crc kubenswrapper[4867]: I0126 11:20:14.635586 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Jan 26 11:20:14 crc kubenswrapper[4867]: I0126 11:20:14.639896 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Jan 26 11:20:14 crc kubenswrapper[4867]: I0126 11:20:14.646382 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Jan 26 11:20:14 crc kubenswrapper[4867]: I0126 11:20:14.751779 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2883ae14-f693-4d0d-b58e-43672b2cbb11-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"2883ae14-f693-4d0d-b58e-43672b2cbb11\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 26 11:20:14 crc kubenswrapper[4867]: I0126 11:20:14.752033 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/2883ae14-f693-4d0d-b58e-43672b2cbb11-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"2883ae14-f693-4d0d-b58e-43672b2cbb11\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 26 11:20:14 crc kubenswrapper[4867]: I0126 11:20:14.824332 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-dns/dns-default-8rqgh" Jan 26 11:20:14 crc kubenswrapper[4867]: I0126 11:20:14.857922 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2883ae14-f693-4d0d-b58e-43672b2cbb11-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"2883ae14-f693-4d0d-b58e-43672b2cbb11\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 26 11:20:14 crc kubenswrapper[4867]: I0126 11:20:14.858025 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ed024510-edc6-4306-b54b-63facba64419-metrics-certs\") pod \"network-metrics-daemon-nmdmx\" (UID: \"ed024510-edc6-4306-b54b-63facba64419\") " pod="openshift-multus/network-metrics-daemon-nmdmx" Jan 26 11:20:14 crc kubenswrapper[4867]: I0126 11:20:14.858046 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/2883ae14-f693-4d0d-b58e-43672b2cbb11-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"2883ae14-f693-4d0d-b58e-43672b2cbb11\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 26 11:20:14 crc kubenswrapper[4867]: I0126 11:20:14.858415 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/2883ae14-f693-4d0d-b58e-43672b2cbb11-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"2883ae14-f693-4d0d-b58e-43672b2cbb11\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 26 11:20:14 crc kubenswrapper[4867]: I0126 11:20:14.867524 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ed024510-edc6-4306-b54b-63facba64419-metrics-certs\") pod \"network-metrics-daemon-nmdmx\" (UID: \"ed024510-edc6-4306-b54b-63facba64419\") " pod="openshift-multus/network-metrics-daemon-nmdmx" Jan 26 11:20:14 crc 
kubenswrapper[4867]: I0126 11:20:14.881489 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2883ae14-f693-4d0d-b58e-43672b2cbb11-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"2883ae14-f693-4d0d-b58e-43672b2cbb11\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 26 11:20:14 crc kubenswrapper[4867]: I0126 11:20:14.966427 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 26 11:20:14 crc kubenswrapper[4867]: I0126 11:20:14.980792 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-nmdmx" Jan 26 11:20:15 crc kubenswrapper[4867]: I0126 11:20:15.393494 4867 patch_prober.go:28] interesting pod/router-default-5444994796-dmt7q container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 26 11:20:15 crc kubenswrapper[4867]: [-]has-synced failed: reason withheld Jan 26 11:20:15 crc kubenswrapper[4867]: [+]process-running ok Jan 26 11:20:15 crc kubenswrapper[4867]: healthz check failed Jan 26 11:20:15 crc kubenswrapper[4867]: I0126 11:20:15.393997 4867 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-dmt7q" podUID="ea07ea8f-1510-4609-949b-83a3aed3ddee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 26 11:20:15 crc kubenswrapper[4867]: I0126 11:20:15.486047 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Jan 26 11:20:15 crc kubenswrapper[4867]: I0126 11:20:15.833289 4867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 26 11:20:15 crc kubenswrapper[4867]: I0126 11:20:15.837412 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-nmdmx"] Jan 26 11:20:15 crc kubenswrapper[4867]: W0126 11:20:15.981248 4867 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poded024510_edc6_4306_b54b_63facba64419.slice/crio-24a914a39522e292c0be59c4632b4b26ca99a01eeb0f76acec888a996297b29a WatchSource:0}: Error finding container 24a914a39522e292c0be59c4632b4b26ca99a01eeb0f76acec888a996297b29a: Status 404 returned error can't find the container with id 24a914a39522e292c0be59c4632b4b26ca99a01eeb0f76acec888a996297b29a Jan 26 11:20:15 crc kubenswrapper[4867]: I0126 11:20:15.982802 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/4cb6dd76-e4ab-483b-9848-c0892427e67b-kubelet-dir\") pod \"4cb6dd76-e4ab-483b-9848-c0892427e67b\" (UID: \"4cb6dd76-e4ab-483b-9848-c0892427e67b\") " Jan 26 11:20:15 crc kubenswrapper[4867]: I0126 11:20:15.982861 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4cb6dd76-e4ab-483b-9848-c0892427e67b-kube-api-access\") pod \"4cb6dd76-e4ab-483b-9848-c0892427e67b\" (UID: \"4cb6dd76-e4ab-483b-9848-c0892427e67b\") " Jan 26 11:20:15 crc kubenswrapper[4867]: I0126 11:20:15.986587 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4cb6dd76-e4ab-483b-9848-c0892427e67b-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "4cb6dd76-e4ab-483b-9848-c0892427e67b" (UID: "4cb6dd76-e4ab-483b-9848-c0892427e67b"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 11:20:15 crc kubenswrapper[4867]: I0126 11:20:15.994033 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4cb6dd76-e4ab-483b-9848-c0892427e67b-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "4cb6dd76-e4ab-483b-9848-c0892427e67b" (UID: "4cb6dd76-e4ab-483b-9848-c0892427e67b"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 11:20:16 crc kubenswrapper[4867]: I0126 11:20:16.084627 4867 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/4cb6dd76-e4ab-483b-9848-c0892427e67b-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 26 11:20:16 crc kubenswrapper[4867]: I0126 11:20:16.084997 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4cb6dd76-e4ab-483b-9848-c0892427e67b-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 26 11:20:16 crc kubenswrapper[4867]: I0126 11:20:16.176295 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-nmdmx" event={"ID":"ed024510-edc6-4306-b54b-63facba64419","Type":"ContainerStarted","Data":"24a914a39522e292c0be59c4632b4b26ca99a01eeb0f76acec888a996297b29a"} Jan 26 11:20:16 crc kubenswrapper[4867]: I0126 11:20:16.178997 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"2883ae14-f693-4d0d-b58e-43672b2cbb11","Type":"ContainerStarted","Data":"fe4efdcff01efb76f4a78173384dd69abb81991161d899932a27f2bc6c962778"} Jan 26 11:20:16 crc kubenswrapper[4867]: I0126 11:20:16.182090 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" 
event={"ID":"4cb6dd76-e4ab-483b-9848-c0892427e67b","Type":"ContainerDied","Data":"1472a271b6bf8277c7ddd9411a7b7beac8ace343673e5f45d67cb7fd3aac5c58"} Jan 26 11:20:16 crc kubenswrapper[4867]: I0126 11:20:16.182128 4867 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1472a271b6bf8277c7ddd9411a7b7beac8ace343673e5f45d67cb7fd3aac5c58" Jan 26 11:20:16 crc kubenswrapper[4867]: I0126 11:20:16.182195 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 26 11:20:16 crc kubenswrapper[4867]: I0126 11:20:16.403505 4867 patch_prober.go:28] interesting pod/router-default-5444994796-dmt7q container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 26 11:20:16 crc kubenswrapper[4867]: [-]has-synced failed: reason withheld Jan 26 11:20:16 crc kubenswrapper[4867]: [+]process-running ok Jan 26 11:20:16 crc kubenswrapper[4867]: healthz check failed Jan 26 11:20:16 crc kubenswrapper[4867]: I0126 11:20:16.403558 4867 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-dmt7q" podUID="ea07ea8f-1510-4609-949b-83a3aed3ddee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 26 11:20:17 crc kubenswrapper[4867]: I0126 11:20:17.393424 4867 patch_prober.go:28] interesting pod/router-default-5444994796-dmt7q container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 26 11:20:17 crc kubenswrapper[4867]: [-]has-synced failed: reason withheld Jan 26 11:20:17 crc kubenswrapper[4867]: [+]process-running ok Jan 26 11:20:17 crc kubenswrapper[4867]: healthz check failed Jan 26 11:20:17 crc kubenswrapper[4867]: I0126 11:20:17.393524 4867 
prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-dmt7q" podUID="ea07ea8f-1510-4609-949b-83a3aed3ddee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 26 11:20:18 crc kubenswrapper[4867]: I0126 11:20:18.238532 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"2883ae14-f693-4d0d-b58e-43672b2cbb11","Type":"ContainerStarted","Data":"077f9f3a0c2d885d4d32caccd3c0b7eee859cfef211f1549a27c4fadc32a7b07"} Jan 26 11:20:18 crc kubenswrapper[4867]: I0126 11:20:18.394245 4867 patch_prober.go:28] interesting pod/router-default-5444994796-dmt7q container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 26 11:20:18 crc kubenswrapper[4867]: [-]has-synced failed: reason withheld Jan 26 11:20:18 crc kubenswrapper[4867]: [+]process-running ok Jan 26 11:20:18 crc kubenswrapper[4867]: healthz check failed Jan 26 11:20:18 crc kubenswrapper[4867]: I0126 11:20:18.394303 4867 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-dmt7q" podUID="ea07ea8f-1510-4609-949b-83a3aed3ddee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 26 11:20:18 crc kubenswrapper[4867]: I0126 11:20:18.797963 4867 patch_prober.go:28] interesting pod/console-f9d7485db-dc94j container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.10:8443/health\": dial tcp 10.217.0.10:8443: connect: connection refused" start-of-body= Jan 26 11:20:18 crc kubenswrapper[4867]: I0126 11:20:18.798044 4867 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-dc94j" podUID="a721247b-3436-4bb4-bc5c-ab4e94db0b41" containerName="console" probeResult="failure" output="Get 
\"https://10.217.0.10:8443/health\": dial tcp 10.217.0.10:8443: connect: connection refused" Jan 26 11:20:19 crc kubenswrapper[4867]: I0126 11:20:19.258000 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-nmdmx" event={"ID":"ed024510-edc6-4306-b54b-63facba64419","Type":"ContainerStarted","Data":"fe72187ce4de49b1176ada7431dfec462ec3182df18dcd6c754e1c59740b39a4"} Jan 26 11:20:19 crc kubenswrapper[4867]: I0126 11:20:19.284698 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/revision-pruner-8-crc" podStartSLOduration=5.284678925 podStartE2EDuration="5.284678925s" podCreationTimestamp="2026-01-26 11:20:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 11:20:19.282199798 +0000 UTC m=+168.980774708" watchObservedRunningTime="2026-01-26 11:20:19.284678925 +0000 UTC m=+168.983253835" Jan 26 11:20:19 crc kubenswrapper[4867]: I0126 11:20:19.394794 4867 patch_prober.go:28] interesting pod/router-default-5444994796-dmt7q container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 26 11:20:19 crc kubenswrapper[4867]: [-]has-synced failed: reason withheld Jan 26 11:20:19 crc kubenswrapper[4867]: [+]process-running ok Jan 26 11:20:19 crc kubenswrapper[4867]: healthz check failed Jan 26 11:20:19 crc kubenswrapper[4867]: I0126 11:20:19.394870 4867 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-dmt7q" podUID="ea07ea8f-1510-4609-949b-83a3aed3ddee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 26 11:20:19 crc kubenswrapper[4867]: I0126 11:20:19.552647 4867 patch_prober.go:28] interesting pod/downloads-7954f5f757-ltvwb container/download-server 
namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused" start-of-body= Jan 26 11:20:19 crc kubenswrapper[4867]: I0126 11:20:19.552731 4867 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-ltvwb" podUID="6a27bc25-3df1-4dd2-a51d-de8e2bb5070e" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused" Jan 26 11:20:19 crc kubenswrapper[4867]: I0126 11:20:19.554514 4867 patch_prober.go:28] interesting pod/downloads-7954f5f757-ltvwb container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused" start-of-body= Jan 26 11:20:19 crc kubenswrapper[4867]: I0126 11:20:19.554590 4867 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-ltvwb" podUID="6a27bc25-3df1-4dd2-a51d-de8e2bb5070e" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused" Jan 26 11:20:20 crc kubenswrapper[4867]: I0126 11:20:20.267467 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-nmdmx" event={"ID":"ed024510-edc6-4306-b54b-63facba64419","Type":"ContainerStarted","Data":"0327fd21fd5af121315624556db0853360806710a69bcfbf3bd4693a112195d8"} Jan 26 11:20:20 crc kubenswrapper[4867]: I0126 11:20:20.271920 4867 generic.go:334] "Generic (PLEG): container finished" podID="2883ae14-f693-4d0d-b58e-43672b2cbb11" containerID="077f9f3a0c2d885d4d32caccd3c0b7eee859cfef211f1549a27c4fadc32a7b07" exitCode=0 Jan 26 11:20:20 crc kubenswrapper[4867]: I0126 11:20:20.271962 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" 
event={"ID":"2883ae14-f693-4d0d-b58e-43672b2cbb11","Type":"ContainerDied","Data":"077f9f3a0c2d885d4d32caccd3c0b7eee859cfef211f1549a27c4fadc32a7b07"} Jan 26 11:20:20 crc kubenswrapper[4867]: I0126 11:20:20.287489 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/network-metrics-daemon-nmdmx" podStartSLOduration=149.287467618 podStartE2EDuration="2m29.287467618s" podCreationTimestamp="2026-01-26 11:17:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 11:20:20.282996418 +0000 UTC m=+169.981571348" watchObservedRunningTime="2026-01-26 11:20:20.287467618 +0000 UTC m=+169.986042518" Jan 26 11:20:20 crc kubenswrapper[4867]: I0126 11:20:20.399684 4867 patch_prober.go:28] interesting pod/router-default-5444994796-dmt7q container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 26 11:20:20 crc kubenswrapper[4867]: [-]has-synced failed: reason withheld Jan 26 11:20:20 crc kubenswrapper[4867]: [+]process-running ok Jan 26 11:20:20 crc kubenswrapper[4867]: healthz check failed Jan 26 11:20:20 crc kubenswrapper[4867]: I0126 11:20:20.399941 4867 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-dmt7q" podUID="ea07ea8f-1510-4609-949b-83a3aed3ddee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 26 11:20:21 crc kubenswrapper[4867]: I0126 11:20:21.440906 4867 patch_prober.go:28] interesting pod/router-default-5444994796-dmt7q container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 26 11:20:21 crc kubenswrapper[4867]: [-]has-synced failed: reason withheld Jan 26 11:20:21 crc kubenswrapper[4867]: 
[+]process-running ok Jan 26 11:20:21 crc kubenswrapper[4867]: healthz check failed Jan 26 11:20:21 crc kubenswrapper[4867]: I0126 11:20:21.441025 4867 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-dmt7q" podUID="ea07ea8f-1510-4609-949b-83a3aed3ddee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 26 11:20:22 crc kubenswrapper[4867]: I0126 11:20:22.397014 4867 patch_prober.go:28] interesting pod/router-default-5444994796-dmt7q container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 26 11:20:22 crc kubenswrapper[4867]: [-]has-synced failed: reason withheld Jan 26 11:20:22 crc kubenswrapper[4867]: [+]process-running ok Jan 26 11:20:22 crc kubenswrapper[4867]: healthz check failed Jan 26 11:20:22 crc kubenswrapper[4867]: I0126 11:20:22.397732 4867 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-dmt7q" podUID="ea07ea8f-1510-4609-949b-83a3aed3ddee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 26 11:20:23 crc kubenswrapper[4867]: I0126 11:20:23.409870 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-ingress/router-default-5444994796-dmt7q" Jan 26 11:20:23 crc kubenswrapper[4867]: I0126 11:20:23.412948 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ingress/router-default-5444994796-dmt7q" Jan 26 11:20:28 crc kubenswrapper[4867]: I0126 11:20:28.784491 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-f9d7485db-dc94j" Jan 26 11:20:28 crc kubenswrapper[4867]: I0126 11:20:28.796371 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-f9d7485db-dc94j" Jan 26 11:20:29 
crc kubenswrapper[4867]: I0126 11:20:29.568570 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/downloads-7954f5f757-ltvwb" Jan 26 11:20:31 crc kubenswrapper[4867]: I0126 11:20:31.321382 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-697d97f7c8-skdxp" Jan 26 11:20:32 crc kubenswrapper[4867]: I0126 11:20:32.057492 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 26 11:20:32 crc kubenswrapper[4867]: I0126 11:20:32.207082 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2883ae14-f693-4d0d-b58e-43672b2cbb11-kube-api-access\") pod \"2883ae14-f693-4d0d-b58e-43672b2cbb11\" (UID: \"2883ae14-f693-4d0d-b58e-43672b2cbb11\") " Jan 26 11:20:32 crc kubenswrapper[4867]: I0126 11:20:32.207645 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/2883ae14-f693-4d0d-b58e-43672b2cbb11-kubelet-dir\") pod \"2883ae14-f693-4d0d-b58e-43672b2cbb11\" (UID: \"2883ae14-f693-4d0d-b58e-43672b2cbb11\") " Jan 26 11:20:32 crc kubenswrapper[4867]: I0126 11:20:32.207728 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2883ae14-f693-4d0d-b58e-43672b2cbb11-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "2883ae14-f693-4d0d-b58e-43672b2cbb11" (UID: "2883ae14-f693-4d0d-b58e-43672b2cbb11"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 11:20:32 crc kubenswrapper[4867]: I0126 11:20:32.207962 4867 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/2883ae14-f693-4d0d-b58e-43672b2cbb11-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 26 11:20:32 crc kubenswrapper[4867]: I0126 11:20:32.215502 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2883ae14-f693-4d0d-b58e-43672b2cbb11-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "2883ae14-f693-4d0d-b58e-43672b2cbb11" (UID: "2883ae14-f693-4d0d-b58e-43672b2cbb11"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 11:20:32 crc kubenswrapper[4867]: I0126 11:20:32.309529 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2883ae14-f693-4d0d-b58e-43672b2cbb11-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 26 11:20:32 crc kubenswrapper[4867]: I0126 11:20:32.401474 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"2883ae14-f693-4d0d-b58e-43672b2cbb11","Type":"ContainerDied","Data":"fe4efdcff01efb76f4a78173384dd69abb81991161d899932a27f2bc6c962778"} Jan 26 11:20:32 crc kubenswrapper[4867]: I0126 11:20:32.401537 4867 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fe4efdcff01efb76f4a78173384dd69abb81991161d899932a27f2bc6c962778" Jan 26 11:20:32 crc kubenswrapper[4867]: I0126 11:20:32.401539 4867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 26 11:20:36 crc kubenswrapper[4867]: I0126 11:20:36.293990 4867 patch_prober.go:28] interesting pod/machine-config-daemon-g6cth container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 11:20:36 crc kubenswrapper[4867]: I0126 11:20:36.294390 4867 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-g6cth" podUID="115cad9f-057f-4e63-b408-8fa7a358a191" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 11:20:38 crc kubenswrapper[4867]: I0126 11:20:38.614984 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 11:20:39 crc kubenswrapper[4867]: I0126 11:20:39.786286 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-bs62h" Jan 26 11:20:40 crc kubenswrapper[4867]: I0126 11:20:40.194546 4867 patch_prober.go:28] interesting pod/authentication-operator-69f744f599-bmqm4 container/authentication-operator namespace/openshift-authentication-operator: Liveness probe status=failure output="Get \"https://10.217.0.27:8443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 26 11:20:40 crc kubenswrapper[4867]: I0126 11:20:40.194634 4867 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-authentication-operator/authentication-operator-69f744f599-bmqm4" podUID="50f0dbec-6ed9-47e1-8b7a-4e4a2e1475b4" containerName="authentication-operator" probeResult="failure" output="Get \"https://10.217.0.27:8443/healthz\": 
net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 26 11:20:51 crc kubenswrapper[4867]: I0126 11:20:51.174448 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Jan 26 11:20:51 crc kubenswrapper[4867]: E0126 11:20:51.175636 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4cb6dd76-e4ab-483b-9848-c0892427e67b" containerName="pruner" Jan 26 11:20:51 crc kubenswrapper[4867]: I0126 11:20:51.175660 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="4cb6dd76-e4ab-483b-9848-c0892427e67b" containerName="pruner" Jan 26 11:20:51 crc kubenswrapper[4867]: E0126 11:20:51.175694 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2883ae14-f693-4d0d-b58e-43672b2cbb11" containerName="pruner" Jan 26 11:20:51 crc kubenswrapper[4867]: I0126 11:20:51.175707 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="2883ae14-f693-4d0d-b58e-43672b2cbb11" containerName="pruner" Jan 26 11:20:51 crc kubenswrapper[4867]: I0126 11:20:51.175898 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="4cb6dd76-e4ab-483b-9848-c0892427e67b" containerName="pruner" Jan 26 11:20:51 crc kubenswrapper[4867]: I0126 11:20:51.175923 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="2883ae14-f693-4d0d-b58e-43672b2cbb11" containerName="pruner" Jan 26 11:20:51 crc kubenswrapper[4867]: I0126 11:20:51.184464 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 26 11:20:51 crc kubenswrapper[4867]: I0126 11:20:51.184460 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Jan 26 11:20:51 crc kubenswrapper[4867]: I0126 11:20:51.188605 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Jan 26 11:20:51 crc kubenswrapper[4867]: I0126 11:20:51.189947 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Jan 26 11:20:51 crc kubenswrapper[4867]: I0126 11:20:51.285380 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/dfb78768-cf6d-4e99-9679-723a806f5952-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"dfb78768-cf6d-4e99-9679-723a806f5952\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 26 11:20:51 crc kubenswrapper[4867]: I0126 11:20:51.286081 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/dfb78768-cf6d-4e99-9679-723a806f5952-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"dfb78768-cf6d-4e99-9679-723a806f5952\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 26 11:20:51 crc kubenswrapper[4867]: I0126 11:20:51.388470 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/dfb78768-cf6d-4e99-9679-723a806f5952-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"dfb78768-cf6d-4e99-9679-723a806f5952\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 26 11:20:51 crc kubenswrapper[4867]: I0126 11:20:51.388587 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: 
\"kubernetes.io/projected/dfb78768-cf6d-4e99-9679-723a806f5952-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"dfb78768-cf6d-4e99-9679-723a806f5952\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 26 11:20:51 crc kubenswrapper[4867]: I0126 11:20:51.388628 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/dfb78768-cf6d-4e99-9679-723a806f5952-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"dfb78768-cf6d-4e99-9679-723a806f5952\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 26 11:20:51 crc kubenswrapper[4867]: I0126 11:20:51.411766 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/dfb78768-cf6d-4e99-9679-723a806f5952-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"dfb78768-cf6d-4e99-9679-723a806f5952\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 26 11:20:51 crc kubenswrapper[4867]: I0126 11:20:51.520764 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 26 11:20:56 crc kubenswrapper[4867]: I0126 11:20:56.561518 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Jan 26 11:20:56 crc kubenswrapper[4867]: I0126 11:20:56.562908 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Jan 26 11:20:56 crc kubenswrapper[4867]: I0126 11:20:56.574999 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Jan 26 11:20:56 crc kubenswrapper[4867]: I0126 11:20:56.658575 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a8dbe61a-4c85-4fa5-beb6-a84433a2d2ac-kube-api-access\") pod \"installer-9-crc\" (UID: \"a8dbe61a-4c85-4fa5-beb6-a84433a2d2ac\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 26 11:20:56 crc kubenswrapper[4867]: I0126 11:20:56.658659 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/a8dbe61a-4c85-4fa5-beb6-a84433a2d2ac-kubelet-dir\") pod \"installer-9-crc\" (UID: \"a8dbe61a-4c85-4fa5-beb6-a84433a2d2ac\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 26 11:20:56 crc kubenswrapper[4867]: I0126 11:20:56.658726 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/a8dbe61a-4c85-4fa5-beb6-a84433a2d2ac-var-lock\") pod \"installer-9-crc\" (UID: \"a8dbe61a-4c85-4fa5-beb6-a84433a2d2ac\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 26 11:20:56 crc kubenswrapper[4867]: I0126 11:20:56.760603 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/a8dbe61a-4c85-4fa5-beb6-a84433a2d2ac-kubelet-dir\") pod \"installer-9-crc\" (UID: \"a8dbe61a-4c85-4fa5-beb6-a84433a2d2ac\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 26 11:20:56 crc kubenswrapper[4867]: I0126 11:20:56.760703 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: 
\"kubernetes.io/host-path/a8dbe61a-4c85-4fa5-beb6-a84433a2d2ac-var-lock\") pod \"installer-9-crc\" (UID: \"a8dbe61a-4c85-4fa5-beb6-a84433a2d2ac\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 26 11:20:56 crc kubenswrapper[4867]: I0126 11:20:56.760748 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a8dbe61a-4c85-4fa5-beb6-a84433a2d2ac-kube-api-access\") pod \"installer-9-crc\" (UID: \"a8dbe61a-4c85-4fa5-beb6-a84433a2d2ac\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 26 11:20:56 crc kubenswrapper[4867]: I0126 11:20:56.760815 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/a8dbe61a-4c85-4fa5-beb6-a84433a2d2ac-kubelet-dir\") pod \"installer-9-crc\" (UID: \"a8dbe61a-4c85-4fa5-beb6-a84433a2d2ac\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 26 11:20:56 crc kubenswrapper[4867]: I0126 11:20:56.760850 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/a8dbe61a-4c85-4fa5-beb6-a84433a2d2ac-var-lock\") pod \"installer-9-crc\" (UID: \"a8dbe61a-4c85-4fa5-beb6-a84433a2d2ac\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 26 11:20:56 crc kubenswrapper[4867]: I0126 11:20:56.781113 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a8dbe61a-4c85-4fa5-beb6-a84433a2d2ac-kube-api-access\") pod \"installer-9-crc\" (UID: \"a8dbe61a-4c85-4fa5-beb6-a84433a2d2ac\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 26 11:20:56 crc kubenswrapper[4867]: I0126 11:20:56.940730 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Jan 26 11:20:59 crc kubenswrapper[4867]: E0126 11:20:59.076795 4867 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Jan 26 11:20:59 crc kubenswrapper[4867]: E0126 11:20:59.077084 4867 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-t6w66,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]Containe
rResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-nstb5_openshift-marketplace(bf513a52-cfc2-49df-be04-4976f7399901): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 26 11:20:59 crc kubenswrapper[4867]: E0126 11:20:59.078473 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-nstb5" podUID="bf513a52-cfc2-49df-be04-4976f7399901" Jan 26 11:20:59 crc kubenswrapper[4867]: E0126 11:20:59.672256 4867 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Jan 26 11:20:59 crc kubenswrapper[4867]: E0126 11:20:59.672479 4867 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-z7w4q,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-84kcf_openshift-marketplace(7428579f-3d9c-4910-9e5c-b6694944afce): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 26 11:20:59 crc kubenswrapper[4867]: E0126 11:20:59.674147 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-84kcf" podUID="7428579f-3d9c-4910-9e5c-b6694944afce" Jan 26 11:21:01 crc 
kubenswrapper[4867]: E0126 11:21:01.491123 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-84kcf" podUID="7428579f-3d9c-4910-9e5c-b6694944afce" Jan 26 11:21:01 crc kubenswrapper[4867]: E0126 11:21:01.491135 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-nstb5" podUID="bf513a52-cfc2-49df-be04-4976f7399901" Jan 26 11:21:01 crc kubenswrapper[4867]: E0126 11:21:01.556045 4867 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18" Jan 26 11:21:01 crc kubenswrapper[4867]: E0126 11:21:01.556332 4867 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-c8lfb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-s8hx9_openshift-marketplace(adb6bffd-3a41-480b-85df-1f3489ce7007): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 26 11:21:01 crc kubenswrapper[4867]: E0126 11:21:01.557584 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-s8hx9" podUID="adb6bffd-3a41-480b-85df-1f3489ce7007" Jan 26 11:21:02 crc 
kubenswrapper[4867]: E0126 11:21:02.970773 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-s8hx9" podUID="adb6bffd-3a41-480b-85df-1f3489ce7007" Jan 26 11:21:03 crc kubenswrapper[4867]: E0126 11:21:03.051764 4867 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Jan 26 11:21:03 crc kubenswrapper[4867]: E0126 11:21:03.051986 4867 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rjghk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-4lhrp_openshift-marketplace(8e8b11fb-b146-4307-b94e-515815b10c58): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 26 11:21:03 crc kubenswrapper[4867]: E0126 11:21:03.053207 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-4lhrp" podUID="8e8b11fb-b146-4307-b94e-515815b10c58" Jan 26 11:21:03 crc 
kubenswrapper[4867]: E0126 11:21:03.136536 4867 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18" Jan 26 11:21:03 crc kubenswrapper[4867]: E0126 11:21:03.136786 4867 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-k4699,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
community-operators-gcljn_openshift-marketplace(aeb3191f-7e7a-4d94-b913-4f78b379f3e9): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 26 11:21:03 crc kubenswrapper[4867]: E0126 11:21:03.138197 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-gcljn" podUID="aeb3191f-7e7a-4d94-b913-4f78b379f3e9" Jan 26 11:21:06 crc kubenswrapper[4867]: I0126 11:21:06.307625 4867 patch_prober.go:28] interesting pod/machine-config-daemon-g6cth container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 11:21:06 crc kubenswrapper[4867]: I0126 11:21:06.309006 4867 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-g6cth" podUID="115cad9f-057f-4e63-b408-8fa7a358a191" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 11:21:06 crc kubenswrapper[4867]: I0126 11:21:06.309091 4867 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-g6cth" Jan 26 11:21:06 crc kubenswrapper[4867]: I0126 11:21:06.309934 4867 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"a97a794991491a7b7ac8c0d20d0fa2f4f36c8ba882962b5c203933431d324105"} pod="openshift-machine-config-operator/machine-config-daemon-g6cth" containerMessage="Container machine-config-daemon failed 
liveness probe, will be restarted" Jan 26 11:21:06 crc kubenswrapper[4867]: I0126 11:21:06.310109 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-g6cth" podUID="115cad9f-057f-4e63-b408-8fa7a358a191" containerName="machine-config-daemon" containerID="cri-o://a97a794991491a7b7ac8c0d20d0fa2f4f36c8ba882962b5c203933431d324105" gracePeriod=600 Jan 26 11:21:06 crc kubenswrapper[4867]: I0126 11:21:06.639979 4867 generic.go:334] "Generic (PLEG): container finished" podID="115cad9f-057f-4e63-b408-8fa7a358a191" containerID="a97a794991491a7b7ac8c0d20d0fa2f4f36c8ba882962b5c203933431d324105" exitCode=0 Jan 26 11:21:06 crc kubenswrapper[4867]: I0126 11:21:06.640089 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-g6cth" event={"ID":"115cad9f-057f-4e63-b408-8fa7a358a191","Type":"ContainerDied","Data":"a97a794991491a7b7ac8c0d20d0fa2f4f36c8ba882962b5c203933431d324105"} Jan 26 11:21:06 crc kubenswrapper[4867]: E0126 11:21:06.648380 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-gcljn" podUID="aeb3191f-7e7a-4d94-b913-4f78b379f3e9" Jan 26 11:21:06 crc kubenswrapper[4867]: E0126 11:21:06.648421 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-4lhrp" podUID="8e8b11fb-b146-4307-b94e-515815b10c58" Jan 26 11:21:06 crc kubenswrapper[4867]: E0126 11:21:06.741551 4867 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest 
list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Jan 26 11:21:06 crc kubenswrapper[4867]: E0126 11:21:06.741824 4867 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-9b92w,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-mbhb4_openshift-marketplace(4c3ed719-d8a0-4f47-b0f1-9e635825152a): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" 
logger="UnhandledError" Jan 26 11:21:06 crc kubenswrapper[4867]: E0126 11:21:06.744360 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-mbhb4" podUID="4c3ed719-d8a0-4f47-b0f1-9e635825152a" Jan 26 11:21:06 crc kubenswrapper[4867]: E0126 11:21:06.749350 4867 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Jan 26 11:21:06 crc kubenswrapper[4867]: E0126 11:21:06.749558 4867 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2ktvd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-ndd6w_openshift-marketplace(19cef52a-3ef4-4b1e-a52e-0ba6e01e49b5): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 26 11:21:06 crc kubenswrapper[4867]: E0126 11:21:06.750874 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-ndd6w" podUID="19cef52a-3ef4-4b1e-a52e-0ba6e01e49b5" Jan 26 11:21:06 crc 
kubenswrapper[4867]: E0126 11:21:06.790001 4867 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Jan 26 11:21:06 crc kubenswrapper[4867]: E0126 11:21:06.791050 4867 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-tt67p,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
redhat-operators-h6sjf_openshift-marketplace(331bacc3-9595-492a-9e20-ef8007ccc10a): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 26 11:21:06 crc kubenswrapper[4867]: E0126 11:21:06.792265 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-h6sjf" podUID="331bacc3-9595-492a-9e20-ef8007ccc10a" Jan 26 11:21:06 crc kubenswrapper[4867]: I0126 11:21:06.969899 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Jan 26 11:21:07 crc kubenswrapper[4867]: I0126 11:21:07.123136 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Jan 26 11:21:07 crc kubenswrapper[4867]: I0126 11:21:07.652425 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"a8dbe61a-4c85-4fa5-beb6-a84433a2d2ac","Type":"ContainerStarted","Data":"446f351a8e236a33f0a1a9804911377dbb9a56e6b7164a054f7bf5623b0d1c51"} Jan 26 11:21:07 crc kubenswrapper[4867]: I0126 11:21:07.658094 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-g6cth" event={"ID":"115cad9f-057f-4e63-b408-8fa7a358a191","Type":"ContainerStarted","Data":"f6f4649a5a6ff9f987b90727334fbb91d637d6ff3f79120bcd4b01a76eef1fb9"} Jan 26 11:21:07 crc kubenswrapper[4867]: I0126 11:21:07.660855 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"dfb78768-cf6d-4e99-9679-723a806f5952","Type":"ContainerStarted","Data":"97a87b02d8eb3f58f3e7303194517c2b81bee79d8c3aad7e8c96bcebcc9d013f"} Jan 26 11:21:07 crc kubenswrapper[4867]: I0126 
11:21:07.660928 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"dfb78768-cf6d-4e99-9679-723a806f5952","Type":"ContainerStarted","Data":"97b07c8643319f55dc4f1a45f0d45dd3f6b8bb8d8281650c8d7d51d2ee3b0b04"} Jan 26 11:21:07 crc kubenswrapper[4867]: E0126 11:21:07.664095 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-ndd6w" podUID="19cef52a-3ef4-4b1e-a52e-0ba6e01e49b5" Jan 26 11:21:07 crc kubenswrapper[4867]: E0126 11:21:07.664153 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-h6sjf" podUID="331bacc3-9595-492a-9e20-ef8007ccc10a" Jan 26 11:21:07 crc kubenswrapper[4867]: E0126 11:21:07.664281 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-mbhb4" podUID="4c3ed719-d8a0-4f47-b0f1-9e635825152a" Jan 26 11:21:08 crc kubenswrapper[4867]: I0126 11:21:08.668011 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"a8dbe61a-4c85-4fa5-beb6-a84433a2d2ac","Type":"ContainerStarted","Data":"a3a2f21392ea88de6133444f0fcded2ca9c9f38aa4e2a234d938cf08b053ee23"} Jan 26 11:21:08 crc kubenswrapper[4867]: I0126 11:21:08.670442 4867 generic.go:334] "Generic (PLEG): container finished" podID="dfb78768-cf6d-4e99-9679-723a806f5952" 
containerID="97a87b02d8eb3f58f3e7303194517c2b81bee79d8c3aad7e8c96bcebcc9d013f" exitCode=0 Jan 26 11:21:08 crc kubenswrapper[4867]: I0126 11:21:08.670533 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"dfb78768-cf6d-4e99-9679-723a806f5952","Type":"ContainerDied","Data":"97a87b02d8eb3f58f3e7303194517c2b81bee79d8c3aad7e8c96bcebcc9d013f"} Jan 26 11:21:08 crc kubenswrapper[4867]: I0126 11:21:08.687931 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-9-crc" podStartSLOduration=12.687901325 podStartE2EDuration="12.687901325s" podCreationTimestamp="2026-01-26 11:20:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 11:21:08.685682755 +0000 UTC m=+218.384257665" watchObservedRunningTime="2026-01-26 11:21:08.687901325 +0000 UTC m=+218.386476235" Jan 26 11:21:09 crc kubenswrapper[4867]: I0126 11:21:09.948498 4867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 26 11:21:10 crc kubenswrapper[4867]: I0126 11:21:10.120430 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/dfb78768-cf6d-4e99-9679-723a806f5952-kube-api-access\") pod \"dfb78768-cf6d-4e99-9679-723a806f5952\" (UID: \"dfb78768-cf6d-4e99-9679-723a806f5952\") " Jan 26 11:21:10 crc kubenswrapper[4867]: I0126 11:21:10.120487 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/dfb78768-cf6d-4e99-9679-723a806f5952-kubelet-dir\") pod \"dfb78768-cf6d-4e99-9679-723a806f5952\" (UID: \"dfb78768-cf6d-4e99-9679-723a806f5952\") " Jan 26 11:21:10 crc kubenswrapper[4867]: I0126 11:21:10.120826 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/dfb78768-cf6d-4e99-9679-723a806f5952-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "dfb78768-cf6d-4e99-9679-723a806f5952" (UID: "dfb78768-cf6d-4e99-9679-723a806f5952"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 11:21:10 crc kubenswrapper[4867]: I0126 11:21:10.126611 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dfb78768-cf6d-4e99-9679-723a806f5952-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "dfb78768-cf6d-4e99-9679-723a806f5952" (UID: "dfb78768-cf6d-4e99-9679-723a806f5952"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 11:21:10 crc kubenswrapper[4867]: I0126 11:21:10.222200 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/dfb78768-cf6d-4e99-9679-723a806f5952-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 26 11:21:10 crc kubenswrapper[4867]: I0126 11:21:10.222253 4867 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/dfb78768-cf6d-4e99-9679-723a806f5952-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 26 11:21:10 crc kubenswrapper[4867]: I0126 11:21:10.684499 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"dfb78768-cf6d-4e99-9679-723a806f5952","Type":"ContainerDied","Data":"97b07c8643319f55dc4f1a45f0d45dd3f6b8bb8d8281650c8d7d51d2ee3b0b04"} Jan 26 11:21:10 crc kubenswrapper[4867]: I0126 11:21:10.684568 4867 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="97b07c8643319f55dc4f1a45f0d45dd3f6b8bb8d8281650c8d7d51d2ee3b0b04" Jan 26 11:21:10 crc kubenswrapper[4867]: I0126 11:21:10.684586 4867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 26 11:21:18 crc kubenswrapper[4867]: I0126 11:21:18.747331 4867 generic.go:334] "Generic (PLEG): container finished" podID="7428579f-3d9c-4910-9e5c-b6694944afce" containerID="a44eb3a42e2ee237a25ff65a4dc81c473ecb90d00c7f6113387c81bb7bc3508e" exitCode=0 Jan 26 11:21:18 crc kubenswrapper[4867]: I0126 11:21:18.747472 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-84kcf" event={"ID":"7428579f-3d9c-4910-9e5c-b6694944afce","Type":"ContainerDied","Data":"a44eb3a42e2ee237a25ff65a4dc81c473ecb90d00c7f6113387c81bb7bc3508e"} Jan 26 11:21:18 crc kubenswrapper[4867]: I0126 11:21:18.750357 4867 generic.go:334] "Generic (PLEG): container finished" podID="bf513a52-cfc2-49df-be04-4976f7399901" containerID="90eb899c89f78bb7c50dd42a0e756ee645c8199eef51014683883117569bb516" exitCode=0 Jan 26 11:21:18 crc kubenswrapper[4867]: I0126 11:21:18.750445 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nstb5" event={"ID":"bf513a52-cfc2-49df-be04-4976f7399901","Type":"ContainerDied","Data":"90eb899c89f78bb7c50dd42a0e756ee645c8199eef51014683883117569bb516"} Jan 26 11:21:18 crc kubenswrapper[4867]: I0126 11:21:18.753962 4867 generic.go:334] "Generic (PLEG): container finished" podID="adb6bffd-3a41-480b-85df-1f3489ce7007" containerID="d47b1bea49215ec40fa8ef7c6742c36a27120ae7cf53751c9b0853f4958d2d87" exitCode=0 Jan 26 11:21:18 crc kubenswrapper[4867]: I0126 11:21:18.753995 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-s8hx9" event={"ID":"adb6bffd-3a41-480b-85df-1f3489ce7007","Type":"ContainerDied","Data":"d47b1bea49215ec40fa8ef7c6742c36a27120ae7cf53751c9b0853f4958d2d87"} Jan 26 11:21:21 crc kubenswrapper[4867]: I0126 11:21:21.772820 4867 generic.go:334] "Generic (PLEG): container finished" podID="aeb3191f-7e7a-4d94-b913-4f78b379f3e9" 
containerID="57ccb05b9c7e61863eec14d09ce87af9b7cf78d22d37fad9abeb94770461b9a4" exitCode=0 Jan 26 11:21:21 crc kubenswrapper[4867]: I0126 11:21:21.772937 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gcljn" event={"ID":"aeb3191f-7e7a-4d94-b913-4f78b379f3e9","Type":"ContainerDied","Data":"57ccb05b9c7e61863eec14d09ce87af9b7cf78d22d37fad9abeb94770461b9a4"} Jan 26 11:21:21 crc kubenswrapper[4867]: I0126 11:21:21.776860 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-s8hx9" event={"ID":"adb6bffd-3a41-480b-85df-1f3489ce7007","Type":"ContainerStarted","Data":"bd843faa60796283e3e245a3a24193ef4841164381821979fa9ae2ba338d103e"} Jan 26 11:21:21 crc kubenswrapper[4867]: I0126 11:21:21.780911 4867 generic.go:334] "Generic (PLEG): container finished" podID="4c3ed719-d8a0-4f47-b0f1-9e635825152a" containerID="6e2177ccc45b2042cf0ed5e2e6509ece78f6670abc94782f5a4a9dbd6ae706de" exitCode=0 Jan 26 11:21:21 crc kubenswrapper[4867]: I0126 11:21:21.780987 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mbhb4" event={"ID":"4c3ed719-d8a0-4f47-b0f1-9e635825152a","Type":"ContainerDied","Data":"6e2177ccc45b2042cf0ed5e2e6509ece78f6670abc94782f5a4a9dbd6ae706de"} Jan 26 11:21:21 crc kubenswrapper[4867]: I0126 11:21:21.783756 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-84kcf" event={"ID":"7428579f-3d9c-4910-9e5c-b6694944afce","Type":"ContainerStarted","Data":"5c90c50380de96d77a5b4d4a4f3950044f08fc5c7f12e1de7dc477d61cf0f38a"} Jan 26 11:21:21 crc kubenswrapper[4867]: I0126 11:21:21.786567 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nstb5" event={"ID":"bf513a52-cfc2-49df-be04-4976f7399901","Type":"ContainerStarted","Data":"47e5557aa3f7bb7109b9cc127b56f2340aa6ef931e3058f9cba508804408e85f"} Jan 26 11:21:21 crc kubenswrapper[4867]: 
I0126 11:21:21.859536 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-s8hx9" podStartSLOduration=5.356952094 podStartE2EDuration="1m14.859504857s" podCreationTimestamp="2026-01-26 11:20:07 +0000 UTC" firstStartedPulling="2026-01-26 11:20:10.945807156 +0000 UTC m=+160.644382066" lastFinishedPulling="2026-01-26 11:21:20.448359919 +0000 UTC m=+230.146934829" observedRunningTime="2026-01-26 11:21:21.85811283 +0000 UTC m=+231.556687740" watchObservedRunningTime="2026-01-26 11:21:21.859504857 +0000 UTC m=+231.558079767" Jan 26 11:21:21 crc kubenswrapper[4867]: I0126 11:21:21.894493 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-nstb5" podStartSLOduration=3.302098448 podStartE2EDuration="1m12.891054837s" podCreationTimestamp="2026-01-26 11:20:09 +0000 UTC" firstStartedPulling="2026-01-26 11:20:10.967818967 +0000 UTC m=+160.666393877" lastFinishedPulling="2026-01-26 11:21:20.556775366 +0000 UTC m=+230.255350266" observedRunningTime="2026-01-26 11:21:21.884330775 +0000 UTC m=+231.582905685" watchObservedRunningTime="2026-01-26 11:21:21.891054837 +0000 UTC m=+231.589629757" Jan 26 11:21:21 crc kubenswrapper[4867]: I0126 11:21:21.917918 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-84kcf" podStartSLOduration=3.43554191 podStartE2EDuration="1m12.91789038s" podCreationTimestamp="2026-01-26 11:20:09 +0000 UTC" firstStartedPulling="2026-01-26 11:20:10.963999315 +0000 UTC m=+160.662574225" lastFinishedPulling="2026-01-26 11:21:20.446347785 +0000 UTC m=+230.144922695" observedRunningTime="2026-01-26 11:21:21.913006556 +0000 UTC m=+231.611581476" watchObservedRunningTime="2026-01-26 11:21:21.91789038 +0000 UTC m=+231.616465290" Jan 26 11:21:28 crc kubenswrapper[4867]: I0126 11:21:28.257852 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-marketplace/community-operators-s8hx9" Jan 26 11:21:28 crc kubenswrapper[4867]: I0126 11:21:28.258752 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-s8hx9" Jan 26 11:21:28 crc kubenswrapper[4867]: I0126 11:21:28.418085 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-s8hx9" Jan 26 11:21:28 crc kubenswrapper[4867]: I0126 11:21:28.870775 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-s8hx9" Jan 26 11:21:29 crc kubenswrapper[4867]: I0126 11:21:29.546250 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-nstb5" Jan 26 11:21:29 crc kubenswrapper[4867]: I0126 11:21:29.546305 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-nstb5" Jan 26 11:21:29 crc kubenswrapper[4867]: I0126 11:21:29.586305 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-nstb5" Jan 26 11:21:29 crc kubenswrapper[4867]: I0126 11:21:29.794951 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-s8hx9"] Jan 26 11:21:29 crc kubenswrapper[4867]: I0126 11:21:29.875382 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-nstb5" Jan 26 11:21:29 crc kubenswrapper[4867]: I0126 11:21:29.942284 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-84kcf" Jan 26 11:21:29 crc kubenswrapper[4867]: I0126 11:21:29.942735 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-84kcf" Jan 26 11:21:30 crc kubenswrapper[4867]: I0126 11:21:30.113515 
4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-84kcf" Jan 26 11:21:30 crc kubenswrapper[4867]: I0126 11:21:30.840448 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-s8hx9" podUID="adb6bffd-3a41-480b-85df-1f3489ce7007" containerName="registry-server" containerID="cri-o://bd843faa60796283e3e245a3a24193ef4841164381821979fa9ae2ba338d103e" gracePeriod=2 Jan 26 11:21:30 crc kubenswrapper[4867]: I0126 11:21:30.897497 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-84kcf" Jan 26 11:21:31 crc kubenswrapper[4867]: I0126 11:21:31.995251 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-84kcf"] Jan 26 11:21:32 crc kubenswrapper[4867]: I0126 11:21:32.852596 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-84kcf" podUID="7428579f-3d9c-4910-9e5c-b6694944afce" containerName="registry-server" containerID="cri-o://5c90c50380de96d77a5b4d4a4f3950044f08fc5c7f12e1de7dc477d61cf0f38a" gracePeriod=2 Jan 26 11:21:33 crc kubenswrapper[4867]: I0126 11:21:33.862623 4867 generic.go:334] "Generic (PLEG): container finished" podID="adb6bffd-3a41-480b-85df-1f3489ce7007" containerID="bd843faa60796283e3e245a3a24193ef4841164381821979fa9ae2ba338d103e" exitCode=0 Jan 26 11:21:33 crc kubenswrapper[4867]: I0126 11:21:33.862710 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-s8hx9" event={"ID":"adb6bffd-3a41-480b-85df-1f3489ce7007","Type":"ContainerDied","Data":"bd843faa60796283e3e245a3a24193ef4841164381821979fa9ae2ba338d103e"} Jan 26 11:21:35 crc kubenswrapper[4867]: I0126 11:21:35.876932 4867 generic.go:334] "Generic (PLEG): container finished" podID="7428579f-3d9c-4910-9e5c-b6694944afce" 
containerID="5c90c50380de96d77a5b4d4a4f3950044f08fc5c7f12e1de7dc477d61cf0f38a" exitCode=0 Jan 26 11:21:35 crc kubenswrapper[4867]: I0126 11:21:35.877024 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-84kcf" event={"ID":"7428579f-3d9c-4910-9e5c-b6694944afce","Type":"ContainerDied","Data":"5c90c50380de96d77a5b4d4a4f3950044f08fc5c7f12e1de7dc477d61cf0f38a"} Jan 26 11:21:38 crc kubenswrapper[4867]: E0126 11:21:38.243934 4867 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of bd843faa60796283e3e245a3a24193ef4841164381821979fa9ae2ba338d103e is running failed: container process not found" containerID="bd843faa60796283e3e245a3a24193ef4841164381821979fa9ae2ba338d103e" cmd=["grpc_health_probe","-addr=:50051"] Jan 26 11:21:38 crc kubenswrapper[4867]: E0126 11:21:38.245197 4867 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of bd843faa60796283e3e245a3a24193ef4841164381821979fa9ae2ba338d103e is running failed: container process not found" containerID="bd843faa60796283e3e245a3a24193ef4841164381821979fa9ae2ba338d103e" cmd=["grpc_health_probe","-addr=:50051"] Jan 26 11:21:38 crc kubenswrapper[4867]: E0126 11:21:38.245536 4867 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of bd843faa60796283e3e245a3a24193ef4841164381821979fa9ae2ba338d103e is running failed: container process not found" containerID="bd843faa60796283e3e245a3a24193ef4841164381821979fa9ae2ba338d103e" cmd=["grpc_health_probe","-addr=:50051"] Jan 26 11:21:38 crc kubenswrapper[4867]: E0126 11:21:38.245565 4867 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 
bd843faa60796283e3e245a3a24193ef4841164381821979fa9ae2ba338d103e is running failed: container process not found" probeType="Readiness" pod="openshift-marketplace/community-operators-s8hx9" podUID="adb6bffd-3a41-480b-85df-1f3489ce7007" containerName="registry-server"
Jan 26 11:21:39 crc kubenswrapper[4867]: E0126 11:21:39.942593 4867 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 5c90c50380de96d77a5b4d4a4f3950044f08fc5c7f12e1de7dc477d61cf0f38a is running failed: container process not found" containerID="5c90c50380de96d77a5b4d4a4f3950044f08fc5c7f12e1de7dc477d61cf0f38a" cmd=["grpc_health_probe","-addr=:50051"]
Jan 26 11:21:39 crc kubenswrapper[4867]: E0126 11:21:39.943242 4867 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 5c90c50380de96d77a5b4d4a4f3950044f08fc5c7f12e1de7dc477d61cf0f38a is running failed: container process not found" containerID="5c90c50380de96d77a5b4d4a4f3950044f08fc5c7f12e1de7dc477d61cf0f38a" cmd=["grpc_health_probe","-addr=:50051"]
Jan 26 11:21:39 crc kubenswrapper[4867]: E0126 11:21:39.943756 4867 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 5c90c50380de96d77a5b4d4a4f3950044f08fc5c7f12e1de7dc477d61cf0f38a is running failed: container process not found" containerID="5c90c50380de96d77a5b4d4a4f3950044f08fc5c7f12e1de7dc477d61cf0f38a" cmd=["grpc_health_probe","-addr=:50051"]
Jan 26 11:21:39 crc kubenswrapper[4867]: E0126 11:21:39.943793 4867 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 5c90c50380de96d77a5b4d4a4f3950044f08fc5c7f12e1de7dc477d61cf0f38a is running failed: container process not found" probeType="Readiness" pod="openshift-marketplace/redhat-marketplace-84kcf" podUID="7428579f-3d9c-4910-9e5c-b6694944afce" containerName="registry-server"
Jan 26 11:21:45 crc kubenswrapper[4867]: I0126 11:21:45.549323 4867 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"]
Jan 26 11:21:45 crc kubenswrapper[4867]: E0126 11:21:45.549930 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dfb78768-cf6d-4e99-9679-723a806f5952" containerName="pruner"
Jan 26 11:21:45 crc kubenswrapper[4867]: I0126 11:21:45.549948 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="dfb78768-cf6d-4e99-9679-723a806f5952" containerName="pruner"
Jan 26 11:21:45 crc kubenswrapper[4867]: I0126 11:21:45.550073 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="dfb78768-cf6d-4e99-9679-723a806f5952" containerName="pruner"
Jan 26 11:21:45 crc kubenswrapper[4867]: I0126 11:21:45.550581 4867 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"]
Jan 26 11:21:45 crc kubenswrapper[4867]: I0126 11:21:45.550880 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" containerID="cri-o://60d611d38d0a8a4fafcb8c6a4598878531015a3c9a31bee179693c997be0f5ec" gracePeriod=15
Jan 26 11:21:45 crc kubenswrapper[4867]: I0126 11:21:45.550973 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" containerID="cri-o://0d41fab03911c1d3b84f4c34da0d1c8e82c8f2691450b5f9663e31ce329bf04b" gracePeriod=15
Jan 26 11:21:45 crc kubenswrapper[4867]: I0126 11:21:45.551013 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" containerID="cri-o://08f8529b91df2d7dcbf1fac6018657124b3922dfd4ad68bd7cf315594d6e7f81" gracePeriod=15
Jan 26 11:21:45 crc kubenswrapper[4867]: I0126 11:21:45.551044 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" containerID="cri-o://f8e9df6d5bbbaa36f6ffe9e36239da17b2a7d5d85edebc9f108ede9bf7b7c0a2" gracePeriod=15
Jan 26 11:21:45 crc kubenswrapper[4867]: I0126 11:21:45.551101 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" containerID="cri-o://e7960050fb97fd034b58d207bdcc7aa794be098102ebc9b50f3b011c2079a292" gracePeriod=15
Jan 26 11:21:45 crc kubenswrapper[4867]: I0126 11:21:45.550988 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Jan 26 11:21:45 crc kubenswrapper[4867]: I0126 11:21:45.551654 4867 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"]
Jan 26 11:21:45 crc kubenswrapper[4867]: E0126 11:21:45.552007 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup"
Jan 26 11:21:45 crc kubenswrapper[4867]: I0126 11:21:45.552019 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup"
Jan 26 11:21:45 crc kubenswrapper[4867]: E0126 11:21:45.552030 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver"
Jan 26 11:21:45 crc kubenswrapper[4867]: I0126 11:21:45.552038 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver"
Jan 26 11:21:45 crc kubenswrapper[4867]: E0126 11:21:45.552049 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz"
Jan 26 11:21:45 crc kubenswrapper[4867]: I0126 11:21:45.552056 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz"
Jan 26 11:21:45 crc kubenswrapper[4867]: E0126 11:21:45.552068 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints"
Jan 26 11:21:45 crc kubenswrapper[4867]: I0126 11:21:45.552075 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints"
Jan 26 11:21:45 crc kubenswrapper[4867]: E0126 11:21:45.552089 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints"
Jan 26 11:21:45 crc kubenswrapper[4867]: I0126 11:21:45.552095 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints"
Jan 26 11:21:45 crc kubenswrapper[4867]: E0126 11:21:45.552108 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer"
Jan 26 11:21:45 crc kubenswrapper[4867]: I0126 11:21:45.552114 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer"
Jan 26 11:21:45 crc kubenswrapper[4867]: E0126 11:21:45.552123 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller"
Jan 26 11:21:45 crc kubenswrapper[4867]: I0126 11:21:45.552130 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller"
Jan 26 11:21:45 crc kubenswrapper[4867]: I0126 11:21:45.552271 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints"
Jan 26 11:21:45 crc kubenswrapper[4867]: I0126 11:21:45.552289 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller"
Jan 26 11:21:45 crc kubenswrapper[4867]: I0126 11:21:45.552302 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer"
Jan 26 11:21:45 crc kubenswrapper[4867]: I0126 11:21:45.552311 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver"
Jan 26 11:21:45 crc kubenswrapper[4867]: I0126 11:21:45.552320 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz"
Jan 26 11:21:45 crc kubenswrapper[4867]: I0126 11:21:45.552540 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints"
Jan 26 11:21:45 crc kubenswrapper[4867]: I0126 11:21:45.596678 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 26 11:21:45 crc kubenswrapper[4867]: I0126 11:21:45.596773 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Jan 26 11:21:45 crc kubenswrapper[4867]: I0126 11:21:45.596800 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 26 11:21:45 crc kubenswrapper[4867]: I0126 11:21:45.596830 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 26 11:21:45 crc kubenswrapper[4867]: I0126 11:21:45.596888 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Jan 26 11:21:45 crc kubenswrapper[4867]: I0126 11:21:45.596920 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Jan 26 11:21:45 crc kubenswrapper[4867]: I0126 11:21:45.596950 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Jan 26 11:21:45 crc kubenswrapper[4867]: I0126 11:21:45.596980 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Jan 26 11:21:45 crc kubenswrapper[4867]: I0126 11:21:45.698734 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Jan 26 11:21:45 crc kubenswrapper[4867]: I0126 11:21:45.698806 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Jan 26 11:21:45 crc kubenswrapper[4867]: I0126 11:21:45.698834 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Jan 26 11:21:45 crc kubenswrapper[4867]: I0126 11:21:45.698872 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Jan 26 11:21:45 crc kubenswrapper[4867]: I0126 11:21:45.698924 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 26 11:21:45 crc kubenswrapper[4867]: I0126 11:21:45.698949 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Jan 26 11:21:45 crc kubenswrapper[4867]: I0126 11:21:45.698947 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Jan 26 11:21:45 crc kubenswrapper[4867]: I0126 11:21:45.698994 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 26 11:21:45 crc kubenswrapper[4867]: I0126 11:21:45.699022 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 26 11:21:45 crc kubenswrapper[4867]: I0126 11:21:45.698988 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Jan 26 11:21:45 crc kubenswrapper[4867]: I0126 11:21:45.699041 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Jan 26 11:21:45 crc kubenswrapper[4867]: I0126 11:21:45.698949 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Jan 26 11:21:45 crc kubenswrapper[4867]: I0126 11:21:45.698969 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 26 11:21:45 crc kubenswrapper[4867]: I0126 11:21:45.699046 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Jan 26 11:21:45 crc kubenswrapper[4867]: I0126 11:21:45.699243 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 26 11:21:45 crc kubenswrapper[4867]: I0126 11:21:45.699354 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 26 11:21:45 crc kubenswrapper[4867]: I0126 11:21:45.941940 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log"
Jan 26 11:21:45 crc kubenswrapper[4867]: I0126 11:21:45.943376 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log"
Jan 26 11:21:45 crc kubenswrapper[4867]: I0126 11:21:45.944632 4867 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="f8e9df6d5bbbaa36f6ffe9e36239da17b2a7d5d85edebc9f108ede9bf7b7c0a2" exitCode=2
Jan 26 11:21:48 crc kubenswrapper[4867]: E0126 11:21:48.243868 4867 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of bd843faa60796283e3e245a3a24193ef4841164381821979fa9ae2ba338d103e is running failed: container process not found" containerID="bd843faa60796283e3e245a3a24193ef4841164381821979fa9ae2ba338d103e" cmd=["grpc_health_probe","-addr=:50051"]
Jan 26 11:21:48 crc kubenswrapper[4867]: E0126 11:21:48.245074 4867 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of bd843faa60796283e3e245a3a24193ef4841164381821979fa9ae2ba338d103e is running failed: container process not found" containerID="bd843faa60796283e3e245a3a24193ef4841164381821979fa9ae2ba338d103e" cmd=["grpc_health_probe","-addr=:50051"]
Jan 26 11:21:48 crc kubenswrapper[4867]: E0126 11:21:48.245817 4867 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of bd843faa60796283e3e245a3a24193ef4841164381821979fa9ae2ba338d103e is running failed: container process not found" containerID="bd843faa60796283e3e245a3a24193ef4841164381821979fa9ae2ba338d103e" cmd=["grpc_health_probe","-addr=:50051"]
Jan 26 11:21:48 crc kubenswrapper[4867]: E0126 11:21:48.245872 4867 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of bd843faa60796283e3e245a3a24193ef4841164381821979fa9ae2ba338d103e is running failed: container process not found" probeType="Readiness" pod="openshift-marketplace/community-operators-s8hx9" podUID="adb6bffd-3a41-480b-85df-1f3489ce7007" containerName="registry-server"
Jan 26 11:21:48 crc kubenswrapper[4867]: E0126 11:21:48.246863 4867 event.go:368] "Unable to write event (may retry after sleeping)" err="Patch \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/events/community-operators-s8hx9.188e44067c17ac41\": dial tcp 38.102.83.115:6443: connect: connection refused" event="&Event{ObjectMeta:{community-operators-s8hx9.188e44067c17ac41 openshift-marketplace 29346 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-marketplace,Name:community-operators-s8hx9,UID:adb6bffd-3a41-480b-85df-1f3489ce7007,APIVersion:v1,ResourceVersion:28283,FieldPath:spec.containers{registry-server},},Reason:Unhealthy,Message:Readiness probe errored: rpc error: code = NotFound desc = container is not created or running: checking if PID of bd843faa60796283e3e245a3a24193ef4841164381821979fa9ae2ba338d103e is running failed: container process not found,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-26 11:21:38 +0000 UTC,LastTimestamp:2026-01-26 11:21:48.245912558 +0000 UTC m=+257.944487478,Count:2,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Jan 26 11:21:49 crc kubenswrapper[4867]: E0126 11:21:49.942369 4867 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 5c90c50380de96d77a5b4d4a4f3950044f08fc5c7f12e1de7dc477d61cf0f38a is running failed: container process not found" containerID="5c90c50380de96d77a5b4d4a4f3950044f08fc5c7f12e1de7dc477d61cf0f38a" cmd=["grpc_health_probe","-addr=:50051"]
Jan 26 11:21:49 crc kubenswrapper[4867]: E0126 11:21:49.943486 4867 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 5c90c50380de96d77a5b4d4a4f3950044f08fc5c7f12e1de7dc477d61cf0f38a is running failed: container process not found" containerID="5c90c50380de96d77a5b4d4a4f3950044f08fc5c7f12e1de7dc477d61cf0f38a" cmd=["grpc_health_probe","-addr=:50051"]
Jan 26 11:21:49 crc kubenswrapper[4867]: E0126 11:21:49.944010 4867 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 5c90c50380de96d77a5b4d4a4f3950044f08fc5c7f12e1de7dc477d61cf0f38a is running failed: container process not found" containerID="5c90c50380de96d77a5b4d4a4f3950044f08fc5c7f12e1de7dc477d61cf0f38a" cmd=["grpc_health_probe","-addr=:50051"]
Jan 26 11:21:49 crc kubenswrapper[4867]: E0126 11:21:49.944050 4867 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 5c90c50380de96d77a5b4d4a4f3950044f08fc5c7f12e1de7dc477d61cf0f38a is running failed: container process not found" probeType="Readiness" pod="openshift-marketplace/redhat-marketplace-84kcf" podUID="7428579f-3d9c-4910-9e5c-b6694944afce" containerName="registry-server"
Jan 26 11:21:50 crc kubenswrapper[4867]: E0126 11:21:50.595702 4867 kubelet.go:1929] "Failed creating a mirror pod for" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 38.102.83.115:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Jan 26 11:21:50 crc kubenswrapper[4867]: I0126 11:21:50.596806 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Jan 26 11:21:50 crc kubenswrapper[4867]: I0126 11:21:50.977858 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log"
Jan 26 11:21:50 crc kubenswrapper[4867]: I0126 11:21:50.980096 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log"
Jan 26 11:21:50 crc kubenswrapper[4867]: I0126 11:21:50.981020 4867 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="08f8529b91df2d7dcbf1fac6018657124b3922dfd4ad68bd7cf315594d6e7f81" exitCode=0
Jan 26 11:21:50 crc kubenswrapper[4867]: I0126 11:21:50.981058 4867 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="0d41fab03911c1d3b84f4c34da0d1c8e82c8f2691450b5f9663e31ce329bf04b" exitCode=0
Jan 26 11:21:50 crc kubenswrapper[4867]: I0126 11:21:50.981117 4867 scope.go:117] "RemoveContainer" containerID="4d1617eeb2f19df288b984f761116ae902263d7ccf8406998d0e323fb7d52390"
Jan 26 11:21:51 crc kubenswrapper[4867]: E0126 11:21:51.565041 4867 desired_state_of_world_populator.go:312] "Error processing volume" err="error processing PVC openshift-image-registry/crc-image-registry-storage: failed to fetch PVC from API server: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/persistentvolumeclaims/crc-image-registry-storage\": dial tcp 38.102.83.115:6443: connect: connection refused" pod="openshift-image-registry/image-registry-697d97f7c8-skdxp" volumeName="registry-storage"
Jan 26 11:21:51 crc kubenswrapper[4867]: I0126 11:21:51.990667 4867 generic.go:334] "Generic (PLEG): container finished" podID="a8dbe61a-4c85-4fa5-beb6-a84433a2d2ac" containerID="a3a2f21392ea88de6133444f0fcded2ca9c9f38aa4e2a234d938cf08b053ee23" exitCode=0
Jan 26 11:21:51 crc kubenswrapper[4867]: I0126 11:21:51.990817 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"a8dbe61a-4c85-4fa5-beb6-a84433a2d2ac","Type":"ContainerDied","Data":"a3a2f21392ea88de6133444f0fcded2ca9c9f38aa4e2a234d938cf08b053ee23"}
Jan 26 11:21:51 crc kubenswrapper[4867]: I0126 11:21:51.991832 4867 status_manager.go:851] "Failed to get status for pod" podUID="a8dbe61a-4c85-4fa5-beb6-a84433a2d2ac" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.115:6443: connect: connection refused"
Jan 26 11:21:52 crc kubenswrapper[4867]: I0126 11:21:52.001151 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log"
Jan 26 11:21:52 crc kubenswrapper[4867]: I0126 11:21:52.003047 4867 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="e7960050fb97fd034b58d207bdcc7aa794be098102ebc9b50f3b011c2079a292" exitCode=0
Jan 26 11:21:52 crc kubenswrapper[4867]: I0126 11:21:52.003075 4867 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="60d611d38d0a8a4fafcb8c6a4598878531015a3c9a31bee179693c997be0f5ec" exitCode=0
Jan 26 11:21:52 crc kubenswrapper[4867]: I0126 11:21:52.095273 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log"
Jan 26 11:21:52 crc kubenswrapper[4867]: I0126 11:21:52.100984 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 26 11:21:52 crc kubenswrapper[4867]: I0126 11:21:52.101734 4867 status_manager.go:851] "Failed to get status for pod" podUID="a8dbe61a-4c85-4fa5-beb6-a84433a2d2ac" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.115:6443: connect: connection refused"
Jan 26 11:21:52 crc kubenswrapper[4867]: I0126 11:21:52.102432 4867 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.115:6443: connect: connection refused"
Jan 26 11:21:52 crc kubenswrapper[4867]: W0126 11:21:52.139557 4867 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf85e55b1a89d02b0cb034b1ea31ed45a.slice/crio-777a6d5f7e0161f3563ab16da5f8253c425f89886ee2f0cb6c47f4303f422895 WatchSource:0}: Error finding container 777a6d5f7e0161f3563ab16da5f8253c425f89886ee2f0cb6c47f4303f422895: Status 404 returned error can't find the container with id 777a6d5f7e0161f3563ab16da5f8253c425f89886ee2f0cb6c47f4303f422895
Jan 26 11:21:52 crc kubenswrapper[4867]: E0126 11:21:52.159885 4867 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.115:6443: connect: connection refused"
Jan 26 11:21:52 crc kubenswrapper[4867]: E0126 11:21:52.160473 4867 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.115:6443: connect: connection refused"
Jan 26 11:21:52 crc kubenswrapper[4867]: E0126 11:21:52.162361 4867 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.115:6443: connect: connection refused"
Jan 26 11:21:52 crc kubenswrapper[4867]: E0126 11:21:52.162679 4867 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.115:6443: connect: connection refused"
Jan 26 11:21:52 crc kubenswrapper[4867]: E0126 11:21:52.165666 4867 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.115:6443: connect: connection refused"
Jan 26 11:21:52 crc kubenswrapper[4867]: I0126 11:21:52.166674 4867 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease"
Jan 26 11:21:52 crc kubenswrapper[4867]: E0126 11:21:52.168130 4867 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.115:6443: connect: connection refused" interval="200ms"
Jan 26 11:21:52 crc kubenswrapper[4867]: I0126 11:21:52.201938 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") "
Jan 26 11:21:52 crc kubenswrapper[4867]: I0126 11:21:52.202503 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") "
Jan 26 11:21:52 crc kubenswrapper[4867]: I0126 11:21:52.202519 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") "
Jan 26 11:21:52 crc kubenswrapper[4867]: I0126 11:21:52.202042 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 26 11:21:52 crc kubenswrapper[4867]: I0126 11:21:52.202584 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 26 11:21:52 crc kubenswrapper[4867]: I0126 11:21:52.202831 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "cert-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 26 11:21:52 crc kubenswrapper[4867]: I0126 11:21:52.202862 4867 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") on node \"crc\" DevicePath \"\""
Jan 26 11:21:52 crc kubenswrapper[4867]: I0126 11:21:52.202877 4867 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") on node \"crc\" DevicePath \"\""
Jan 26 11:21:52 crc kubenswrapper[4867]: I0126 11:21:52.275183 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-s8hx9"
Jan 26 11:21:52 crc kubenswrapper[4867]: I0126 11:21:52.276523 4867 status_manager.go:851] "Failed to get status for pod" podUID="a8dbe61a-4c85-4fa5-beb6-a84433a2d2ac" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.115:6443: connect: connection refused"
Jan 26 11:21:52 crc kubenswrapper[4867]: I0126 11:21:52.276891 4867 status_manager.go:851] "Failed to get status for pod" podUID="adb6bffd-3a41-480b-85df-1f3489ce7007" pod="openshift-marketplace/community-operators-s8hx9" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-s8hx9\": dial tcp 38.102.83.115:6443: connect: connection refused"
Jan 26 11:21:52 crc kubenswrapper[4867]: I0126 11:21:52.277316 4867 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.115:6443: connect: connection refused"
Jan 26 11:21:52 crc kubenswrapper[4867]: I0126 11:21:52.304241
4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-84kcf" Jan 26 11:21:52 crc kubenswrapper[4867]: I0126 11:21:52.305201 4867 status_manager.go:851] "Failed to get status for pod" podUID="a8dbe61a-4c85-4fa5-beb6-a84433a2d2ac" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.115:6443: connect: connection refused" Jan 26 11:21:52 crc kubenswrapper[4867]: I0126 11:21:52.305347 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/adb6bffd-3a41-480b-85df-1f3489ce7007-catalog-content\") pod \"adb6bffd-3a41-480b-85df-1f3489ce7007\" (UID: \"adb6bffd-3a41-480b-85df-1f3489ce7007\") " Jan 26 11:21:52 crc kubenswrapper[4867]: I0126 11:21:52.306907 4867 status_manager.go:851] "Failed to get status for pod" podUID="7428579f-3d9c-4910-9e5c-b6694944afce" pod="openshift-marketplace/redhat-marketplace-84kcf" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-84kcf\": dial tcp 38.102.83.115:6443: connect: connection refused" Jan 26 11:21:52 crc kubenswrapper[4867]: I0126 11:21:52.307361 4867 status_manager.go:851] "Failed to get status for pod" podUID="adb6bffd-3a41-480b-85df-1f3489ce7007" pod="openshift-marketplace/community-operators-s8hx9" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-s8hx9\": dial tcp 38.102.83.115:6443: connect: connection refused" Jan 26 11:21:52 crc kubenswrapper[4867]: I0126 11:21:52.307403 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/adb6bffd-3a41-480b-85df-1f3489ce7007-utilities\") pod \"adb6bffd-3a41-480b-85df-1f3489ce7007\" (UID: 
\"adb6bffd-3a41-480b-85df-1f3489ce7007\") " Jan 26 11:21:52 crc kubenswrapper[4867]: I0126 11:21:52.307630 4867 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.115:6443: connect: connection refused" Jan 26 11:21:52 crc kubenswrapper[4867]: I0126 11:21:52.307742 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c8lfb\" (UniqueName: \"kubernetes.io/projected/adb6bffd-3a41-480b-85df-1f3489ce7007-kube-api-access-c8lfb\") pod \"adb6bffd-3a41-480b-85df-1f3489ce7007\" (UID: \"adb6bffd-3a41-480b-85df-1f3489ce7007\") " Jan 26 11:21:52 crc kubenswrapper[4867]: I0126 11:21:52.308431 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/adb6bffd-3a41-480b-85df-1f3489ce7007-utilities" (OuterVolumeSpecName: "utilities") pod "adb6bffd-3a41-480b-85df-1f3489ce7007" (UID: "adb6bffd-3a41-480b-85df-1f3489ce7007"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 11:21:52 crc kubenswrapper[4867]: I0126 11:21:52.309756 4867 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/adb6bffd-3a41-480b-85df-1f3489ce7007-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 11:21:52 crc kubenswrapper[4867]: I0126 11:21:52.309782 4867 reconciler_common.go:293] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") on node \"crc\" DevicePath \"\"" Jan 26 11:21:52 crc kubenswrapper[4867]: I0126 11:21:52.315847 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/adb6bffd-3a41-480b-85df-1f3489ce7007-kube-api-access-c8lfb" (OuterVolumeSpecName: "kube-api-access-c8lfb") pod "adb6bffd-3a41-480b-85df-1f3489ce7007" (UID: "adb6bffd-3a41-480b-85df-1f3489ce7007"). InnerVolumeSpecName "kube-api-access-c8lfb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 11:21:52 crc kubenswrapper[4867]: E0126 11:21:52.370148 4867 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.115:6443: connect: connection refused" interval="400ms" Jan 26 11:21:52 crc kubenswrapper[4867]: I0126 11:21:52.382440 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/adb6bffd-3a41-480b-85df-1f3489ce7007-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "adb6bffd-3a41-480b-85df-1f3489ce7007" (UID: "adb6bffd-3a41-480b-85df-1f3489ce7007"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 11:21:52 crc kubenswrapper[4867]: I0126 11:21:52.411414 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7428579f-3d9c-4910-9e5c-b6694944afce-catalog-content\") pod \"7428579f-3d9c-4910-9e5c-b6694944afce\" (UID: \"7428579f-3d9c-4910-9e5c-b6694944afce\") " Jan 26 11:21:52 crc kubenswrapper[4867]: I0126 11:21:52.411488 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7428579f-3d9c-4910-9e5c-b6694944afce-utilities\") pod \"7428579f-3d9c-4910-9e5c-b6694944afce\" (UID: \"7428579f-3d9c-4910-9e5c-b6694944afce\") " Jan 26 11:21:52 crc kubenswrapper[4867]: I0126 11:21:52.411606 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z7w4q\" (UniqueName: \"kubernetes.io/projected/7428579f-3d9c-4910-9e5c-b6694944afce-kube-api-access-z7w4q\") pod \"7428579f-3d9c-4910-9e5c-b6694944afce\" (UID: \"7428579f-3d9c-4910-9e5c-b6694944afce\") " Jan 26 11:21:52 crc kubenswrapper[4867]: I0126 11:21:52.411899 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c8lfb\" (UniqueName: \"kubernetes.io/projected/adb6bffd-3a41-480b-85df-1f3489ce7007-kube-api-access-c8lfb\") on node \"crc\" DevicePath \"\"" Jan 26 11:21:52 crc kubenswrapper[4867]: I0126 11:21:52.411923 4867 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/adb6bffd-3a41-480b-85df-1f3489ce7007-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 11:21:52 crc kubenswrapper[4867]: I0126 11:21:52.412441 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7428579f-3d9c-4910-9e5c-b6694944afce-utilities" (OuterVolumeSpecName: "utilities") pod "7428579f-3d9c-4910-9e5c-b6694944afce" (UID: 
"7428579f-3d9c-4910-9e5c-b6694944afce"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 11:21:52 crc kubenswrapper[4867]: I0126 11:21:52.414939 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7428579f-3d9c-4910-9e5c-b6694944afce-kube-api-access-z7w4q" (OuterVolumeSpecName: "kube-api-access-z7w4q") pod "7428579f-3d9c-4910-9e5c-b6694944afce" (UID: "7428579f-3d9c-4910-9e5c-b6694944afce"). InnerVolumeSpecName "kube-api-access-z7w4q". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 11:21:52 crc kubenswrapper[4867]: I0126 11:21:52.438204 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7428579f-3d9c-4910-9e5c-b6694944afce-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "7428579f-3d9c-4910-9e5c-b6694944afce" (UID: "7428579f-3d9c-4910-9e5c-b6694944afce"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 11:21:52 crc kubenswrapper[4867]: I0126 11:21:52.513018 4867 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7428579f-3d9c-4910-9e5c-b6694944afce-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 11:21:52 crc kubenswrapper[4867]: I0126 11:21:52.513074 4867 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7428579f-3d9c-4910-9e5c-b6694944afce-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 11:21:52 crc kubenswrapper[4867]: I0126 11:21:52.513090 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z7w4q\" (UniqueName: \"kubernetes.io/projected/7428579f-3d9c-4910-9e5c-b6694944afce-kube-api-access-z7w4q\") on node \"crc\" DevicePath \"\"" Jan 26 11:21:52 crc kubenswrapper[4867]: I0126 11:21:52.595924 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="f4b27818a5e8e43d0dc095d08835c792" path="/var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/volumes" Jan 26 11:21:52 crc kubenswrapper[4867]: E0126 11:21:52.772266 4867 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.115:6443: connect: connection refused" interval="800ms" Jan 26 11:21:53 crc kubenswrapper[4867]: I0126 11:21:53.011807 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"777a6d5f7e0161f3563ab16da5f8253c425f89886ee2f0cb6c47f4303f422895"} Jan 26 11:21:53 crc kubenswrapper[4867]: I0126 11:21:53.014805 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gcljn" event={"ID":"aeb3191f-7e7a-4d94-b913-4f78b379f3e9","Type":"ContainerStarted","Data":"fc4c0d05681d2bd434974464141a1a719ff76e4be773df755e502a29e94b83f1"} Jan 26 11:21:53 crc kubenswrapper[4867]: I0126 11:21:53.017850 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mbhb4" event={"ID":"4c3ed719-d8a0-4f47-b0f1-9e635825152a","Type":"ContainerStarted","Data":"3eced1ac32af38f48a6bcb738175658500315c092032f4b2b11c7c0acafb9023"} Jan 26 11:21:53 crc kubenswrapper[4867]: I0126 11:21:53.024334 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-s8hx9" event={"ID":"adb6bffd-3a41-480b-85df-1f3489ce7007","Type":"ContainerDied","Data":"293e7ec5fdc21304ae837bf6d1dd6c8b144c60c354ab3e71aec649e159b75162"} Jan 26 11:21:53 crc kubenswrapper[4867]: I0126 11:21:53.024383 4867 scope.go:117] "RemoveContainer" containerID="bd843faa60796283e3e245a3a24193ef4841164381821979fa9ae2ba338d103e" Jan 26 11:21:53 crc kubenswrapper[4867]: I0126 11:21:53.024438 4867 util.go:48] 
"No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-s8hx9" Jan 26 11:21:53 crc kubenswrapper[4867]: I0126 11:21:53.025330 4867 status_manager.go:851] "Failed to get status for pod" podUID="a8dbe61a-4c85-4fa5-beb6-a84433a2d2ac" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.115:6443: connect: connection refused" Jan 26 11:21:53 crc kubenswrapper[4867]: I0126 11:21:53.025661 4867 status_manager.go:851] "Failed to get status for pod" podUID="adb6bffd-3a41-480b-85df-1f3489ce7007" pod="openshift-marketplace/community-operators-s8hx9" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-s8hx9\": dial tcp 38.102.83.115:6443: connect: connection refused" Jan 26 11:21:53 crc kubenswrapper[4867]: I0126 11:21:53.026195 4867 status_manager.go:851] "Failed to get status for pod" podUID="7428579f-3d9c-4910-9e5c-b6694944afce" pod="openshift-marketplace/redhat-marketplace-84kcf" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-84kcf\": dial tcp 38.102.83.115:6443: connect: connection refused" Jan 26 11:21:53 crc kubenswrapper[4867]: I0126 11:21:53.028139 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ndd6w" event={"ID":"19cef52a-3ef4-4b1e-a52e-0ba6e01e49b5","Type":"ContainerStarted","Data":"cee015feecf6b6c0e6418fd20b851525cc81e27d1d08ca5df4a53edd1133e04d"} Jan 26 11:21:53 crc kubenswrapper[4867]: I0126 11:21:53.028322 4867 status_manager.go:851] "Failed to get status for pod" podUID="a8dbe61a-4c85-4fa5-beb6-a84433a2d2ac" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.115:6443: connect: connection 
refused" Jan 26 11:21:53 crc kubenswrapper[4867]: I0126 11:21:53.028881 4867 status_manager.go:851] "Failed to get status for pod" podUID="adb6bffd-3a41-480b-85df-1f3489ce7007" pod="openshift-marketplace/community-operators-s8hx9" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-s8hx9\": dial tcp 38.102.83.115:6443: connect: connection refused" Jan 26 11:21:53 crc kubenswrapper[4867]: I0126 11:21:53.029203 4867 status_manager.go:851] "Failed to get status for pod" podUID="7428579f-3d9c-4910-9e5c-b6694944afce" pod="openshift-marketplace/redhat-marketplace-84kcf" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-84kcf\": dial tcp 38.102.83.115:6443: connect: connection refused" Jan 26 11:21:53 crc kubenswrapper[4867]: I0126 11:21:53.032927 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-84kcf" event={"ID":"7428579f-3d9c-4910-9e5c-b6694944afce","Type":"ContainerDied","Data":"07d9ba90be0306d5dd4c2f213ceaaa2d0a5cfe8d41f0bf6fe4c620792a9a1a81"} Jan 26 11:21:53 crc kubenswrapper[4867]: I0126 11:21:53.032967 4867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-84kcf" Jan 26 11:21:53 crc kubenswrapper[4867]: I0126 11:21:53.033993 4867 status_manager.go:851] "Failed to get status for pod" podUID="a8dbe61a-4c85-4fa5-beb6-a84433a2d2ac" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.115:6443: connect: connection refused" Jan 26 11:21:53 crc kubenswrapper[4867]: I0126 11:21:53.034327 4867 status_manager.go:851] "Failed to get status for pod" podUID="adb6bffd-3a41-480b-85df-1f3489ce7007" pod="openshift-marketplace/community-operators-s8hx9" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-s8hx9\": dial tcp 38.102.83.115:6443: connect: connection refused" Jan 26 11:21:53 crc kubenswrapper[4867]: I0126 11:21:53.036085 4867 status_manager.go:851] "Failed to get status for pod" podUID="7428579f-3d9c-4910-9e5c-b6694944afce" pod="openshift-marketplace/redhat-marketplace-84kcf" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-84kcf\": dial tcp 38.102.83.115:6443: connect: connection refused" Jan 26 11:21:53 crc kubenswrapper[4867]: I0126 11:21:53.037013 4867 status_manager.go:851] "Failed to get status for pod" podUID="adb6bffd-3a41-480b-85df-1f3489ce7007" pod="openshift-marketplace/community-operators-s8hx9" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-s8hx9\": dial tcp 38.102.83.115:6443: connect: connection refused" Jan 26 11:21:53 crc kubenswrapper[4867]: I0126 11:21:53.037275 4867 status_manager.go:851] "Failed to get status for pod" podUID="7428579f-3d9c-4910-9e5c-b6694944afce" pod="openshift-marketplace/redhat-marketplace-84kcf" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-84kcf\": dial 
tcp 38.102.83.115:6443: connect: connection refused" Jan 26 11:21:53 crc kubenswrapper[4867]: I0126 11:21:53.037487 4867 status_manager.go:851] "Failed to get status for pod" podUID="a8dbe61a-4c85-4fa5-beb6-a84433a2d2ac" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.115:6443: connect: connection refused" Jan 26 11:21:53 crc kubenswrapper[4867]: I0126 11:21:53.043186 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 26 11:21:53 crc kubenswrapper[4867]: I0126 11:21:53.044659 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 11:21:53 crc kubenswrapper[4867]: I0126 11:21:53.045887 4867 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.115:6443: connect: connection refused" Jan 26 11:21:53 crc kubenswrapper[4867]: I0126 11:21:53.046252 4867 status_manager.go:851] "Failed to get status for pod" podUID="a8dbe61a-4c85-4fa5-beb6-a84433a2d2ac" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.115:6443: connect: connection refused" Jan 26 11:21:53 crc kubenswrapper[4867]: I0126 11:21:53.046857 4867 status_manager.go:851] "Failed to get status for pod" podUID="adb6bffd-3a41-480b-85df-1f3489ce7007" pod="openshift-marketplace/community-operators-s8hx9" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-s8hx9\": dial tcp 
38.102.83.115:6443: connect: connection refused" Jan 26 11:21:53 crc kubenswrapper[4867]: I0126 11:21:53.047009 4867 scope.go:117] "RemoveContainer" containerID="d47b1bea49215ec40fa8ef7c6742c36a27120ae7cf53751c9b0853f4958d2d87" Jan 26 11:21:53 crc kubenswrapper[4867]: I0126 11:21:53.047110 4867 status_manager.go:851] "Failed to get status for pod" podUID="7428579f-3d9c-4910-9e5c-b6694944afce" pod="openshift-marketplace/redhat-marketplace-84kcf" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-84kcf\": dial tcp 38.102.83.115:6443: connect: connection refused" Jan 26 11:21:53 crc kubenswrapper[4867]: I0126 11:21:53.048290 4867 status_manager.go:851] "Failed to get status for pod" podUID="a8dbe61a-4c85-4fa5-beb6-a84433a2d2ac" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.115:6443: connect: connection refused" Jan 26 11:21:53 crc kubenswrapper[4867]: I0126 11:21:53.048587 4867 status_manager.go:851] "Failed to get status for pod" podUID="7428579f-3d9c-4910-9e5c-b6694944afce" pod="openshift-marketplace/redhat-marketplace-84kcf" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-84kcf\": dial tcp 38.102.83.115:6443: connect: connection refused" Jan 26 11:21:53 crc kubenswrapper[4867]: I0126 11:21:53.048823 4867 status_manager.go:851] "Failed to get status for pod" podUID="adb6bffd-3a41-480b-85df-1f3489ce7007" pod="openshift-marketplace/community-operators-s8hx9" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-s8hx9\": dial tcp 38.102.83.115:6443: connect: connection refused" Jan 26 11:21:53 crc kubenswrapper[4867]: I0126 11:21:53.049093 4867 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" 
pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.115:6443: connect: connection refused" Jan 26 11:21:53 crc kubenswrapper[4867]: I0126 11:21:53.100655 4867 scope.go:117] "RemoveContainer" containerID="a57cbca14f24cd948af68c4e2c280fef8ea173804ae8e6a7b5581cf99bb2b93c" Jan 26 11:21:53 crc kubenswrapper[4867]: I0126 11:21:53.144243 4867 scope.go:117] "RemoveContainer" containerID="5c90c50380de96d77a5b4d4a4f3950044f08fc5c7f12e1de7dc477d61cf0f38a" Jan 26 11:21:53 crc kubenswrapper[4867]: I0126 11:21:53.169106 4867 scope.go:117] "RemoveContainer" containerID="a44eb3a42e2ee237a25ff65a4dc81c473ecb90d00c7f6113387c81bb7bc3508e" Jan 26 11:21:53 crc kubenswrapper[4867]: I0126 11:21:53.196759 4867 scope.go:117] "RemoveContainer" containerID="96160a369f46b18d6c27d8cd8354d694d8bf652ceb434aaf1263e83afa381b2b" Jan 26 11:21:53 crc kubenswrapper[4867]: I0126 11:21:53.247494 4867 scope.go:117] "RemoveContainer" containerID="08f8529b91df2d7dcbf1fac6018657124b3922dfd4ad68bd7cf315594d6e7f81" Jan 26 11:21:53 crc kubenswrapper[4867]: I0126 11:21:53.286296 4867 scope.go:117] "RemoveContainer" containerID="0d41fab03911c1d3b84f4c34da0d1c8e82c8f2691450b5f9663e31ce329bf04b" Jan 26 11:21:53 crc kubenswrapper[4867]: I0126 11:21:53.302544 4867 scope.go:117] "RemoveContainer" containerID="e7960050fb97fd034b58d207bdcc7aa794be098102ebc9b50f3b011c2079a292" Jan 26 11:21:53 crc kubenswrapper[4867]: I0126 11:21:53.325023 4867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Jan 26 11:21:53 crc kubenswrapper[4867]: I0126 11:21:53.326154 4867 status_manager.go:851] "Failed to get status for pod" podUID="a8dbe61a-4c85-4fa5-beb6-a84433a2d2ac" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.115:6443: connect: connection refused" Jan 26 11:21:53 crc kubenswrapper[4867]: I0126 11:21:53.326591 4867 status_manager.go:851] "Failed to get status for pod" podUID="adb6bffd-3a41-480b-85df-1f3489ce7007" pod="openshift-marketplace/community-operators-s8hx9" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-s8hx9\": dial tcp 38.102.83.115:6443: connect: connection refused" Jan 26 11:21:53 crc kubenswrapper[4867]: I0126 11:21:53.326989 4867 status_manager.go:851] "Failed to get status for pod" podUID="7428579f-3d9c-4910-9e5c-b6694944afce" pod="openshift-marketplace/redhat-marketplace-84kcf" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-84kcf\": dial tcp 38.102.83.115:6443: connect: connection refused" Jan 26 11:21:53 crc kubenswrapper[4867]: I0126 11:21:53.327206 4867 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.115:6443: connect: connection refused" Jan 26 11:21:53 crc kubenswrapper[4867]: I0126 11:21:53.333177 4867 scope.go:117] "RemoveContainer" containerID="f8e9df6d5bbbaa36f6ffe9e36239da17b2a7d5d85edebc9f108ede9bf7b7c0a2" Jan 26 11:21:53 crc kubenswrapper[4867]: I0126 11:21:53.349464 4867 scope.go:117] "RemoveContainer" containerID="60d611d38d0a8a4fafcb8c6a4598878531015a3c9a31bee179693c997be0f5ec" Jan 
26 11:21:53 crc kubenswrapper[4867]: I0126 11:21:53.366513 4867 scope.go:117] "RemoveContainer" containerID="b69d7e45a6059b1e01f80cc4efb536069c91cae05c8912d18fd48f25c6ebf51e" Jan 26 11:21:53 crc kubenswrapper[4867]: I0126 11:21:53.427676 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/a8dbe61a-4c85-4fa5-beb6-a84433a2d2ac-kubelet-dir\") pod \"a8dbe61a-4c85-4fa5-beb6-a84433a2d2ac\" (UID: \"a8dbe61a-4c85-4fa5-beb6-a84433a2d2ac\") " Jan 26 11:21:53 crc kubenswrapper[4867]: I0126 11:21:53.427795 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/a8dbe61a-4c85-4fa5-beb6-a84433a2d2ac-var-lock\") pod \"a8dbe61a-4c85-4fa5-beb6-a84433a2d2ac\" (UID: \"a8dbe61a-4c85-4fa5-beb6-a84433a2d2ac\") " Jan 26 11:21:53 crc kubenswrapper[4867]: I0126 11:21:53.427807 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a8dbe61a-4c85-4fa5-beb6-a84433a2d2ac-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "a8dbe61a-4c85-4fa5-beb6-a84433a2d2ac" (UID: "a8dbe61a-4c85-4fa5-beb6-a84433a2d2ac"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 11:21:53 crc kubenswrapper[4867]: I0126 11:21:53.427844 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a8dbe61a-4c85-4fa5-beb6-a84433a2d2ac-kube-api-access\") pod \"a8dbe61a-4c85-4fa5-beb6-a84433a2d2ac\" (UID: \"a8dbe61a-4c85-4fa5-beb6-a84433a2d2ac\") " Jan 26 11:21:53 crc kubenswrapper[4867]: I0126 11:21:53.427866 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a8dbe61a-4c85-4fa5-beb6-a84433a2d2ac-var-lock" (OuterVolumeSpecName: "var-lock") pod "a8dbe61a-4c85-4fa5-beb6-a84433a2d2ac" (UID: "a8dbe61a-4c85-4fa5-beb6-a84433a2d2ac"). 
InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 11:21:53 crc kubenswrapper[4867]: I0126 11:21:53.428120 4867 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/a8dbe61a-4c85-4fa5-beb6-a84433a2d2ac-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 26 11:21:53 crc kubenswrapper[4867]: I0126 11:21:53.428140 4867 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/a8dbe61a-4c85-4fa5-beb6-a84433a2d2ac-var-lock\") on node \"crc\" DevicePath \"\"" Jan 26 11:21:53 crc kubenswrapper[4867]: I0126 11:21:53.433627 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a8dbe61a-4c85-4fa5-beb6-a84433a2d2ac-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "a8dbe61a-4c85-4fa5-beb6-a84433a2d2ac" (UID: "a8dbe61a-4c85-4fa5-beb6-a84433a2d2ac"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 11:21:53 crc kubenswrapper[4867]: I0126 11:21:53.529840 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a8dbe61a-4c85-4fa5-beb6-a84433a2d2ac-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 26 11:21:53 crc kubenswrapper[4867]: E0126 11:21:53.573314 4867 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.115:6443: connect: connection refused" interval="1.6s" Jan 26 11:21:54 crc kubenswrapper[4867]: I0126 11:21:54.053185 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"6841279546c3c0a084f5f637fbc7fb7dd3e507d77e50b879da2876f5159a68d0"} Jan 26 11:21:54 
crc kubenswrapper[4867]: I0126 11:21:54.054136 4867 status_manager.go:851] "Failed to get status for pod" podUID="a8dbe61a-4c85-4fa5-beb6-a84433a2d2ac" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.115:6443: connect: connection refused" Jan 26 11:21:54 crc kubenswrapper[4867]: E0126 11:21:54.054294 4867 kubelet.go:1929] "Failed creating a mirror pod for" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 38.102.83.115:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 26 11:21:54 crc kubenswrapper[4867]: I0126 11:21:54.054622 4867 status_manager.go:851] "Failed to get status for pod" podUID="adb6bffd-3a41-480b-85df-1f3489ce7007" pod="openshift-marketplace/community-operators-s8hx9" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-s8hx9\": dial tcp 38.102.83.115:6443: connect: connection refused" Jan 26 11:21:54 crc kubenswrapper[4867]: I0126 11:21:54.054826 4867 status_manager.go:851] "Failed to get status for pod" podUID="7428579f-3d9c-4910-9e5c-b6694944afce" pod="openshift-marketplace/redhat-marketplace-84kcf" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-84kcf\": dial tcp 38.102.83.115:6443: connect: connection refused" Jan 26 11:21:54 crc kubenswrapper[4867]: I0126 11:21:54.055123 4867 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.115:6443: connect: connection refused" Jan 26 11:21:54 crc kubenswrapper[4867]: I0126 11:21:54.058860 4867 generic.go:334] "Generic (PLEG): container 
finished" podID="331bacc3-9595-492a-9e20-ef8007ccc10a" containerID="2dd63232d5614e03abb67203f7226de9d0cf60b0ff159a0709b2f2048ec1cb40" exitCode=0 Jan 26 11:21:54 crc kubenswrapper[4867]: I0126 11:21:54.058961 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-h6sjf" event={"ID":"331bacc3-9595-492a-9e20-ef8007ccc10a","Type":"ContainerDied","Data":"2dd63232d5614e03abb67203f7226de9d0cf60b0ff159a0709b2f2048ec1cb40"} Jan 26 11:21:54 crc kubenswrapper[4867]: I0126 11:21:54.059685 4867 status_manager.go:851] "Failed to get status for pod" podUID="a8dbe61a-4c85-4fa5-beb6-a84433a2d2ac" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.115:6443: connect: connection refused" Jan 26 11:21:54 crc kubenswrapper[4867]: I0126 11:21:54.060178 4867 status_manager.go:851] "Failed to get status for pod" podUID="adb6bffd-3a41-480b-85df-1f3489ce7007" pod="openshift-marketplace/community-operators-s8hx9" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-s8hx9\": dial tcp 38.102.83.115:6443: connect: connection refused" Jan 26 11:21:54 crc kubenswrapper[4867]: I0126 11:21:54.061046 4867 status_manager.go:851] "Failed to get status for pod" podUID="7428579f-3d9c-4910-9e5c-b6694944afce" pod="openshift-marketplace/redhat-marketplace-84kcf" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-84kcf\": dial tcp 38.102.83.115:6443: connect: connection refused" Jan 26 11:21:54 crc kubenswrapper[4867]: I0126 11:21:54.061519 4867 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.115:6443: 
connect: connection refused" Jan 26 11:21:54 crc kubenswrapper[4867]: I0126 11:21:54.062007 4867 status_manager.go:851] "Failed to get status for pod" podUID="331bacc3-9595-492a-9e20-ef8007ccc10a" pod="openshift-marketplace/redhat-operators-h6sjf" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-h6sjf\": dial tcp 38.102.83.115:6443: connect: connection refused" Jan 26 11:21:54 crc kubenswrapper[4867]: I0126 11:21:54.062019 4867 generic.go:334] "Generic (PLEG): container finished" podID="19cef52a-3ef4-4b1e-a52e-0ba6e01e49b5" containerID="cee015feecf6b6c0e6418fd20b851525cc81e27d1d08ca5df4a53edd1133e04d" exitCode=0 Jan 26 11:21:54 crc kubenswrapper[4867]: I0126 11:21:54.062046 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ndd6w" event={"ID":"19cef52a-3ef4-4b1e-a52e-0ba6e01e49b5","Type":"ContainerDied","Data":"cee015feecf6b6c0e6418fd20b851525cc81e27d1d08ca5df4a53edd1133e04d"} Jan 26 11:21:54 crc kubenswrapper[4867]: I0126 11:21:54.062662 4867 status_manager.go:851] "Failed to get status for pod" podUID="331bacc3-9595-492a-9e20-ef8007ccc10a" pod="openshift-marketplace/redhat-operators-h6sjf" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-h6sjf\": dial tcp 38.102.83.115:6443: connect: connection refused" Jan 26 11:21:54 crc kubenswrapper[4867]: I0126 11:21:54.062869 4867 status_manager.go:851] "Failed to get status for pod" podUID="a8dbe61a-4c85-4fa5-beb6-a84433a2d2ac" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.115:6443: connect: connection refused" Jan 26 11:21:54 crc kubenswrapper[4867]: I0126 11:21:54.063026 4867 status_manager.go:851] "Failed to get status for pod" podUID="7428579f-3d9c-4910-9e5c-b6694944afce" pod="openshift-marketplace/redhat-marketplace-84kcf" 
err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-84kcf\": dial tcp 38.102.83.115:6443: connect: connection refused" Jan 26 11:21:54 crc kubenswrapper[4867]: I0126 11:21:54.063243 4867 status_manager.go:851] "Failed to get status for pod" podUID="adb6bffd-3a41-480b-85df-1f3489ce7007" pod="openshift-marketplace/community-operators-s8hx9" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-s8hx9\": dial tcp 38.102.83.115:6443: connect: connection refused" Jan 26 11:21:54 crc kubenswrapper[4867]: I0126 11:21:54.063484 4867 status_manager.go:851] "Failed to get status for pod" podUID="19cef52a-3ef4-4b1e-a52e-0ba6e01e49b5" pod="openshift-marketplace/certified-operators-ndd6w" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-ndd6w\": dial tcp 38.102.83.115:6443: connect: connection refused" Jan 26 11:21:54 crc kubenswrapper[4867]: I0126 11:21:54.063785 4867 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.115:6443: connect: connection refused" Jan 26 11:21:54 crc kubenswrapper[4867]: I0126 11:21:54.068078 4867 generic.go:334] "Generic (PLEG): container finished" podID="8e8b11fb-b146-4307-b94e-515815b10c58" containerID="947a8014466d7f519972efd98ff839d2c1308f3ec5b27a5a4a19097a92907f4f" exitCode=0 Jan 26 11:21:54 crc kubenswrapper[4867]: I0126 11:21:54.068183 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4lhrp" event={"ID":"8e8b11fb-b146-4307-b94e-515815b10c58","Type":"ContainerDied","Data":"947a8014466d7f519972efd98ff839d2c1308f3ec5b27a5a4a19097a92907f4f"} Jan 26 11:21:54 crc kubenswrapper[4867]: I0126 11:21:54.070407 
4867 status_manager.go:851] "Failed to get status for pod" podUID="19cef52a-3ef4-4b1e-a52e-0ba6e01e49b5" pod="openshift-marketplace/certified-operators-ndd6w" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-ndd6w\": dial tcp 38.102.83.115:6443: connect: connection refused" Jan 26 11:21:54 crc kubenswrapper[4867]: I0126 11:21:54.070772 4867 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.115:6443: connect: connection refused" Jan 26 11:21:54 crc kubenswrapper[4867]: I0126 11:21:54.071009 4867 status_manager.go:851] "Failed to get status for pod" podUID="331bacc3-9595-492a-9e20-ef8007ccc10a" pod="openshift-marketplace/redhat-operators-h6sjf" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-h6sjf\": dial tcp 38.102.83.115:6443: connect: connection refused" Jan 26 11:21:54 crc kubenswrapper[4867]: I0126 11:21:54.071292 4867 status_manager.go:851] "Failed to get status for pod" podUID="a8dbe61a-4c85-4fa5-beb6-a84433a2d2ac" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.115:6443: connect: connection refused" Jan 26 11:21:54 crc kubenswrapper[4867]: I0126 11:21:54.071553 4867 status_manager.go:851] "Failed to get status for pod" podUID="adb6bffd-3a41-480b-85df-1f3489ce7007" pod="openshift-marketplace/community-operators-s8hx9" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-s8hx9\": dial tcp 38.102.83.115:6443: connect: connection refused" Jan 26 11:21:54 crc kubenswrapper[4867]: I0126 11:21:54.071935 4867 status_manager.go:851] "Failed to 
get status for pod" podUID="7428579f-3d9c-4910-9e5c-b6694944afce" pod="openshift-marketplace/redhat-marketplace-84kcf" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-84kcf\": dial tcp 38.102.83.115:6443: connect: connection refused" Jan 26 11:21:54 crc kubenswrapper[4867]: I0126 11:21:54.073203 4867 status_manager.go:851] "Failed to get status for pod" podUID="8e8b11fb-b146-4307-b94e-515815b10c58" pod="openshift-marketplace/certified-operators-4lhrp" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-4lhrp\": dial tcp 38.102.83.115:6443: connect: connection refused" Jan 26 11:21:54 crc kubenswrapper[4867]: I0126 11:21:54.073743 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Jan 26 11:21:54 crc kubenswrapper[4867]: I0126 11:21:54.074130 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"a8dbe61a-4c85-4fa5-beb6-a84433a2d2ac","Type":"ContainerDied","Data":"446f351a8e236a33f0a1a9804911377dbb9a56e6b7164a054f7bf5623b0d1c51"} Jan 26 11:21:54 crc kubenswrapper[4867]: I0126 11:21:54.074161 4867 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="446f351a8e236a33f0a1a9804911377dbb9a56e6b7164a054f7bf5623b0d1c51" Jan 26 11:21:54 crc kubenswrapper[4867]: I0126 11:21:54.076491 4867 status_manager.go:851] "Failed to get status for pod" podUID="aeb3191f-7e7a-4d94-b913-4f78b379f3e9" pod="openshift-marketplace/community-operators-gcljn" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-gcljn\": dial tcp 38.102.83.115:6443: connect: connection refused" Jan 26 11:21:54 crc kubenswrapper[4867]: I0126 11:21:54.081681 4867 status_manager.go:851] "Failed to get status for pod" podUID="331bacc3-9595-492a-9e20-ef8007ccc10a" 
pod="openshift-marketplace/redhat-operators-h6sjf" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-h6sjf\": dial tcp 38.102.83.115:6443: connect: connection refused" Jan 26 11:21:54 crc kubenswrapper[4867]: I0126 11:21:54.082324 4867 status_manager.go:851] "Failed to get status for pod" podUID="a8dbe61a-4c85-4fa5-beb6-a84433a2d2ac" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.115:6443: connect: connection refused" Jan 26 11:21:54 crc kubenswrapper[4867]: I0126 11:21:54.082782 4867 status_manager.go:851] "Failed to get status for pod" podUID="8e8b11fb-b146-4307-b94e-515815b10c58" pod="openshift-marketplace/certified-operators-4lhrp" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-4lhrp\": dial tcp 38.102.83.115:6443: connect: connection refused" Jan 26 11:21:54 crc kubenswrapper[4867]: I0126 11:21:54.083083 4867 status_manager.go:851] "Failed to get status for pod" podUID="adb6bffd-3a41-480b-85df-1f3489ce7007" pod="openshift-marketplace/community-operators-s8hx9" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-s8hx9\": dial tcp 38.102.83.115:6443: connect: connection refused" Jan 26 11:21:54 crc kubenswrapper[4867]: I0126 11:21:54.083423 4867 status_manager.go:851] "Failed to get status for pod" podUID="7428579f-3d9c-4910-9e5c-b6694944afce" pod="openshift-marketplace/redhat-marketplace-84kcf" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-84kcf\": dial tcp 38.102.83.115:6443: connect: connection refused" Jan 26 11:21:54 crc kubenswrapper[4867]: I0126 11:21:54.083773 4867 status_manager.go:851] "Failed to get status for pod" podUID="4c3ed719-d8a0-4f47-b0f1-9e635825152a" 
pod="openshift-marketplace/redhat-operators-mbhb4" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-mbhb4\": dial tcp 38.102.83.115:6443: connect: connection refused" Jan 26 11:21:54 crc kubenswrapper[4867]: I0126 11:21:54.084064 4867 status_manager.go:851] "Failed to get status for pod" podUID="19cef52a-3ef4-4b1e-a52e-0ba6e01e49b5" pod="openshift-marketplace/certified-operators-ndd6w" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-ndd6w\": dial tcp 38.102.83.115:6443: connect: connection refused" Jan 26 11:21:54 crc kubenswrapper[4867]: I0126 11:21:54.087494 4867 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.115:6443: connect: connection refused" Jan 26 11:21:54 crc kubenswrapper[4867]: I0126 11:21:54.088826 4867 status_manager.go:851] "Failed to get status for pod" podUID="aeb3191f-7e7a-4d94-b913-4f78b379f3e9" pod="openshift-marketplace/community-operators-gcljn" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-gcljn\": dial tcp 38.102.83.115:6443: connect: connection refused" Jan 26 11:21:54 crc kubenswrapper[4867]: I0126 11:21:54.089132 4867 status_manager.go:851] "Failed to get status for pod" podUID="331bacc3-9595-492a-9e20-ef8007ccc10a" pod="openshift-marketplace/redhat-operators-h6sjf" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-h6sjf\": dial tcp 38.102.83.115:6443: connect: connection refused" Jan 26 11:21:54 crc kubenswrapper[4867]: I0126 11:21:54.089422 4867 status_manager.go:851] "Failed to get status for pod" podUID="a8dbe61a-4c85-4fa5-beb6-a84433a2d2ac" 
pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.115:6443: connect: connection refused" Jan 26 11:21:54 crc kubenswrapper[4867]: I0126 11:21:54.089677 4867 status_manager.go:851] "Failed to get status for pod" podUID="4c3ed719-d8a0-4f47-b0f1-9e635825152a" pod="openshift-marketplace/redhat-operators-mbhb4" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-mbhb4\": dial tcp 38.102.83.115:6443: connect: connection refused" Jan 26 11:21:54 crc kubenswrapper[4867]: I0126 11:21:54.089923 4867 status_manager.go:851] "Failed to get status for pod" podUID="8e8b11fb-b146-4307-b94e-515815b10c58" pod="openshift-marketplace/certified-operators-4lhrp" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-4lhrp\": dial tcp 38.102.83.115:6443: connect: connection refused" Jan 26 11:21:54 crc kubenswrapper[4867]: I0126 11:21:54.090174 4867 status_manager.go:851] "Failed to get status for pod" podUID="adb6bffd-3a41-480b-85df-1f3489ce7007" pod="openshift-marketplace/community-operators-s8hx9" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-s8hx9\": dial tcp 38.102.83.115:6443: connect: connection refused" Jan 26 11:21:54 crc kubenswrapper[4867]: I0126 11:21:54.090414 4867 status_manager.go:851] "Failed to get status for pod" podUID="7428579f-3d9c-4910-9e5c-b6694944afce" pod="openshift-marketplace/redhat-marketplace-84kcf" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-84kcf\": dial tcp 38.102.83.115:6443: connect: connection refused" Jan 26 11:21:54 crc kubenswrapper[4867]: I0126 11:21:54.090652 4867 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" 
pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.115:6443: connect: connection refused" Jan 26 11:21:54 crc kubenswrapper[4867]: I0126 11:21:54.090884 4867 status_manager.go:851] "Failed to get status for pod" podUID="19cef52a-3ef4-4b1e-a52e-0ba6e01e49b5" pod="openshift-marketplace/certified-operators-ndd6w" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-ndd6w\": dial tcp 38.102.83.115:6443: connect: connection refused" Jan 26 11:21:54 crc kubenswrapper[4867]: I0126 11:21:54.091151 4867 status_manager.go:851] "Failed to get status for pod" podUID="aeb3191f-7e7a-4d94-b913-4f78b379f3e9" pod="openshift-marketplace/community-operators-gcljn" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-gcljn\": dial tcp 38.102.83.115:6443: connect: connection refused" Jan 26 11:21:54 crc kubenswrapper[4867]: I0126 11:21:54.091436 4867 status_manager.go:851] "Failed to get status for pod" podUID="331bacc3-9595-492a-9e20-ef8007ccc10a" pod="openshift-marketplace/redhat-operators-h6sjf" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-h6sjf\": dial tcp 38.102.83.115:6443: connect: connection refused" Jan 26 11:21:54 crc kubenswrapper[4867]: I0126 11:21:54.091689 4867 status_manager.go:851] "Failed to get status for pod" podUID="a8dbe61a-4c85-4fa5-beb6-a84433a2d2ac" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.115:6443: connect: connection refused" Jan 26 11:21:54 crc kubenswrapper[4867]: I0126 11:21:54.091966 4867 status_manager.go:851] "Failed to get status for pod" podUID="8e8b11fb-b146-4307-b94e-515815b10c58" 
pod="openshift-marketplace/certified-operators-4lhrp" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-4lhrp\": dial tcp 38.102.83.115:6443: connect: connection refused" Jan 26 11:21:54 crc kubenswrapper[4867]: I0126 11:21:54.092232 4867 status_manager.go:851] "Failed to get status for pod" podUID="adb6bffd-3a41-480b-85df-1f3489ce7007" pod="openshift-marketplace/community-operators-s8hx9" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-s8hx9\": dial tcp 38.102.83.115:6443: connect: connection refused" Jan 26 11:21:54 crc kubenswrapper[4867]: I0126 11:21:54.092518 4867 status_manager.go:851] "Failed to get status for pod" podUID="7428579f-3d9c-4910-9e5c-b6694944afce" pod="openshift-marketplace/redhat-marketplace-84kcf" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-84kcf\": dial tcp 38.102.83.115:6443: connect: connection refused" Jan 26 11:21:54 crc kubenswrapper[4867]: I0126 11:21:54.092764 4867 status_manager.go:851] "Failed to get status for pod" podUID="4c3ed719-d8a0-4f47-b0f1-9e635825152a" pod="openshift-marketplace/redhat-operators-mbhb4" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-mbhb4\": dial tcp 38.102.83.115:6443: connect: connection refused" Jan 26 11:21:54 crc kubenswrapper[4867]: I0126 11:21:54.093015 4867 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.115:6443: connect: connection refused" Jan 26 11:21:54 crc kubenswrapper[4867]: I0126 11:21:54.093290 4867 status_manager.go:851] "Failed to get status for pod" podUID="19cef52a-3ef4-4b1e-a52e-0ba6e01e49b5" 
pod="openshift-marketplace/certified-operators-ndd6w" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-ndd6w\": dial tcp 38.102.83.115:6443: connect: connection refused" Jan 26 11:21:55 crc kubenswrapper[4867]: I0126 11:21:55.124977 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4lhrp" event={"ID":"8e8b11fb-b146-4307-b94e-515815b10c58","Type":"ContainerStarted","Data":"9bec023aae54649bd5e6972db3f4b186a23c0aa652fffdc1741d440ab7ca3bff"} Jan 26 11:21:55 crc kubenswrapper[4867]: I0126 11:21:55.127017 4867 status_manager.go:851] "Failed to get status for pod" podUID="19cef52a-3ef4-4b1e-a52e-0ba6e01e49b5" pod="openshift-marketplace/certified-operators-ndd6w" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-ndd6w\": dial tcp 38.102.83.115:6443: connect: connection refused" Jan 26 11:21:55 crc kubenswrapper[4867]: I0126 11:21:55.127601 4867 status_manager.go:851] "Failed to get status for pod" podUID="aeb3191f-7e7a-4d94-b913-4f78b379f3e9" pod="openshift-marketplace/community-operators-gcljn" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-gcljn\": dial tcp 38.102.83.115:6443: connect: connection refused" Jan 26 11:21:55 crc kubenswrapper[4867]: I0126 11:21:55.128340 4867 status_manager.go:851] "Failed to get status for pod" podUID="331bacc3-9595-492a-9e20-ef8007ccc10a" pod="openshift-marketplace/redhat-operators-h6sjf" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-h6sjf\": dial tcp 38.102.83.115:6443: connect: connection refused" Jan 26 11:21:55 crc kubenswrapper[4867]: I0126 11:21:55.128705 4867 status_manager.go:851] "Failed to get status for pod" podUID="a8dbe61a-4c85-4fa5-beb6-a84433a2d2ac" pod="openshift-kube-apiserver/installer-9-crc" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.115:6443: connect: connection refused" Jan 26 11:21:55 crc kubenswrapper[4867]: I0126 11:21:55.128908 4867 status_manager.go:851] "Failed to get status for pod" podUID="8e8b11fb-b146-4307-b94e-515815b10c58" pod="openshift-marketplace/certified-operators-4lhrp" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-4lhrp\": dial tcp 38.102.83.115:6443: connect: connection refused" Jan 26 11:21:55 crc kubenswrapper[4867]: I0126 11:21:55.129171 4867 status_manager.go:851] "Failed to get status for pod" podUID="adb6bffd-3a41-480b-85df-1f3489ce7007" pod="openshift-marketplace/community-operators-s8hx9" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-s8hx9\": dial tcp 38.102.83.115:6443: connect: connection refused" Jan 26 11:21:55 crc kubenswrapper[4867]: I0126 11:21:55.129294 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-h6sjf" event={"ID":"331bacc3-9595-492a-9e20-ef8007ccc10a","Type":"ContainerStarted","Data":"74887b127a16edc12c7d905b037c9331a1fafacfa6bc3f64f8564e887069afa7"} Jan 26 11:21:55 crc kubenswrapper[4867]: I0126 11:21:55.129454 4867 status_manager.go:851] "Failed to get status for pod" podUID="7428579f-3d9c-4910-9e5c-b6694944afce" pod="openshift-marketplace/redhat-marketplace-84kcf" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-84kcf\": dial tcp 38.102.83.115:6443: connect: connection refused" Jan 26 11:21:55 crc kubenswrapper[4867]: I0126 11:21:55.129680 4867 status_manager.go:851] "Failed to get status for pod" podUID="4c3ed719-d8a0-4f47-b0f1-9e635825152a" pod="openshift-marketplace/redhat-operators-mbhb4" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-mbhb4\": dial tcp 38.102.83.115:6443: connect: connection refused" Jan 26 11:21:55 crc kubenswrapper[4867]: I0126 11:21:55.130004 4867 status_manager.go:851] "Failed to get status for pod" podUID="a8dbe61a-4c85-4fa5-beb6-a84433a2d2ac" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.115:6443: connect: connection refused" Jan 26 11:21:55 crc kubenswrapper[4867]: I0126 11:21:55.130214 4867 status_manager.go:851] "Failed to get status for pod" podUID="4c3ed719-d8a0-4f47-b0f1-9e635825152a" pod="openshift-marketplace/redhat-operators-mbhb4" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-mbhb4\": dial tcp 38.102.83.115:6443: connect: connection refused" Jan 26 11:21:55 crc kubenswrapper[4867]: I0126 11:21:55.130548 4867 status_manager.go:851] "Failed to get status for pod" podUID="8e8b11fb-b146-4307-b94e-515815b10c58" pod="openshift-marketplace/certified-operators-4lhrp" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-4lhrp\": dial tcp 38.102.83.115:6443: connect: connection refused" Jan 26 11:21:55 crc kubenswrapper[4867]: I0126 11:21:55.130799 4867 status_manager.go:851] "Failed to get status for pod" podUID="adb6bffd-3a41-480b-85df-1f3489ce7007" pod="openshift-marketplace/community-operators-s8hx9" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-s8hx9\": dial tcp 38.102.83.115:6443: connect: connection refused" Jan 26 11:21:55 crc kubenswrapper[4867]: I0126 11:21:55.131168 4867 status_manager.go:851] "Failed to get status for pod" podUID="7428579f-3d9c-4910-9e5c-b6694944afce" pod="openshift-marketplace/redhat-marketplace-84kcf" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-84kcf\": dial tcp 38.102.83.115:6443: connect: connection refused" Jan 26 11:21:55 crc kubenswrapper[4867]: I0126 11:21:55.131643 4867 status_manager.go:851] "Failed to get status for pod" podUID="19cef52a-3ef4-4b1e-a52e-0ba6e01e49b5" pod="openshift-marketplace/certified-operators-ndd6w" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-ndd6w\": dial tcp 38.102.83.115:6443: connect: connection refused" Jan 26 11:21:55 crc kubenswrapper[4867]: I0126 11:21:55.132052 4867 status_manager.go:851] "Failed to get status for pod" podUID="aeb3191f-7e7a-4d94-b913-4f78b379f3e9" pod="openshift-marketplace/community-operators-gcljn" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-gcljn\": dial tcp 38.102.83.115:6443: connect: connection refused" Jan 26 11:21:55 crc kubenswrapper[4867]: I0126 11:21:55.132725 4867 status_manager.go:851] "Failed to get status for pod" podUID="331bacc3-9595-492a-9e20-ef8007ccc10a" pod="openshift-marketplace/redhat-operators-h6sjf" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-h6sjf\": dial tcp 38.102.83.115:6443: connect: connection refused" Jan 26 11:21:55 crc kubenswrapper[4867]: I0126 11:21:55.134050 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ndd6w" event={"ID":"19cef52a-3ef4-4b1e-a52e-0ba6e01e49b5","Type":"ContainerStarted","Data":"b2fcdc242388c72a28f0f7016a0fe723334bb302ee00b41adde79be8d3f0e45f"} Jan 26 11:21:55 crc kubenswrapper[4867]: I0126 11:21:55.134582 4867 status_manager.go:851] "Failed to get status for pod" podUID="aeb3191f-7e7a-4d94-b913-4f78b379f3e9" pod="openshift-marketplace/community-operators-gcljn" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-gcljn\": dial tcp 38.102.83.115:6443: connect: connection refused" Jan 26 11:21:55 crc kubenswrapper[4867]: I0126 11:21:55.134873 4867 status_manager.go:851] "Failed to get status for pod" podUID="331bacc3-9595-492a-9e20-ef8007ccc10a" pod="openshift-marketplace/redhat-operators-h6sjf" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-h6sjf\": dial tcp 38.102.83.115:6443: connect: connection refused" Jan 26 11:21:55 crc kubenswrapper[4867]: I0126 11:21:55.135278 4867 status_manager.go:851] "Failed to get status for pod" podUID="a8dbe61a-4c85-4fa5-beb6-a84433a2d2ac" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.115:6443: connect: connection refused" Jan 26 11:21:55 crc kubenswrapper[4867]: E0126 11:21:55.135307 4867 kubelet.go:1929] "Failed creating a mirror pod for" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 38.102.83.115:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 26 11:21:55 crc kubenswrapper[4867]: I0126 11:21:55.135823 4867 status_manager.go:851] "Failed to get status for pod" podUID="8e8b11fb-b146-4307-b94e-515815b10c58" pod="openshift-marketplace/certified-operators-4lhrp" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-4lhrp\": dial tcp 38.102.83.115:6443: connect: connection refused" Jan 26 11:21:55 crc kubenswrapper[4867]: I0126 11:21:55.137047 4867 status_manager.go:851] "Failed to get status for pod" podUID="adb6bffd-3a41-480b-85df-1f3489ce7007" pod="openshift-marketplace/community-operators-s8hx9" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-s8hx9\": dial tcp 38.102.83.115:6443: connect: connection refused" Jan 26 11:21:55 crc kubenswrapper[4867]: I0126 11:21:55.137534 4867 status_manager.go:851] "Failed to get status for pod" podUID="7428579f-3d9c-4910-9e5c-b6694944afce" pod="openshift-marketplace/redhat-marketplace-84kcf" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-84kcf\": dial tcp 38.102.83.115:6443: connect: connection refused" Jan 26 11:21:55 crc kubenswrapper[4867]: I0126 11:21:55.138028 4867 status_manager.go:851] "Failed to get status for pod" podUID="4c3ed719-d8a0-4f47-b0f1-9e635825152a" pod="openshift-marketplace/redhat-operators-mbhb4" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-mbhb4\": dial tcp 38.102.83.115:6443: connect: connection refused" Jan 26 11:21:55 crc kubenswrapper[4867]: I0126 11:21:55.138483 4867 status_manager.go:851] "Failed to get status for pod" podUID="19cef52a-3ef4-4b1e-a52e-0ba6e01e49b5" pod="openshift-marketplace/certified-operators-ndd6w" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-ndd6w\": dial tcp 38.102.83.115:6443: connect: connection refused" Jan 26 11:21:55 crc kubenswrapper[4867]: E0126 11:21:55.174034 4867 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.115:6443: connect: connection refused" interval="3.2s" Jan 26 11:21:56 crc kubenswrapper[4867]: E0126 11:21:56.164472 4867 event.go:368] "Unable to write event (may retry after sleeping)" err="Patch \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/events/community-operators-s8hx9.188e44067c17ac41\": dial tcp 38.102.83.115:6443: connect: 
connection refused" event="&Event{ObjectMeta:{community-operators-s8hx9.188e44067c17ac41 openshift-marketplace 29346 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-marketplace,Name:community-operators-s8hx9,UID:adb6bffd-3a41-480b-85df-1f3489ce7007,APIVersion:v1,ResourceVersion:28283,FieldPath:spec.containers{registry-server},},Reason:Unhealthy,Message:Readiness probe errored: rpc error: code = NotFound desc = container is not created or running: checking if PID of bd843faa60796283e3e245a3a24193ef4841164381821979fa9ae2ba338d103e is running failed: container process not found,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-26 11:21:38 +0000 UTC,LastTimestamp:2026-01-26 11:21:48.245912558 +0000 UTC m=+257.944487478,Count:2,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 26 11:21:57 crc kubenswrapper[4867]: I0126 11:21:57.539029 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-ndd6w" Jan 26 11:21:57 crc kubenswrapper[4867]: I0126 11:21:57.539117 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-ndd6w" Jan 26 11:21:57 crc kubenswrapper[4867]: I0126 11:21:57.596871 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-ndd6w" Jan 26 11:21:57 crc kubenswrapper[4867]: I0126 11:21:57.597679 4867 status_manager.go:851] "Failed to get status for pod" podUID="a8dbe61a-4c85-4fa5-beb6-a84433a2d2ac" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.115:6443: connect: connection refused" Jan 26 11:21:57 crc kubenswrapper[4867]: I0126 11:21:57.598094 4867 
status_manager.go:851] "Failed to get status for pod" podUID="8e8b11fb-b146-4307-b94e-515815b10c58" pod="openshift-marketplace/certified-operators-4lhrp" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-4lhrp\": dial tcp 38.102.83.115:6443: connect: connection refused" Jan 26 11:21:57 crc kubenswrapper[4867]: I0126 11:21:57.598583 4867 status_manager.go:851] "Failed to get status for pod" podUID="adb6bffd-3a41-480b-85df-1f3489ce7007" pod="openshift-marketplace/community-operators-s8hx9" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-s8hx9\": dial tcp 38.102.83.115:6443: connect: connection refused" Jan 26 11:21:57 crc kubenswrapper[4867]: I0126 11:21:57.598969 4867 status_manager.go:851] "Failed to get status for pod" podUID="7428579f-3d9c-4910-9e5c-b6694944afce" pod="openshift-marketplace/redhat-marketplace-84kcf" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-84kcf\": dial tcp 38.102.83.115:6443: connect: connection refused" Jan 26 11:21:57 crc kubenswrapper[4867]: I0126 11:21:57.599473 4867 status_manager.go:851] "Failed to get status for pod" podUID="4c3ed719-d8a0-4f47-b0f1-9e635825152a" pod="openshift-marketplace/redhat-operators-mbhb4" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-mbhb4\": dial tcp 38.102.83.115:6443: connect: connection refused" Jan 26 11:21:57 crc kubenswrapper[4867]: I0126 11:21:57.599931 4867 status_manager.go:851] "Failed to get status for pod" podUID="19cef52a-3ef4-4b1e-a52e-0ba6e01e49b5" pod="openshift-marketplace/certified-operators-ndd6w" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-ndd6w\": dial tcp 38.102.83.115:6443: connect: connection refused" Jan 26 11:21:57 crc kubenswrapper[4867]: I0126 11:21:57.600408 4867 
status_manager.go:851] "Failed to get status for pod" podUID="aeb3191f-7e7a-4d94-b913-4f78b379f3e9" pod="openshift-marketplace/community-operators-gcljn" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-gcljn\": dial tcp 38.102.83.115:6443: connect: connection refused" Jan 26 11:21:57 crc kubenswrapper[4867]: I0126 11:21:57.600737 4867 status_manager.go:851] "Failed to get status for pod" podUID="331bacc3-9595-492a-9e20-ef8007ccc10a" pod="openshift-marketplace/redhat-operators-h6sjf" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-h6sjf\": dial tcp 38.102.83.115:6443: connect: connection refused" Jan 26 11:21:57 crc kubenswrapper[4867]: I0126 11:21:57.723921 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-gcljn" Jan 26 11:21:57 crc kubenswrapper[4867]: I0126 11:21:57.724073 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-gcljn" Jan 26 11:21:57 crc kubenswrapper[4867]: I0126 11:21:57.776776 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-gcljn" Jan 26 11:21:57 crc kubenswrapper[4867]: I0126 11:21:57.777832 4867 status_manager.go:851] "Failed to get status for pod" podUID="aeb3191f-7e7a-4d94-b913-4f78b379f3e9" pod="openshift-marketplace/community-operators-gcljn" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-gcljn\": dial tcp 38.102.83.115:6443: connect: connection refused" Jan 26 11:21:57 crc kubenswrapper[4867]: I0126 11:21:57.778322 4867 status_manager.go:851] "Failed to get status for pod" podUID="331bacc3-9595-492a-9e20-ef8007ccc10a" pod="openshift-marketplace/redhat-operators-h6sjf" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-h6sjf\": dial tcp 38.102.83.115:6443: connect: connection refused" Jan 26 11:21:57 crc kubenswrapper[4867]: I0126 11:21:57.778562 4867 status_manager.go:851] "Failed to get status for pod" podUID="a8dbe61a-4c85-4fa5-beb6-a84433a2d2ac" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.115:6443: connect: connection refused" Jan 26 11:21:57 crc kubenswrapper[4867]: I0126 11:21:57.778788 4867 status_manager.go:851] "Failed to get status for pod" podUID="8e8b11fb-b146-4307-b94e-515815b10c58" pod="openshift-marketplace/certified-operators-4lhrp" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-4lhrp\": dial tcp 38.102.83.115:6443: connect: connection refused" Jan 26 11:21:57 crc kubenswrapper[4867]: I0126 11:21:57.779019 4867 status_manager.go:851] "Failed to get status for pod" podUID="adb6bffd-3a41-480b-85df-1f3489ce7007" pod="openshift-marketplace/community-operators-s8hx9" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-s8hx9\": dial tcp 38.102.83.115:6443: connect: connection refused" Jan 26 11:21:57 crc kubenswrapper[4867]: I0126 11:21:57.779502 4867 status_manager.go:851] "Failed to get status for pod" podUID="7428579f-3d9c-4910-9e5c-b6694944afce" pod="openshift-marketplace/redhat-marketplace-84kcf" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-84kcf\": dial tcp 38.102.83.115:6443: connect: connection refused" Jan 26 11:21:57 crc kubenswrapper[4867]: I0126 11:21:57.780279 4867 status_manager.go:851] "Failed to get status for pod" podUID="4c3ed719-d8a0-4f47-b0f1-9e635825152a" pod="openshift-marketplace/redhat-operators-mbhb4" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-mbhb4\": dial tcp 38.102.83.115:6443: connect: connection refused" Jan 26 11:21:57 crc kubenswrapper[4867]: I0126 11:21:57.780611 4867 status_manager.go:851] "Failed to get status for pod" podUID="19cef52a-3ef4-4b1e-a52e-0ba6e01e49b5" pod="openshift-marketplace/certified-operators-ndd6w" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-ndd6w\": dial tcp 38.102.83.115:6443: connect: connection refused" Jan 26 11:21:57 crc kubenswrapper[4867]: I0126 11:21:57.960585 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-4lhrp" Jan 26 11:21:57 crc kubenswrapper[4867]: I0126 11:21:57.960679 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-4lhrp" Jan 26 11:21:58 crc kubenswrapper[4867]: I0126 11:21:58.020537 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-4lhrp" Jan 26 11:21:58 crc kubenswrapper[4867]: I0126 11:21:58.021285 4867 status_manager.go:851] "Failed to get status for pod" podUID="a8dbe61a-4c85-4fa5-beb6-a84433a2d2ac" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.115:6443: connect: connection refused" Jan 26 11:21:58 crc kubenswrapper[4867]: I0126 11:21:58.021685 4867 status_manager.go:851] "Failed to get status for pod" podUID="adb6bffd-3a41-480b-85df-1f3489ce7007" pod="openshift-marketplace/community-operators-s8hx9" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-s8hx9\": dial tcp 38.102.83.115:6443: connect: connection refused" Jan 26 11:21:58 crc kubenswrapper[4867]: I0126 11:21:58.022184 4867 
status_manager.go:851] "Failed to get status for pod" podUID="7428579f-3d9c-4910-9e5c-b6694944afce" pod="openshift-marketplace/redhat-marketplace-84kcf" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-84kcf\": dial tcp 38.102.83.115:6443: connect: connection refused" Jan 26 11:21:58 crc kubenswrapper[4867]: I0126 11:21:58.022923 4867 status_manager.go:851] "Failed to get status for pod" podUID="4c3ed719-d8a0-4f47-b0f1-9e635825152a" pod="openshift-marketplace/redhat-operators-mbhb4" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-mbhb4\": dial tcp 38.102.83.115:6443: connect: connection refused" Jan 26 11:21:58 crc kubenswrapper[4867]: I0126 11:21:58.023293 4867 status_manager.go:851] "Failed to get status for pod" podUID="8e8b11fb-b146-4307-b94e-515815b10c58" pod="openshift-marketplace/certified-operators-4lhrp" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-4lhrp\": dial tcp 38.102.83.115:6443: connect: connection refused" Jan 26 11:21:58 crc kubenswrapper[4867]: I0126 11:21:58.023689 4867 status_manager.go:851] "Failed to get status for pod" podUID="19cef52a-3ef4-4b1e-a52e-0ba6e01e49b5" pod="openshift-marketplace/certified-operators-ndd6w" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-ndd6w\": dial tcp 38.102.83.115:6443: connect: connection refused" Jan 26 11:21:58 crc kubenswrapper[4867]: I0126 11:21:58.024023 4867 status_manager.go:851] "Failed to get status for pod" podUID="aeb3191f-7e7a-4d94-b913-4f78b379f3e9" pod="openshift-marketplace/community-operators-gcljn" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-gcljn\": dial tcp 38.102.83.115:6443: connect: connection refused" Jan 26 11:21:58 crc kubenswrapper[4867]: I0126 11:21:58.024386 4867 
status_manager.go:851] "Failed to get status for pod" podUID="331bacc3-9595-492a-9e20-ef8007ccc10a" pod="openshift-marketplace/redhat-operators-h6sjf" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-h6sjf\": dial tcp 38.102.83.115:6443: connect: connection refused" Jan 26 11:21:58 crc kubenswrapper[4867]: I0126 11:21:58.190952 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-gcljn" Jan 26 11:21:58 crc kubenswrapper[4867]: I0126 11:21:58.191795 4867 status_manager.go:851] "Failed to get status for pod" podUID="aeb3191f-7e7a-4d94-b913-4f78b379f3e9" pod="openshift-marketplace/community-operators-gcljn" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-gcljn\": dial tcp 38.102.83.115:6443: connect: connection refused" Jan 26 11:21:58 crc kubenswrapper[4867]: I0126 11:21:58.192398 4867 status_manager.go:851] "Failed to get status for pod" podUID="331bacc3-9595-492a-9e20-ef8007ccc10a" pod="openshift-marketplace/redhat-operators-h6sjf" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-h6sjf\": dial tcp 38.102.83.115:6443: connect: connection refused" Jan 26 11:21:58 crc kubenswrapper[4867]: I0126 11:21:58.192818 4867 status_manager.go:851] "Failed to get status for pod" podUID="a8dbe61a-4c85-4fa5-beb6-a84433a2d2ac" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.115:6443: connect: connection refused" Jan 26 11:21:58 crc kubenswrapper[4867]: I0126 11:21:58.193096 4867 status_manager.go:851] "Failed to get status for pod" podUID="8e8b11fb-b146-4307-b94e-515815b10c58" pod="openshift-marketplace/certified-operators-4lhrp" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-4lhrp\": dial tcp 38.102.83.115:6443: connect: connection refused" Jan 26 11:21:58 crc kubenswrapper[4867]: I0126 11:21:58.193457 4867 status_manager.go:851] "Failed to get status for pod" podUID="adb6bffd-3a41-480b-85df-1f3489ce7007" pod="openshift-marketplace/community-operators-s8hx9" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-s8hx9\": dial tcp 38.102.83.115:6443: connect: connection refused" Jan 26 11:21:58 crc kubenswrapper[4867]: I0126 11:21:58.193858 4867 status_manager.go:851] "Failed to get status for pod" podUID="7428579f-3d9c-4910-9e5c-b6694944afce" pod="openshift-marketplace/redhat-marketplace-84kcf" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-84kcf\": dial tcp 38.102.83.115:6443: connect: connection refused" Jan 26 11:21:58 crc kubenswrapper[4867]: I0126 11:21:58.194109 4867 status_manager.go:851] "Failed to get status for pod" podUID="4c3ed719-d8a0-4f47-b0f1-9e635825152a" pod="openshift-marketplace/redhat-operators-mbhb4" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-mbhb4\": dial tcp 38.102.83.115:6443: connect: connection refused" Jan 26 11:21:58 crc kubenswrapper[4867]: I0126 11:21:58.194482 4867 status_manager.go:851] "Failed to get status for pod" podUID="19cef52a-3ef4-4b1e-a52e-0ba6e01e49b5" pod="openshift-marketplace/certified-operators-ndd6w" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-ndd6w\": dial tcp 38.102.83.115:6443: connect: connection refused" Jan 26 11:21:58 crc kubenswrapper[4867]: E0126 11:21:58.376078 4867 controller.go:145] "Failed to ensure lease exists, will retry" err="Get 
\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.115:6443: connect: connection refused" interval="6.4s" Jan 26 11:21:59 crc kubenswrapper[4867]: E0126 11:21:59.018898 4867 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T11:21:59Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T11:21:59Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T11:21:59Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T11:21:59Z\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:3295ee1e384bd13d7f93a565d0e83b4cb096da43c673235ced6ac2c39d64dfa1\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:91b55f2f378a9a1fbbda6c2423a0a3bc0c66e0dd45dee584db70782d1b7ba863\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1671873254},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":12328
39934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0c5253d77c75131b98d95f8ca879af46db12844432c90a323f3505f270bd272c\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8e3686ee639460ef503498256573de69b2f292ac3ad86c9041ca8a7c7fddec01\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1203150054},{\\\"names\\\":[],\\\"sizeBytes\\\":1183869170},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:169566a3a0bc4f9ca64256bb682df6ad4e2cfc5740b5338370c8202d43621680\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:5e18cee5ade3fc0cec09a5ee469d5840c7f50ec0cda6b90150394ad661ac5380\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1179648738},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688a
a3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release
-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd
8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":47668137
3},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792}]}}\" for node \"crc\": Patch \"https://api-int.crc.testing:6443/api/v1/nodes/crc/status?timeout=10s\": dial tcp 38.102.83.115:6443: connect: connection refused" Jan 26 11:21:59 crc kubenswrapper[4867]: E0126 11:21:59.020797 4867 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.115:6443: connect: connection refused" Jan 26 11:21:59 crc kubenswrapper[4867]: E0126 11:21:59.021362 4867 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.115:6443: connect: connection refused" Jan 26 11:21:59 crc kubenswrapper[4867]: E0126 11:21:59.021762 4867 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.115:6443: connect: connection refused" Jan 26 11:21:59 crc kubenswrapper[4867]: E0126 11:21:59.022088 4867 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.115:6443: connect: connection refused" Jan 26 11:21:59 crc kubenswrapper[4867]: E0126 11:21:59.022114 4867 kubelet_node_status.go:572] "Unable to update node 
status" err="update node status exceeds retry count" Jan 26 11:22:00 crc kubenswrapper[4867]: I0126 11:22:00.171182 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Jan 26 11:22:00 crc kubenswrapper[4867]: I0126 11:22:00.171314 4867 generic.go:334] "Generic (PLEG): container finished" podID="f614b9022728cf315e60c057852e563e" containerID="7ef0db1149c70e41b7f44d00d43f883d14fcf635d6f34462dd37c8a550a15a4a" exitCode=1 Jan 26 11:22:00 crc kubenswrapper[4867]: I0126 11:22:00.171415 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerDied","Data":"7ef0db1149c70e41b7f44d00d43f883d14fcf635d6f34462dd37c8a550a15a4a"} Jan 26 11:22:00 crc kubenswrapper[4867]: I0126 11:22:00.172056 4867 scope.go:117] "RemoveContainer" containerID="7ef0db1149c70e41b7f44d00d43f883d14fcf635d6f34462dd37c8a550a15a4a" Jan 26 11:22:00 crc kubenswrapper[4867]: I0126 11:22:00.173639 4867 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.115:6443: connect: connection refused" Jan 26 11:22:00 crc kubenswrapper[4867]: I0126 11:22:00.174384 4867 status_manager.go:851] "Failed to get status for pod" podUID="aeb3191f-7e7a-4d94-b913-4f78b379f3e9" pod="openshift-marketplace/community-operators-gcljn" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-gcljn\": dial tcp 38.102.83.115:6443: connect: connection refused" Jan 26 11:22:00 crc kubenswrapper[4867]: I0126 11:22:00.174818 4867 status_manager.go:851] "Failed to get 
status for pod" podUID="331bacc3-9595-492a-9e20-ef8007ccc10a" pod="openshift-marketplace/redhat-operators-h6sjf" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-h6sjf\": dial tcp 38.102.83.115:6443: connect: connection refused" Jan 26 11:22:00 crc kubenswrapper[4867]: I0126 11:22:00.175439 4867 status_manager.go:851] "Failed to get status for pod" podUID="a8dbe61a-4c85-4fa5-beb6-a84433a2d2ac" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.115:6443: connect: connection refused" Jan 26 11:22:00 crc kubenswrapper[4867]: I0126 11:22:00.175671 4867 status_manager.go:851] "Failed to get status for pod" podUID="8e8b11fb-b146-4307-b94e-515815b10c58" pod="openshift-marketplace/certified-operators-4lhrp" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-4lhrp\": dial tcp 38.102.83.115:6443: connect: connection refused" Jan 26 11:22:00 crc kubenswrapper[4867]: I0126 11:22:00.175886 4867 status_manager.go:851] "Failed to get status for pod" podUID="adb6bffd-3a41-480b-85df-1f3489ce7007" pod="openshift-marketplace/community-operators-s8hx9" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-s8hx9\": dial tcp 38.102.83.115:6443: connect: connection refused" Jan 26 11:22:00 crc kubenswrapper[4867]: I0126 11:22:00.176051 4867 status_manager.go:851] "Failed to get status for pod" podUID="7428579f-3d9c-4910-9e5c-b6694944afce" pod="openshift-marketplace/redhat-marketplace-84kcf" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-84kcf\": dial tcp 38.102.83.115:6443: connect: connection refused" Jan 26 11:22:00 crc kubenswrapper[4867]: I0126 11:22:00.176242 4867 status_manager.go:851] "Failed to get status for pod" 
podUID="4c3ed719-d8a0-4f47-b0f1-9e635825152a" pod="openshift-marketplace/redhat-operators-mbhb4" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-mbhb4\": dial tcp 38.102.83.115:6443: connect: connection refused" Jan 26 11:22:00 crc kubenswrapper[4867]: I0126 11:22:00.176407 4867 status_manager.go:851] "Failed to get status for pod" podUID="19cef52a-3ef4-4b1e-a52e-0ba6e01e49b5" pod="openshift-marketplace/certified-operators-ndd6w" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-ndd6w\": dial tcp 38.102.83.115:6443: connect: connection refused" Jan 26 11:22:00 crc kubenswrapper[4867]: I0126 11:22:00.563602 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 11:22:00 crc kubenswrapper[4867]: I0126 11:22:00.567345 4867 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.115:6443: connect: connection refused" Jan 26 11:22:00 crc kubenswrapper[4867]: I0126 11:22:00.567767 4867 status_manager.go:851] "Failed to get status for pod" podUID="aeb3191f-7e7a-4d94-b913-4f78b379f3e9" pod="openshift-marketplace/community-operators-gcljn" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-gcljn\": dial tcp 38.102.83.115:6443: connect: connection refused" Jan 26 11:22:00 crc kubenswrapper[4867]: I0126 11:22:00.568576 4867 status_manager.go:851] "Failed to get status for pod" podUID="331bacc3-9595-492a-9e20-ef8007ccc10a" pod="openshift-marketplace/redhat-operators-h6sjf" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-h6sjf\": dial tcp 38.102.83.115:6443: connect: connection refused" Jan 26 11:22:00 crc kubenswrapper[4867]: I0126 11:22:00.568955 4867 status_manager.go:851] "Failed to get status for pod" podUID="a8dbe61a-4c85-4fa5-beb6-a84433a2d2ac" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.115:6443: connect: connection refused" Jan 26 11:22:00 crc kubenswrapper[4867]: I0126 11:22:00.569377 4867 status_manager.go:851] "Failed to get status for pod" podUID="8e8b11fb-b146-4307-b94e-515815b10c58" pod="openshift-marketplace/certified-operators-4lhrp" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-4lhrp\": dial tcp 38.102.83.115:6443: connect: connection refused" Jan 26 11:22:00 crc kubenswrapper[4867]: I0126 11:22:00.569672 4867 status_manager.go:851] "Failed to get status for pod" podUID="adb6bffd-3a41-480b-85df-1f3489ce7007" pod="openshift-marketplace/community-operators-s8hx9" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-s8hx9\": dial tcp 38.102.83.115:6443: connect: connection refused" Jan 26 11:22:00 crc kubenswrapper[4867]: I0126 11:22:00.570000 4867 status_manager.go:851] "Failed to get status for pod" podUID="7428579f-3d9c-4910-9e5c-b6694944afce" pod="openshift-marketplace/redhat-marketplace-84kcf" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-84kcf\": dial tcp 38.102.83.115:6443: connect: connection refused" Jan 26 11:22:00 crc kubenswrapper[4867]: I0126 11:22:00.570287 4867 status_manager.go:851] "Failed to get status for pod" podUID="4c3ed719-d8a0-4f47-b0f1-9e635825152a" pod="openshift-marketplace/redhat-operators-mbhb4" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-mbhb4\": dial tcp 38.102.83.115:6443: connect: connection refused" Jan 26 11:22:00 crc kubenswrapper[4867]: I0126 11:22:00.570567 4867 status_manager.go:851] "Failed to get status for pod" podUID="19cef52a-3ef4-4b1e-a52e-0ba6e01e49b5" pod="openshift-marketplace/certified-operators-ndd6w" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-ndd6w\": dial tcp 38.102.83.115:6443: connect: connection refused" Jan 26 11:22:00 crc kubenswrapper[4867]: I0126 11:22:00.570974 4867 status_manager.go:851] "Failed to get status for pod" podUID="a8dbe61a-4c85-4fa5-beb6-a84433a2d2ac" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.115:6443: connect: connection refused" Jan 26 11:22:00 crc kubenswrapper[4867]: I0126 11:22:00.571256 4867 status_manager.go:851] "Failed to get status for pod" podUID="8e8b11fb-b146-4307-b94e-515815b10c58" pod="openshift-marketplace/certified-operators-4lhrp" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-4lhrp\": dial tcp 38.102.83.115:6443: connect: connection refused" Jan 26 11:22:00 crc kubenswrapper[4867]: I0126 11:22:00.571577 4867 status_manager.go:851] "Failed to get status for pod" podUID="adb6bffd-3a41-480b-85df-1f3489ce7007" pod="openshift-marketplace/community-operators-s8hx9" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-s8hx9\": dial tcp 38.102.83.115:6443: connect: connection refused" Jan 26 11:22:00 crc kubenswrapper[4867]: I0126 11:22:00.571941 4867 status_manager.go:851] "Failed to get status for pod" podUID="7428579f-3d9c-4910-9e5c-b6694944afce" pod="openshift-marketplace/redhat-marketplace-84kcf" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-84kcf\": dial tcp 38.102.83.115:6443: connect: connection refused" Jan 26 11:22:00 crc kubenswrapper[4867]: I0126 11:22:00.572175 4867 status_manager.go:851] "Failed to get status for pod" podUID="4c3ed719-d8a0-4f47-b0f1-9e635825152a" pod="openshift-marketplace/redhat-operators-mbhb4" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-mbhb4\": dial tcp 38.102.83.115:6443: connect: connection refused" Jan 26 11:22:00 crc kubenswrapper[4867]: I0126 11:22:00.572480 4867 status_manager.go:851] "Failed to get status for pod" podUID="19cef52a-3ef4-4b1e-a52e-0ba6e01e49b5" pod="openshift-marketplace/certified-operators-ndd6w" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-ndd6w\": dial tcp 38.102.83.115:6443: connect: connection refused" Jan 26 11:22:00 crc kubenswrapper[4867]: I0126 11:22:00.572771 4867 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.115:6443: connect: connection refused" Jan 26 11:22:00 crc kubenswrapper[4867]: I0126 11:22:00.573121 4867 status_manager.go:851] "Failed to get status for pod" podUID="aeb3191f-7e7a-4d94-b913-4f78b379f3e9" pod="openshift-marketplace/community-operators-gcljn" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-gcljn\": dial tcp 38.102.83.115:6443: connect: connection refused" Jan 26 11:22:00 crc kubenswrapper[4867]: I0126 11:22:00.573336 4867 status_manager.go:851] "Failed to get status for pod" podUID="331bacc3-9595-492a-9e20-ef8007ccc10a" pod="openshift-marketplace/redhat-operators-h6sjf" 
err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-h6sjf\": dial tcp 38.102.83.115:6443: connect: connection refused" Jan 26 11:22:00 crc kubenswrapper[4867]: I0126 11:22:00.585036 4867 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="e36e94ce-bdbb-4b65-b38a-d591d99ec132" Jan 26 11:22:00 crc kubenswrapper[4867]: I0126 11:22:00.585085 4867 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="e36e94ce-bdbb-4b65-b38a-d591d99ec132" Jan 26 11:22:00 crc kubenswrapper[4867]: E0126 11:22:00.585773 4867 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.115:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 11:22:00 crc kubenswrapper[4867]: I0126 11:22:00.586382 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 11:22:01 crc kubenswrapper[4867]: I0126 11:22:01.095658 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-h6sjf" Jan 26 11:22:01 crc kubenswrapper[4867]: I0126 11:22:01.096166 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-h6sjf" Jan 26 11:22:01 crc kubenswrapper[4867]: I0126 11:22:01.144371 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-h6sjf" Jan 26 11:22:01 crc kubenswrapper[4867]: I0126 11:22:01.145409 4867 status_manager.go:851] "Failed to get status for pod" podUID="a8dbe61a-4c85-4fa5-beb6-a84433a2d2ac" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.115:6443: connect: connection refused" Jan 26 11:22:01 crc kubenswrapper[4867]: I0126 11:22:01.145677 4867 status_manager.go:851] "Failed to get status for pod" podUID="7428579f-3d9c-4910-9e5c-b6694944afce" pod="openshift-marketplace/redhat-marketplace-84kcf" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-84kcf\": dial tcp 38.102.83.115:6443: connect: connection refused" Jan 26 11:22:01 crc kubenswrapper[4867]: I0126 11:22:01.146212 4867 status_manager.go:851] "Failed to get status for pod" podUID="4c3ed719-d8a0-4f47-b0f1-9e635825152a" pod="openshift-marketplace/redhat-operators-mbhb4" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-mbhb4\": dial tcp 38.102.83.115:6443: connect: connection refused" Jan 26 11:22:01 crc kubenswrapper[4867]: I0126 11:22:01.147054 4867 status_manager.go:851] "Failed to get status for pod" podUID="8e8b11fb-b146-4307-b94e-515815b10c58" 
pod="openshift-marketplace/certified-operators-4lhrp" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-4lhrp\": dial tcp 38.102.83.115:6443: connect: connection refused" Jan 26 11:22:01 crc kubenswrapper[4867]: I0126 11:22:01.147482 4867 status_manager.go:851] "Failed to get status for pod" podUID="adb6bffd-3a41-480b-85df-1f3489ce7007" pod="openshift-marketplace/community-operators-s8hx9" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-s8hx9\": dial tcp 38.102.83.115:6443: connect: connection refused" Jan 26 11:22:01 crc kubenswrapper[4867]: I0126 11:22:01.147969 4867 status_manager.go:851] "Failed to get status for pod" podUID="19cef52a-3ef4-4b1e-a52e-0ba6e01e49b5" pod="openshift-marketplace/certified-operators-ndd6w" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-ndd6w\": dial tcp 38.102.83.115:6443: connect: connection refused" Jan 26 11:22:01 crc kubenswrapper[4867]: I0126 11:22:01.148307 4867 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.115:6443: connect: connection refused" Jan 26 11:22:01 crc kubenswrapper[4867]: I0126 11:22:01.148901 4867 status_manager.go:851] "Failed to get status for pod" podUID="aeb3191f-7e7a-4d94-b913-4f78b379f3e9" pod="openshift-marketplace/community-operators-gcljn" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-gcljn\": dial tcp 38.102.83.115:6443: connect: connection refused" Jan 26 11:22:01 crc kubenswrapper[4867]: I0126 11:22:01.149244 4867 status_manager.go:851] "Failed to get status for pod" 
podUID="331bacc3-9595-492a-9e20-ef8007ccc10a" pod="openshift-marketplace/redhat-operators-h6sjf" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-h6sjf\": dial tcp 38.102.83.115:6443: connect: connection refused" Jan 26 11:22:01 crc kubenswrapper[4867]: I0126 11:22:01.180602 4867 generic.go:334] "Generic (PLEG): container finished" podID="71bb4a3aecc4ba5b26c4b7318770ce13" containerID="e612a4cf02f21288dfd6ff4e57ffc15609231950b6ded4a2d47aaf20e09cfd69" exitCode=0 Jan 26 11:22:01 crc kubenswrapper[4867]: I0126 11:22:01.180690 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerDied","Data":"e612a4cf02f21288dfd6ff4e57ffc15609231950b6ded4a2d47aaf20e09cfd69"} Jan 26 11:22:01 crc kubenswrapper[4867]: I0126 11:22:01.180723 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"45f9da8d2554a1ac82f8dab7c072bdbacc452322821edc12805d7ec4c0ef7341"} Jan 26 11:22:01 crc kubenswrapper[4867]: I0126 11:22:01.180957 4867 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="e36e94ce-bdbb-4b65-b38a-d591d99ec132" Jan 26 11:22:01 crc kubenswrapper[4867]: I0126 11:22:01.180970 4867 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="e36e94ce-bdbb-4b65-b38a-d591d99ec132" Jan 26 11:22:01 crc kubenswrapper[4867]: E0126 11:22:01.181396 4867 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.115:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 11:22:01 crc kubenswrapper[4867]: I0126 11:22:01.181399 4867 
status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.115:6443: connect: connection refused" Jan 26 11:22:01 crc kubenswrapper[4867]: I0126 11:22:01.182144 4867 status_manager.go:851] "Failed to get status for pod" podUID="aeb3191f-7e7a-4d94-b913-4f78b379f3e9" pod="openshift-marketplace/community-operators-gcljn" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-gcljn\": dial tcp 38.102.83.115:6443: connect: connection refused" Jan 26 11:22:01 crc kubenswrapper[4867]: I0126 11:22:01.182523 4867 status_manager.go:851] "Failed to get status for pod" podUID="331bacc3-9595-492a-9e20-ef8007ccc10a" pod="openshift-marketplace/redhat-operators-h6sjf" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-h6sjf\": dial tcp 38.102.83.115:6443: connect: connection refused" Jan 26 11:22:01 crc kubenswrapper[4867]: I0126 11:22:01.182873 4867 status_manager.go:851] "Failed to get status for pod" podUID="a8dbe61a-4c85-4fa5-beb6-a84433a2d2ac" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.115:6443: connect: connection refused" Jan 26 11:22:01 crc kubenswrapper[4867]: I0126 11:22:01.183183 4867 status_manager.go:851] "Failed to get status for pod" podUID="8e8b11fb-b146-4307-b94e-515815b10c58" pod="openshift-marketplace/certified-operators-4lhrp" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-4lhrp\": dial tcp 38.102.83.115:6443: connect: connection refused" Jan 26 11:22:01 crc kubenswrapper[4867]: I0126 11:22:01.183519 4867 
status_manager.go:851] "Failed to get status for pod" podUID="adb6bffd-3a41-480b-85df-1f3489ce7007" pod="openshift-marketplace/community-operators-s8hx9" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-s8hx9\": dial tcp 38.102.83.115:6443: connect: connection refused" Jan 26 11:22:01 crc kubenswrapper[4867]: I0126 11:22:01.183858 4867 status_manager.go:851] "Failed to get status for pod" podUID="7428579f-3d9c-4910-9e5c-b6694944afce" pod="openshift-marketplace/redhat-marketplace-84kcf" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-84kcf\": dial tcp 38.102.83.115:6443: connect: connection refused" Jan 26 11:22:01 crc kubenswrapper[4867]: I0126 11:22:01.184139 4867 status_manager.go:851] "Failed to get status for pod" podUID="4c3ed719-d8a0-4f47-b0f1-9e635825152a" pod="openshift-marketplace/redhat-operators-mbhb4" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-mbhb4\": dial tcp 38.102.83.115:6443: connect: connection refused" Jan 26 11:22:01 crc kubenswrapper[4867]: I0126 11:22:01.184451 4867 status_manager.go:851] "Failed to get status for pod" podUID="19cef52a-3ef4-4b1e-a52e-0ba6e01e49b5" pod="openshift-marketplace/certified-operators-ndd6w" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-ndd6w\": dial tcp 38.102.83.115:6443: connect: connection refused" Jan 26 11:22:01 crc kubenswrapper[4867]: I0126 11:22:01.186852 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Jan 26 11:22:01 crc kubenswrapper[4867]: I0126 11:22:01.187298 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" 
event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"3427e0c828d3f6ab5160f9cd39c96b959a73552c11763a9a62ef2376179f448f"} Jan 26 11:22:01 crc kubenswrapper[4867]: I0126 11:22:01.188257 4867 status_manager.go:851] "Failed to get status for pod" podUID="7428579f-3d9c-4910-9e5c-b6694944afce" pod="openshift-marketplace/redhat-marketplace-84kcf" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-84kcf\": dial tcp 38.102.83.115:6443: connect: connection refused" Jan 26 11:22:01 crc kubenswrapper[4867]: I0126 11:22:01.188582 4867 status_manager.go:851] "Failed to get status for pod" podUID="4c3ed719-d8a0-4f47-b0f1-9e635825152a" pod="openshift-marketplace/redhat-operators-mbhb4" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-mbhb4\": dial tcp 38.102.83.115:6443: connect: connection refused" Jan 26 11:22:01 crc kubenswrapper[4867]: I0126 11:22:01.188792 4867 status_manager.go:851] "Failed to get status for pod" podUID="8e8b11fb-b146-4307-b94e-515815b10c58" pod="openshift-marketplace/certified-operators-4lhrp" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-4lhrp\": dial tcp 38.102.83.115:6443: connect: connection refused" Jan 26 11:22:01 crc kubenswrapper[4867]: I0126 11:22:01.189059 4867 status_manager.go:851] "Failed to get status for pod" podUID="adb6bffd-3a41-480b-85df-1f3489ce7007" pod="openshift-marketplace/community-operators-s8hx9" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-s8hx9\": dial tcp 38.102.83.115:6443: connect: connection refused" Jan 26 11:22:01 crc kubenswrapper[4867]: I0126 11:22:01.189370 4867 status_manager.go:851] "Failed to get status for pod" podUID="19cef52a-3ef4-4b1e-a52e-0ba6e01e49b5" pod="openshift-marketplace/certified-operators-ndd6w" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-ndd6w\": dial tcp 38.102.83.115:6443: connect: connection refused" Jan 26 11:22:01 crc kubenswrapper[4867]: I0126 11:22:01.189645 4867 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.115:6443: connect: connection refused" Jan 26 11:22:01 crc kubenswrapper[4867]: I0126 11:22:01.189878 4867 status_manager.go:851] "Failed to get status for pod" podUID="aeb3191f-7e7a-4d94-b913-4f78b379f3e9" pod="openshift-marketplace/community-operators-gcljn" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-gcljn\": dial tcp 38.102.83.115:6443: connect: connection refused" Jan 26 11:22:01 crc kubenswrapper[4867]: I0126 11:22:01.190085 4867 status_manager.go:851] "Failed to get status for pod" podUID="331bacc3-9595-492a-9e20-ef8007ccc10a" pod="openshift-marketplace/redhat-operators-h6sjf" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-h6sjf\": dial tcp 38.102.83.115:6443: connect: connection refused" Jan 26 11:22:01 crc kubenswrapper[4867]: I0126 11:22:01.190295 4867 status_manager.go:851] "Failed to get status for pod" podUID="a8dbe61a-4c85-4fa5-beb6-a84433a2d2ac" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.115:6443: connect: connection refused" Jan 26 11:22:01 crc kubenswrapper[4867]: I0126 11:22:01.229751 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-h6sjf" Jan 26 11:22:01 crc kubenswrapper[4867]: I0126 
11:22:01.230668 4867 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.115:6443: connect: connection refused" Jan 26 11:22:01 crc kubenswrapper[4867]: I0126 11:22:01.231309 4867 status_manager.go:851] "Failed to get status for pod" podUID="aeb3191f-7e7a-4d94-b913-4f78b379f3e9" pod="openshift-marketplace/community-operators-gcljn" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-gcljn\": dial tcp 38.102.83.115:6443: connect: connection refused" Jan 26 11:22:01 crc kubenswrapper[4867]: I0126 11:22:01.231978 4867 status_manager.go:851] "Failed to get status for pod" podUID="331bacc3-9595-492a-9e20-ef8007ccc10a" pod="openshift-marketplace/redhat-operators-h6sjf" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-h6sjf\": dial tcp 38.102.83.115:6443: connect: connection refused" Jan 26 11:22:01 crc kubenswrapper[4867]: I0126 11:22:01.232607 4867 status_manager.go:851] "Failed to get status for pod" podUID="a8dbe61a-4c85-4fa5-beb6-a84433a2d2ac" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.115:6443: connect: connection refused" Jan 26 11:22:01 crc kubenswrapper[4867]: I0126 11:22:01.233069 4867 status_manager.go:851] "Failed to get status for pod" podUID="adb6bffd-3a41-480b-85df-1f3489ce7007" pod="openshift-marketplace/community-operators-s8hx9" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-s8hx9\": dial tcp 38.102.83.115:6443: connect: connection refused" Jan 26 11:22:01 crc kubenswrapper[4867]: I0126 
11:22:01.233537 4867 status_manager.go:851] "Failed to get status for pod" podUID="7428579f-3d9c-4910-9e5c-b6694944afce" pod="openshift-marketplace/redhat-marketplace-84kcf" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-84kcf\": dial tcp 38.102.83.115:6443: connect: connection refused" Jan 26 11:22:01 crc kubenswrapper[4867]: I0126 11:22:01.233888 4867 status_manager.go:851] "Failed to get status for pod" podUID="4c3ed719-d8a0-4f47-b0f1-9e635825152a" pod="openshift-marketplace/redhat-operators-mbhb4" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-mbhb4\": dial tcp 38.102.83.115:6443: connect: connection refused" Jan 26 11:22:01 crc kubenswrapper[4867]: I0126 11:22:01.234428 4867 status_manager.go:851] "Failed to get status for pod" podUID="8e8b11fb-b146-4307-b94e-515815b10c58" pod="openshift-marketplace/certified-operators-4lhrp" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-4lhrp\": dial tcp 38.102.83.115:6443: connect: connection refused" Jan 26 11:22:01 crc kubenswrapper[4867]: I0126 11:22:01.234741 4867 status_manager.go:851] "Failed to get status for pod" podUID="19cef52a-3ef4-4b1e-a52e-0ba6e01e49b5" pod="openshift-marketplace/certified-operators-ndd6w" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-ndd6w\": dial tcp 38.102.83.115:6443: connect: connection refused" Jan 26 11:22:01 crc kubenswrapper[4867]: I0126 11:22:01.497426 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-mbhb4" Jan 26 11:22:01 crc kubenswrapper[4867]: I0126 11:22:01.497666 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-mbhb4" Jan 26 11:22:01 crc kubenswrapper[4867]: I0126 11:22:01.543411 4867 kubelet.go:2542] 
"SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-mbhb4" Jan 26 11:22:02 crc kubenswrapper[4867]: I0126 11:22:02.209102 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"3037e8950038f505a9c790b22e34b76ec09fcf56804b0aead759275e21871797"} Jan 26 11:22:02 crc kubenswrapper[4867]: I0126 11:22:02.209552 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"ae0fff0d5017c49163f49ffece4381f062e435620c9297d5bd41a2f4a562fe02"} Jan 26 11:22:02 crc kubenswrapper[4867]: I0126 11:22:02.209568 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"2bce4a32fcfd93bcd86a328e4bb23464d5e0905f6af68561cdaa229bcdd94d46"} Jan 26 11:22:02 crc kubenswrapper[4867]: I0126 11:22:02.209579 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"642bae33d41a2c53cd29cf859aa0b60abb30a60d9ae95cc4c8b3a55c8c6e08c6"} Jan 26 11:22:02 crc kubenswrapper[4867]: I0126 11:22:02.264322 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-mbhb4" Jan 26 11:22:03 crc kubenswrapper[4867]: I0126 11:22:03.218721 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"849af6ba21a8de517c98aa91555d4900d36d7d817eaca72a7d44ade28b6e67d1"} Jan 26 11:22:03 crc kubenswrapper[4867]: I0126 11:22:03.219203 4867 kubelet.go:1909] "Trying to delete pod" 
pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="e36e94ce-bdbb-4b65-b38a-d591d99ec132" Jan 26 11:22:03 crc kubenswrapper[4867]: I0126 11:22:03.219261 4867 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="e36e94ce-bdbb-4b65-b38a-d591d99ec132" Jan 26 11:22:05 crc kubenswrapper[4867]: I0126 11:22:05.061041 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 26 11:22:05 crc kubenswrapper[4867]: I0126 11:22:05.586792 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 11:22:05 crc kubenswrapper[4867]: I0126 11:22:05.587302 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 11:22:05 crc kubenswrapper[4867]: I0126 11:22:05.594093 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 11:22:07 crc kubenswrapper[4867]: I0126 11:22:07.595053 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-ndd6w" Jan 26 11:22:08 crc kubenswrapper[4867]: I0126 11:22:08.008003 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-4lhrp" Jan 26 11:22:08 crc kubenswrapper[4867]: I0126 11:22:08.145343 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 26 11:22:08 crc kubenswrapper[4867]: I0126 11:22:08.152685 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 26 11:22:08 crc kubenswrapper[4867]: I0126 11:22:08.242806 4867 kubelet.go:1914] "Deleted mirror pod because it is outdated" 
pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 11:22:09 crc kubenswrapper[4867]: I0126 11:22:09.254136 4867 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="e36e94ce-bdbb-4b65-b38a-d591d99ec132" Jan 26 11:22:09 crc kubenswrapper[4867]: I0126 11:22:09.254179 4867 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="e36e94ce-bdbb-4b65-b38a-d591d99ec132" Jan 26 11:22:09 crc kubenswrapper[4867]: I0126 11:22:09.254660 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 11:22:09 crc kubenswrapper[4867]: I0126 11:22:09.261824 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 11:22:10 crc kubenswrapper[4867]: I0126 11:22:10.258528 4867 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="e36e94ce-bdbb-4b65-b38a-d591d99ec132" Jan 26 11:22:10 crc kubenswrapper[4867]: I0126 11:22:10.258909 4867 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="e36e94ce-bdbb-4b65-b38a-d591d99ec132" Jan 26 11:22:10 crc kubenswrapper[4867]: I0126 11:22:10.585431 4867 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="e73cb70d-4f6d-4ddf-8ed4-252c312bb715" Jan 26 11:22:11 crc kubenswrapper[4867]: I0126 11:22:11.264052 4867 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="e36e94ce-bdbb-4b65-b38a-d591d99ec132" Jan 26 11:22:11 crc kubenswrapper[4867]: I0126 11:22:11.264108 4867 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="e36e94ce-bdbb-4b65-b38a-d591d99ec132" Jan 26 11:22:11 crc kubenswrapper[4867]: 
I0126 11:22:11.267869 4867 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="e73cb70d-4f6d-4ddf-8ed4-252c312bb715" Jan 26 11:22:17 crc kubenswrapper[4867]: I0126 11:22:15.065659 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 26 11:22:17 crc kubenswrapper[4867]: I0126 11:22:17.908504 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Jan 26 11:22:18 crc kubenswrapper[4867]: I0126 11:22:18.379094 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Jan 26 11:22:18 crc kubenswrapper[4867]: I0126 11:22:18.381063 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Jan 26 11:22:18 crc kubenswrapper[4867]: I0126 11:22:18.515656 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Jan 26 11:22:18 crc kubenswrapper[4867]: I0126 11:22:18.801664 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Jan 26 11:22:18 crc kubenswrapper[4867]: I0126 11:22:18.849251 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Jan 26 11:22:18 crc kubenswrapper[4867]: I0126 11:22:18.949747 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Jan 26 11:22:19 crc kubenswrapper[4867]: I0126 11:22:19.104876 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Jan 26 11:22:19 crc kubenswrapper[4867]: I0126 11:22:19.221592 4867 
reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Jan 26 11:22:19 crc kubenswrapper[4867]: I0126 11:22:19.675861 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Jan 26 11:22:19 crc kubenswrapper[4867]: I0126 11:22:19.779298 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Jan 26 11:22:19 crc kubenswrapper[4867]: I0126 11:22:19.879038 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Jan 26 11:22:20 crc kubenswrapper[4867]: I0126 11:22:20.006924 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Jan 26 11:22:20 crc kubenswrapper[4867]: I0126 11:22:20.671554 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Jan 26 11:22:20 crc kubenswrapper[4867]: I0126 11:22:20.699184 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Jan 26 11:22:20 crc kubenswrapper[4867]: I0126 11:22:20.805115 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Jan 26 11:22:20 crc kubenswrapper[4867]: I0126 11:22:20.912193 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Jan 26 11:22:20 crc kubenswrapper[4867]: I0126 11:22:20.994015 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Jan 26 11:22:21 crc kubenswrapper[4867]: I0126 11:22:21.050799 4867 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-authentication-operator"/"authentication-operator-config" Jan 26 11:22:21 crc kubenswrapper[4867]: I0126 11:22:21.136996 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Jan 26 11:22:21 crc kubenswrapper[4867]: I0126 11:22:21.164883 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Jan 26 11:22:21 crc kubenswrapper[4867]: I0126 11:22:21.164962 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Jan 26 11:22:21 crc kubenswrapper[4867]: I0126 11:22:21.249595 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Jan 26 11:22:21 crc kubenswrapper[4867]: I0126 11:22:21.292941 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Jan 26 11:22:21 crc kubenswrapper[4867]: I0126 11:22:21.537088 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Jan 26 11:22:21 crc kubenswrapper[4867]: I0126 11:22:21.596390 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Jan 26 11:22:21 crc kubenswrapper[4867]: I0126 11:22:21.601652 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Jan 26 11:22:21 crc kubenswrapper[4867]: I0126 11:22:21.741821 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Jan 26 11:22:21 crc kubenswrapper[4867]: I0126 11:22:21.750313 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Jan 26 11:22:21 crc 
kubenswrapper[4867]: I0126 11:22:21.789859 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Jan 26 11:22:21 crc kubenswrapper[4867]: I0126 11:22:21.806652 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Jan 26 11:22:21 crc kubenswrapper[4867]: I0126 11:22:21.847432 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Jan 26 11:22:21 crc kubenswrapper[4867]: I0126 11:22:21.855677 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Jan 26 11:22:21 crc kubenswrapper[4867]: I0126 11:22:21.860470 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Jan 26 11:22:21 crc kubenswrapper[4867]: I0126 11:22:21.893776 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Jan 26 11:22:21 crc kubenswrapper[4867]: I0126 11:22:21.925277 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Jan 26 11:22:22 crc kubenswrapper[4867]: I0126 11:22:22.000110 4867 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Jan 26 11:22:22 crc kubenswrapper[4867]: I0126 11:22:22.034439 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Jan 26 11:22:22 crc kubenswrapper[4867]: I0126 11:22:22.068639 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Jan 26 11:22:22 crc kubenswrapper[4867]: I0126 11:22:22.092575 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Jan 26 11:22:22 crc 
kubenswrapper[4867]: I0126 11:22:22.120412 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Jan 26 11:22:22 crc kubenswrapper[4867]: I0126 11:22:22.157782 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Jan 26 11:22:22 crc kubenswrapper[4867]: I0126 11:22:22.167517 4867 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Jan 26 11:22:22 crc kubenswrapper[4867]: I0126 11:22:22.225807 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Jan 26 11:22:22 crc kubenswrapper[4867]: I0126 11:22:22.537313 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Jan 26 11:22:22 crc kubenswrapper[4867]: I0126 11:22:22.642299 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Jan 26 11:22:22 crc kubenswrapper[4867]: I0126 11:22:22.661281 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Jan 26 11:22:22 crc kubenswrapper[4867]: I0126 11:22:22.689760 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Jan 26 11:22:22 crc kubenswrapper[4867]: I0126 11:22:22.715180 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Jan 26 11:22:22 crc kubenswrapper[4867]: I0126 11:22:22.856746 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Jan 26 11:22:22 crc kubenswrapper[4867]: I0126 11:22:22.864709 4867 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Jan 26 11:22:22 crc kubenswrapper[4867]: I0126 11:22:22.891329 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Jan 26 11:22:22 crc kubenswrapper[4867]: I0126 11:22:22.892553 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Jan 26 11:22:22 crc kubenswrapper[4867]: I0126 11:22:22.941359 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Jan 26 11:22:22 crc kubenswrapper[4867]: I0126 11:22:22.984843 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Jan 26 11:22:23 crc kubenswrapper[4867]: I0126 11:22:23.019002 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Jan 26 11:22:23 crc kubenswrapper[4867]: I0126 11:22:23.020101 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Jan 26 11:22:23 crc kubenswrapper[4867]: I0126 11:22:23.074417 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Jan 26 11:22:23 crc kubenswrapper[4867]: I0126 11:22:23.103812 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Jan 26 11:22:23 crc kubenswrapper[4867]: I0126 11:22:23.182238 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Jan 26 11:22:23 crc kubenswrapper[4867]: I0126 11:22:23.312863 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Jan 26 
11:22:23 crc kubenswrapper[4867]: I0126 11:22:23.375206 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Jan 26 11:22:23 crc kubenswrapper[4867]: I0126 11:22:23.387207 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Jan 26 11:22:23 crc kubenswrapper[4867]: I0126 11:22:23.497213 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Jan 26 11:22:23 crc kubenswrapper[4867]: I0126 11:22:23.591241 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Jan 26 11:22:23 crc kubenswrapper[4867]: I0126 11:22:23.643606 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Jan 26 11:22:23 crc kubenswrapper[4867]: I0126 11:22:23.654005 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Jan 26 11:22:23 crc kubenswrapper[4867]: I0126 11:22:23.698212 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Jan 26 11:22:23 crc kubenswrapper[4867]: I0126 11:22:23.823855 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Jan 26 11:22:23 crc kubenswrapper[4867]: I0126 11:22:23.868176 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Jan 26 11:22:23 crc kubenswrapper[4867]: I0126 11:22:23.897904 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Jan 26 11:22:23 crc kubenswrapper[4867]: 
I0126 11:22:23.897971 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Jan 26 11:22:23 crc kubenswrapper[4867]: I0126 11:22:23.965780 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Jan 26 11:22:24 crc kubenswrapper[4867]: I0126 11:22:24.046484 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Jan 26 11:22:24 crc kubenswrapper[4867]: I0126 11:22:24.104397 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Jan 26 11:22:24 crc kubenswrapper[4867]: I0126 11:22:24.152330 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Jan 26 11:22:24 crc kubenswrapper[4867]: I0126 11:22:24.320775 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Jan 26 11:22:24 crc kubenswrapper[4867]: I0126 11:22:24.368878 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Jan 26 11:22:24 crc kubenswrapper[4867]: I0126 11:22:24.376742 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Jan 26 11:22:24 crc kubenswrapper[4867]: I0126 11:22:24.454679 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Jan 26 11:22:24 crc kubenswrapper[4867]: I0126 11:22:24.488689 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Jan 26 11:22:24 crc kubenswrapper[4867]: I0126 11:22:24.503534 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Jan 26 
11:22:24 crc kubenswrapper[4867]: I0126 11:22:24.543261 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Jan 26 11:22:24 crc kubenswrapper[4867]: I0126 11:22:24.557193 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Jan 26 11:22:24 crc kubenswrapper[4867]: I0126 11:22:24.591904 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Jan 26 11:22:24 crc kubenswrapper[4867]: I0126 11:22:24.598062 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Jan 26 11:22:24 crc kubenswrapper[4867]: I0126 11:22:24.627630 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Jan 26 11:22:24 crc kubenswrapper[4867]: I0126 11:22:24.647379 4867 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Jan 26 11:22:24 crc kubenswrapper[4867]: I0126 11:22:24.729186 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Jan 26 11:22:24 crc kubenswrapper[4867]: I0126 11:22:24.784001 4867 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Jan 26 11:22:24 crc kubenswrapper[4867]: I0126 11:22:24.889818 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Jan 26 11:22:24 crc kubenswrapper[4867]: I0126 11:22:24.911729 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Jan 26 11:22:24 crc kubenswrapper[4867]: I0126 11:22:24.916557 4867 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Jan 26 11:22:24 crc kubenswrapper[4867]: I0126 11:22:24.951631 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Jan 26 11:22:25 crc kubenswrapper[4867]: I0126 11:22:25.054716 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Jan 26 11:22:25 crc kubenswrapper[4867]: I0126 11:22:25.065025 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Jan 26 11:22:25 crc kubenswrapper[4867]: I0126 11:22:25.086247 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Jan 26 11:22:25 crc kubenswrapper[4867]: I0126 11:22:25.151443 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Jan 26 11:22:25 crc kubenswrapper[4867]: I0126 11:22:25.186527 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Jan 26 11:22:25 crc kubenswrapper[4867]: I0126 11:22:25.192284 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Jan 26 11:22:25 crc kubenswrapper[4867]: I0126 11:22:25.198383 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Jan 26 11:22:25 crc kubenswrapper[4867]: I0126 11:22:25.227316 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Jan 26 11:22:25 crc kubenswrapper[4867]: I0126 11:22:25.381423 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Jan 26 11:22:25 crc kubenswrapper[4867]: I0126 
11:22:25.392543 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Jan 26 11:22:25 crc kubenswrapper[4867]: I0126 11:22:25.481916 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Jan 26 11:22:25 crc kubenswrapper[4867]: I0126 11:22:25.526185 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Jan 26 11:22:25 crc kubenswrapper[4867]: I0126 11:22:25.531660 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Jan 26 11:22:25 crc kubenswrapper[4867]: I0126 11:22:25.537875 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Jan 26 11:22:25 crc kubenswrapper[4867]: I0126 11:22:25.573707 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Jan 26 11:22:25 crc kubenswrapper[4867]: I0126 11:22:25.594546 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Jan 26 11:22:26 crc kubenswrapper[4867]: I0126 11:22:26.002545 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Jan 26 11:22:26 crc kubenswrapper[4867]: I0126 11:22:26.056065 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Jan 26 11:22:26 crc kubenswrapper[4867]: I0126 11:22:26.094101 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Jan 26 11:22:26 crc kubenswrapper[4867]: I0126 11:22:26.205548 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Jan 26 11:22:26 crc 
kubenswrapper[4867]: I0126 11:22:26.205729 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Jan 26 11:22:26 crc kubenswrapper[4867]: I0126 11:22:26.251721 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Jan 26 11:22:26 crc kubenswrapper[4867]: I0126 11:22:26.305100 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Jan 26 11:22:26 crc kubenswrapper[4867]: I0126 11:22:26.335283 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Jan 26 11:22:26 crc kubenswrapper[4867]: I0126 11:22:26.335823 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Jan 26 11:22:26 crc kubenswrapper[4867]: I0126 11:22:26.414926 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Jan 26 11:22:26 crc kubenswrapper[4867]: I0126 11:22:26.438933 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Jan 26 11:22:26 crc kubenswrapper[4867]: I0126 11:22:26.475432 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Jan 26 11:22:26 crc kubenswrapper[4867]: I0126 11:22:26.685596 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Jan 26 11:22:26 crc kubenswrapper[4867]: I0126 11:22:26.690196 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Jan 26 11:22:26 crc kubenswrapper[4867]: I0126 11:22:26.692037 4867 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Jan 26 11:22:26 crc kubenswrapper[4867]: I0126 11:22:26.711343 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Jan 26 11:22:26 crc kubenswrapper[4867]: I0126 11:22:26.716750 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Jan 26 11:22:26 crc kubenswrapper[4867]: I0126 11:22:26.863692 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Jan 26 11:22:26 crc kubenswrapper[4867]: I0126 11:22:26.868611 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Jan 26 11:22:26 crc kubenswrapper[4867]: I0126 11:22:26.869280 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Jan 26 11:22:26 crc kubenswrapper[4867]: I0126 11:22:26.991102 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Jan 26 11:22:27 crc kubenswrapper[4867]: I0126 11:22:27.002504 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Jan 26 11:22:27 crc kubenswrapper[4867]: I0126 11:22:27.037424 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Jan 26 11:22:27 crc kubenswrapper[4867]: I0126 11:22:27.131085 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Jan 26 11:22:27 crc kubenswrapper[4867]: I0126 11:22:27.279774 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Jan 26 11:22:27 crc kubenswrapper[4867]: I0126 11:22:27.348246 4867 reflector.go:368] Caches populated for 
*v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Jan 26 11:22:27 crc kubenswrapper[4867]: I0126 11:22:27.393088 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Jan 26 11:22:27 crc kubenswrapper[4867]: I0126 11:22:27.526931 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Jan 26 11:22:27 crc kubenswrapper[4867]: I0126 11:22:27.572092 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Jan 26 11:22:27 crc kubenswrapper[4867]: I0126 11:22:27.611459 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Jan 26 11:22:27 crc kubenswrapper[4867]: I0126 11:22:27.617890 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Jan 26 11:22:27 crc kubenswrapper[4867]: I0126 11:22:27.618194 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Jan 26 11:22:27 crc kubenswrapper[4867]: I0126 11:22:27.657451 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Jan 26 11:22:27 crc kubenswrapper[4867]: I0126 11:22:27.718021 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Jan 26 11:22:27 crc kubenswrapper[4867]: I0126 11:22:27.751592 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Jan 26 11:22:27 crc kubenswrapper[4867]: I0126 11:22:27.776386 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Jan 26 11:22:27 crc kubenswrapper[4867]: I0126 
11:22:27.847449 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Jan 26 11:22:27 crc kubenswrapper[4867]: I0126 11:22:27.863787 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Jan 26 11:22:27 crc kubenswrapper[4867]: I0126 11:22:27.880343 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Jan 26 11:22:27 crc kubenswrapper[4867]: I0126 11:22:27.898872 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Jan 26 11:22:27 crc kubenswrapper[4867]: I0126 11:22:27.916334 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Jan 26 11:22:27 crc kubenswrapper[4867]: I0126 11:22:27.945907 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Jan 26 11:22:27 crc kubenswrapper[4867]: I0126 11:22:27.956256 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Jan 26 11:22:27 crc kubenswrapper[4867]: I0126 11:22:27.959075 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Jan 26 11:22:28 crc kubenswrapper[4867]: I0126 11:22:28.006161 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Jan 26 11:22:28 crc kubenswrapper[4867]: I0126 11:22:28.016063 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Jan 26 11:22:28 crc kubenswrapper[4867]: I0126 11:22:28.061342 4867 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-ingress"/"openshift-service-ca.crt" Jan 26 11:22:28 crc kubenswrapper[4867]: I0126 11:22:28.104449 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Jan 26 11:22:28 crc kubenswrapper[4867]: I0126 11:22:28.169124 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Jan 26 11:22:28 crc kubenswrapper[4867]: I0126 11:22:28.198643 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Jan 26 11:22:28 crc kubenswrapper[4867]: I0126 11:22:28.307667 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Jan 26 11:22:28 crc kubenswrapper[4867]: I0126 11:22:28.352070 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Jan 26 11:22:28 crc kubenswrapper[4867]: I0126 11:22:28.354136 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Jan 26 11:22:28 crc kubenswrapper[4867]: I0126 11:22:28.377879 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Jan 26 11:22:28 crc kubenswrapper[4867]: I0126 11:22:28.661696 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Jan 26 11:22:28 crc kubenswrapper[4867]: I0126 11:22:28.678705 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Jan 26 11:22:28 crc kubenswrapper[4867]: I0126 11:22:28.756520 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Jan 26 11:22:28 crc kubenswrapper[4867]: I0126 11:22:28.763471 4867 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-image-registry"/"kube-root-ca.crt" Jan 26 11:22:28 crc kubenswrapper[4867]: I0126 11:22:28.826443 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Jan 26 11:22:28 crc kubenswrapper[4867]: I0126 11:22:28.842477 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Jan 26 11:22:28 crc kubenswrapper[4867]: I0126 11:22:28.866281 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Jan 26 11:22:28 crc kubenswrapper[4867]: I0126 11:22:28.997505 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Jan 26 11:22:29 crc kubenswrapper[4867]: I0126 11:22:29.155078 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Jan 26 11:22:29 crc kubenswrapper[4867]: I0126 11:22:29.164165 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Jan 26 11:22:29 crc kubenswrapper[4867]: I0126 11:22:29.220573 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Jan 26 11:22:29 crc kubenswrapper[4867]: I0126 11:22:29.257289 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Jan 26 11:22:29 crc kubenswrapper[4867]: I0126 11:22:29.263873 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Jan 26 11:22:29 crc kubenswrapper[4867]: I0126 11:22:29.264494 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Jan 26 11:22:29 crc kubenswrapper[4867]: I0126 
11:22:29.285825 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Jan 26 11:22:29 crc kubenswrapper[4867]: I0126 11:22:29.292179 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Jan 26 11:22:29 crc kubenswrapper[4867]: I0126 11:22:29.304985 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Jan 26 11:22:29 crc kubenswrapper[4867]: I0126 11:22:29.533698 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Jan 26 11:22:29 crc kubenswrapper[4867]: I0126 11:22:29.645791 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Jan 26 11:22:29 crc kubenswrapper[4867]: I0126 11:22:29.678556 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Jan 26 11:22:29 crc kubenswrapper[4867]: I0126 11:22:29.852617 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Jan 26 11:22:29 crc kubenswrapper[4867]: I0126 11:22:29.856528 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Jan 26 11:22:29 crc kubenswrapper[4867]: I0126 11:22:29.924460 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Jan 26 11:22:29 crc kubenswrapper[4867]: I0126 11:22:29.985276 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Jan 26 11:22:30 crc kubenswrapper[4867]: I0126 11:22:30.011812 4867 
reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Jan 26 11:22:30 crc kubenswrapper[4867]: I0126 11:22:30.018178 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Jan 26 11:22:30 crc kubenswrapper[4867]: I0126 11:22:30.077822 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Jan 26 11:22:30 crc kubenswrapper[4867]: I0126 11:22:30.085423 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Jan 26 11:22:30 crc kubenswrapper[4867]: I0126 11:22:30.144187 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Jan 26 11:22:30 crc kubenswrapper[4867]: I0126 11:22:30.153996 4867 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Jan 26 11:22:30 crc kubenswrapper[4867]: I0126 11:22:30.157306 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-4lhrp" podStartSLOduration=39.531382336 podStartE2EDuration="2m23.157281678s" podCreationTimestamp="2026-01-26 11:20:07 +0000 UTC" firstStartedPulling="2026-01-26 11:20:10.955107356 +0000 UTC m=+160.653682266" lastFinishedPulling="2026-01-26 11:21:54.581006698 +0000 UTC m=+264.279581608" observedRunningTime="2026-01-26 11:22:08.31559689 +0000 UTC m=+278.014171800" watchObservedRunningTime="2026-01-26 11:22:30.157281678 +0000 UTC m=+299.855856608" Jan 26 11:22:30 crc kubenswrapper[4867]: I0126 11:22:30.157456 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-ndd6w" podStartSLOduration=39.606035163 podStartE2EDuration="2m23.157451492s" podCreationTimestamp="2026-01-26 11:20:07 +0000 UTC" firstStartedPulling="2026-01-26 11:20:10.952966909 +0000 UTC 
m=+160.651541819" lastFinishedPulling="2026-01-26 11:21:54.504383238 +0000 UTC m=+264.202958148" observedRunningTime="2026-01-26 11:22:08.355404316 +0000 UTC m=+278.053979226" watchObservedRunningTime="2026-01-26 11:22:30.157451492 +0000 UTC m=+299.856026402" Jan 26 11:22:30 crc kubenswrapper[4867]: I0126 11:22:30.157819 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-h6sjf" podStartSLOduration=38.732530821 podStartE2EDuration="2m20.157810012s" podCreationTimestamp="2026-01-26 11:20:10 +0000 UTC" firstStartedPulling="2026-01-26 11:20:13.105558589 +0000 UTC m=+162.804133489" lastFinishedPulling="2026-01-26 11:21:54.53083777 +0000 UTC m=+264.229412680" observedRunningTime="2026-01-26 11:22:08.141053271 +0000 UTC m=+277.839628191" watchObservedRunningTime="2026-01-26 11:22:30.157810012 +0000 UTC m=+299.856384912" Jan 26 11:22:30 crc kubenswrapper[4867]: I0126 11:22:30.158530 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-mbhb4" podStartSLOduration=40.184182515 podStartE2EDuration="2m19.158519542s" podCreationTimestamp="2026-01-26 11:20:11 +0000 UTC" firstStartedPulling="2026-01-26 11:20:13.097258126 +0000 UTC m=+162.795833036" lastFinishedPulling="2026-01-26 11:21:52.071595153 +0000 UTC m=+261.770170063" observedRunningTime="2026-01-26 11:22:08.276685569 +0000 UTC m=+277.975260489" watchObservedRunningTime="2026-01-26 11:22:30.158519542 +0000 UTC m=+299.857094472" Jan 26 11:22:30 crc kubenswrapper[4867]: I0126 11:22:30.159582 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-gcljn" podStartSLOduration=42.052049396 podStartE2EDuration="2m23.15957646s" podCreationTimestamp="2026-01-26 11:20:07 +0000 UTC" firstStartedPulling="2026-01-26 11:20:10.981075843 +0000 UTC m=+160.679650753" lastFinishedPulling="2026-01-26 11:21:52.088602907 +0000 UTC m=+261.787177817" 
observedRunningTime="2026-01-26 11:22:08.115963357 +0000 UTC m=+277.814538307" watchObservedRunningTime="2026-01-26 11:22:30.15957646 +0000 UTC m=+299.858151370" Jan 26 11:22:30 crc kubenswrapper[4867]: I0126 11:22:30.160359 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc","openshift-marketplace/redhat-marketplace-84kcf","openshift-marketplace/community-operators-s8hx9"] Jan 26 11:22:30 crc kubenswrapper[4867]: I0126 11:22:30.160437 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 26 11:22:30 crc kubenswrapper[4867]: I0126 11:22:30.165728 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 11:22:30 crc kubenswrapper[4867]: I0126 11:22:30.184943 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=22.184913501 podStartE2EDuration="22.184913501s" podCreationTimestamp="2026-01-26 11:22:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 11:22:30.181992602 +0000 UTC m=+299.880567522" watchObservedRunningTime="2026-01-26 11:22:30.184913501 +0000 UTC m=+299.883488421" Jan 26 11:22:30 crc kubenswrapper[4867]: I0126 11:22:30.185560 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Jan 26 11:22:30 crc kubenswrapper[4867]: I0126 11:22:30.195010 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Jan 26 11:22:30 crc kubenswrapper[4867]: I0126 11:22:30.246059 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Jan 26 11:22:30 crc kubenswrapper[4867]: 
I0126 11:22:30.353510 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Jan 26 11:22:30 crc kubenswrapper[4867]: I0126 11:22:30.404431 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Jan 26 11:22:30 crc kubenswrapper[4867]: I0126 11:22:30.426195 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Jan 26 11:22:30 crc kubenswrapper[4867]: I0126 11:22:30.438988 4867 cert_rotation.go:91] certificate rotation detected, shutting down client connections to start using new credentials Jan 26 11:22:30 crc kubenswrapper[4867]: I0126 11:22:30.470621 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Jan 26 11:22:30 crc kubenswrapper[4867]: I0126 11:22:30.476824 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Jan 26 11:22:30 crc kubenswrapper[4867]: I0126 11:22:30.477898 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Jan 26 11:22:30 crc kubenswrapper[4867]: I0126 11:22:30.517260 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Jan 26 11:22:30 crc kubenswrapper[4867]: I0126 11:22:30.587239 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7428579f-3d9c-4910-9e5c-b6694944afce" path="/var/lib/kubelet/pods/7428579f-3d9c-4910-9e5c-b6694944afce/volumes" Jan 26 11:22:30 crc kubenswrapper[4867]: I0126 11:22:30.587801 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Jan 26 11:22:30 crc kubenswrapper[4867]: I0126 11:22:30.588774 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod 
volumes dir" podUID="adb6bffd-3a41-480b-85df-1f3489ce7007" path="/var/lib/kubelet/pods/adb6bffd-3a41-480b-85df-1f3489ce7007/volumes" Jan 26 11:22:30 crc kubenswrapper[4867]: I0126 11:22:30.592833 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Jan 26 11:22:30 crc kubenswrapper[4867]: I0126 11:22:30.596111 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Jan 26 11:22:30 crc kubenswrapper[4867]: I0126 11:22:30.683968 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Jan 26 11:22:30 crc kubenswrapper[4867]: I0126 11:22:30.684168 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Jan 26 11:22:30 crc kubenswrapper[4867]: I0126 11:22:30.686143 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Jan 26 11:22:30 crc kubenswrapper[4867]: I0126 11:22:30.785047 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Jan 26 11:22:30 crc kubenswrapper[4867]: I0126 11:22:30.835535 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Jan 26 11:22:30 crc kubenswrapper[4867]: I0126 11:22:30.866264 4867 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Jan 26 11:22:30 crc kubenswrapper[4867]: I0126 11:22:30.866602 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" 
containerID="cri-o://6841279546c3c0a084f5f637fbc7fb7dd3e507d77e50b879da2876f5159a68d0" gracePeriod=5 Jan 26 11:22:30 crc kubenswrapper[4867]: I0126 11:22:30.869559 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Jan 26 11:22:31 crc kubenswrapper[4867]: I0126 11:22:31.040461 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Jan 26 11:22:31 crc kubenswrapper[4867]: I0126 11:22:31.068344 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Jan 26 11:22:31 crc kubenswrapper[4867]: I0126 11:22:31.145848 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Jan 26 11:22:31 crc kubenswrapper[4867]: I0126 11:22:31.167207 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Jan 26 11:22:31 crc kubenswrapper[4867]: I0126 11:22:31.297569 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Jan 26 11:22:31 crc kubenswrapper[4867]: I0126 11:22:31.325232 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Jan 26 11:22:31 crc kubenswrapper[4867]: I0126 11:22:31.336591 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Jan 26 11:22:31 crc kubenswrapper[4867]: I0126 11:22:31.355065 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Jan 26 11:22:31 crc kubenswrapper[4867]: I0126 11:22:31.375636 4867 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Jan 26 11:22:31 crc kubenswrapper[4867]: I0126 11:22:31.546684 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Jan 26 11:22:31 crc kubenswrapper[4867]: I0126 11:22:31.547110 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Jan 26 11:22:31 crc kubenswrapper[4867]: I0126 11:22:31.549200 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Jan 26 11:22:31 crc kubenswrapper[4867]: I0126 11:22:31.614329 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Jan 26 11:22:31 crc kubenswrapper[4867]: I0126 11:22:31.689018 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Jan 26 11:22:31 crc kubenswrapper[4867]: I0126 11:22:31.712349 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Jan 26 11:22:31 crc kubenswrapper[4867]: I0126 11:22:31.913729 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Jan 26 11:22:31 crc kubenswrapper[4867]: I0126 11:22:31.925799 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Jan 26 11:22:31 crc kubenswrapper[4867]: I0126 11:22:31.982714 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Jan 26 11:22:32 crc kubenswrapper[4867]: I0126 11:22:32.073019 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Jan 26 11:22:32 crc kubenswrapper[4867]: I0126 11:22:32.233121 4867 reflector.go:368] Caches populated for *v1.Node 
from k8s.io/client-go/informers/factory.go:160 Jan 26 11:22:32 crc kubenswrapper[4867]: I0126 11:22:32.271095 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Jan 26 11:22:32 crc kubenswrapper[4867]: I0126 11:22:32.272620 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Jan 26 11:22:32 crc kubenswrapper[4867]: I0126 11:22:32.438840 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Jan 26 11:22:32 crc kubenswrapper[4867]: I0126 11:22:32.455931 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Jan 26 11:22:32 crc kubenswrapper[4867]: I0126 11:22:32.610029 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Jan 26 11:22:32 crc kubenswrapper[4867]: I0126 11:22:32.656701 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-4lhrp"] Jan 26 11:22:32 crc kubenswrapper[4867]: I0126 11:22:32.657121 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-4lhrp" podUID="8e8b11fb-b146-4307-b94e-515815b10c58" containerName="registry-server" containerID="cri-o://9bec023aae54649bd5e6972db3f4b186a23c0aa652fffdc1741d440ab7ca3bff" gracePeriod=30 Jan 26 11:22:32 crc kubenswrapper[4867]: I0126 11:22:32.669346 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-ndd6w"] Jan 26 11:22:32 crc kubenswrapper[4867]: I0126 11:22:32.670187 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-ndd6w" podUID="19cef52a-3ef4-4b1e-a52e-0ba6e01e49b5" containerName="registry-server" 
containerID="cri-o://b2fcdc242388c72a28f0f7016a0fe723334bb302ee00b41adde79be8d3f0e45f" gracePeriod=30 Jan 26 11:22:32 crc kubenswrapper[4867]: I0126 11:22:32.670594 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Jan 26 11:22:32 crc kubenswrapper[4867]: I0126 11:22:32.679390 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-gcljn"] Jan 26 11:22:32 crc kubenswrapper[4867]: I0126 11:22:32.679940 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-gcljn" podUID="aeb3191f-7e7a-4d94-b913-4f78b379f3e9" containerName="registry-server" containerID="cri-o://fc4c0d05681d2bd434974464141a1a719ff76e4be773df755e502a29e94b83f1" gracePeriod=30 Jan 26 11:22:32 crc kubenswrapper[4867]: I0126 11:22:32.688754 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-hrqxh"] Jan 26 11:22:32 crc kubenswrapper[4867]: I0126 11:22:32.689011 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/marketplace-operator-79b997595-hrqxh" podUID="6ab404f5-5b14-49d4-80f4-2a84895d0a2f" containerName="marketplace-operator" containerID="cri-o://4deb98f17f433fd2b4b2ffb352d38a21e7a46d7680d0cdcd4e67da663af753b1" gracePeriod=30 Jan 26 11:22:32 crc kubenswrapper[4867]: I0126 11:22:32.706481 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-nstb5"] Jan 26 11:22:32 crc kubenswrapper[4867]: I0126 11:22:32.706825 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-nstb5" podUID="bf513a52-cfc2-49df-be04-4976f7399901" containerName="registry-server" containerID="cri-o://47e5557aa3f7bb7109b9cc127b56f2340aa6ef931e3058f9cba508804408e85f" gracePeriod=30 Jan 26 11:22:32 crc kubenswrapper[4867]: I0126 
11:22:32.728559 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-h6sjf"] Jan 26 11:22:32 crc kubenswrapper[4867]: I0126 11:22:32.728978 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-h6sjf" podUID="331bacc3-9595-492a-9e20-ef8007ccc10a" containerName="registry-server" containerID="cri-o://74887b127a16edc12c7d905b037c9331a1fafacfa6bc3f64f8564e887069afa7" gracePeriod=30 Jan 26 11:22:32 crc kubenswrapper[4867]: I0126 11:22:32.756350 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-mbhb4"] Jan 26 11:22:32 crc kubenswrapper[4867]: I0126 11:22:32.756920 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-mbhb4" podUID="4c3ed719-d8a0-4f47-b0f1-9e635825152a" containerName="registry-server" containerID="cri-o://3eced1ac32af38f48a6bcb738175658500315c092032f4b2b11c7c0acafb9023" gracePeriod=30 Jan 26 11:22:32 crc kubenswrapper[4867]: I0126 11:22:32.779403 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-f7s2h"] Jan 26 11:22:32 crc kubenswrapper[4867]: E0126 11:22:32.779935 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="adb6bffd-3a41-480b-85df-1f3489ce7007" containerName="extract-content" Jan 26 11:22:32 crc kubenswrapper[4867]: I0126 11:22:32.779984 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="adb6bffd-3a41-480b-85df-1f3489ce7007" containerName="extract-content" Jan 26 11:22:32 crc kubenswrapper[4867]: E0126 11:22:32.780015 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="adb6bffd-3a41-480b-85df-1f3489ce7007" containerName="extract-utilities" Jan 26 11:22:32 crc kubenswrapper[4867]: I0126 11:22:32.780030 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="adb6bffd-3a41-480b-85df-1f3489ce7007" 
containerName="extract-utilities" Jan 26 11:22:32 crc kubenswrapper[4867]: E0126 11:22:32.780050 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7428579f-3d9c-4910-9e5c-b6694944afce" containerName="extract-utilities" Jan 26 11:22:32 crc kubenswrapper[4867]: I0126 11:22:32.780064 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="7428579f-3d9c-4910-9e5c-b6694944afce" containerName="extract-utilities" Jan 26 11:22:32 crc kubenswrapper[4867]: E0126 11:22:32.780081 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="adb6bffd-3a41-480b-85df-1f3489ce7007" containerName="registry-server" Jan 26 11:22:32 crc kubenswrapper[4867]: I0126 11:22:32.780094 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="adb6bffd-3a41-480b-85df-1f3489ce7007" containerName="registry-server" Jan 26 11:22:32 crc kubenswrapper[4867]: E0126 11:22:32.780119 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a8dbe61a-4c85-4fa5-beb6-a84433a2d2ac" containerName="installer" Jan 26 11:22:32 crc kubenswrapper[4867]: I0126 11:22:32.780131 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="a8dbe61a-4c85-4fa5-beb6-a84433a2d2ac" containerName="installer" Jan 26 11:22:32 crc kubenswrapper[4867]: E0126 11:22:32.780158 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Jan 26 11:22:32 crc kubenswrapper[4867]: I0126 11:22:32.780170 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Jan 26 11:22:32 crc kubenswrapper[4867]: E0126 11:22:32.780187 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7428579f-3d9c-4910-9e5c-b6694944afce" containerName="registry-server" Jan 26 11:22:32 crc kubenswrapper[4867]: I0126 11:22:32.780199 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="7428579f-3d9c-4910-9e5c-b6694944afce" 
containerName="registry-server" Jan 26 11:22:32 crc kubenswrapper[4867]: E0126 11:22:32.780248 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7428579f-3d9c-4910-9e5c-b6694944afce" containerName="extract-content" Jan 26 11:22:32 crc kubenswrapper[4867]: I0126 11:22:32.780263 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="7428579f-3d9c-4910-9e5c-b6694944afce" containerName="extract-content" Jan 26 11:22:32 crc kubenswrapper[4867]: I0126 11:22:32.780444 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Jan 26 11:22:32 crc kubenswrapper[4867]: I0126 11:22:32.780461 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="a8dbe61a-4c85-4fa5-beb6-a84433a2d2ac" containerName="installer" Jan 26 11:22:32 crc kubenswrapper[4867]: I0126 11:22:32.780476 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="7428579f-3d9c-4910-9e5c-b6694944afce" containerName="registry-server" Jan 26 11:22:32 crc kubenswrapper[4867]: I0126 11:22:32.780502 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="adb6bffd-3a41-480b-85df-1f3489ce7007" containerName="registry-server" Jan 26 11:22:32 crc kubenswrapper[4867]: I0126 11:22:32.781296 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-f7s2h" Jan 26 11:22:32 crc kubenswrapper[4867]: I0126 11:22:32.787653 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-f7s2h"] Jan 26 11:22:32 crc kubenswrapper[4867]: I0126 11:22:32.826422 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Jan 26 11:22:32 crc kubenswrapper[4867]: I0126 11:22:32.920312 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/d30c958f-102e-4d3f-a3e1-853ad02e7bfe-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-f7s2h\" (UID: \"d30c958f-102e-4d3f-a3e1-853ad02e7bfe\") " pod="openshift-marketplace/marketplace-operator-79b997595-f7s2h" Jan 26 11:22:32 crc kubenswrapper[4867]: I0126 11:22:32.920390 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2fv6b\" (UniqueName: \"kubernetes.io/projected/d30c958f-102e-4d3f-a3e1-853ad02e7bfe-kube-api-access-2fv6b\") pod \"marketplace-operator-79b997595-f7s2h\" (UID: \"d30c958f-102e-4d3f-a3e1-853ad02e7bfe\") " pod="openshift-marketplace/marketplace-operator-79b997595-f7s2h" Jan 26 11:22:32 crc kubenswrapper[4867]: I0126 11:22:32.920620 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/d30c958f-102e-4d3f-a3e1-853ad02e7bfe-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-f7s2h\" (UID: \"d30c958f-102e-4d3f-a3e1-853ad02e7bfe\") " pod="openshift-marketplace/marketplace-operator-79b997595-f7s2h" Jan 26 11:22:33 crc kubenswrapper[4867]: I0126 11:22:33.005923 4867 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-console-operator"/"kube-root-ca.crt" Jan 26 11:22:33 crc kubenswrapper[4867]: I0126 11:22:33.021598 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/d30c958f-102e-4d3f-a3e1-853ad02e7bfe-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-f7s2h\" (UID: \"d30c958f-102e-4d3f-a3e1-853ad02e7bfe\") " pod="openshift-marketplace/marketplace-operator-79b997595-f7s2h" Jan 26 11:22:33 crc kubenswrapper[4867]: I0126 11:22:33.021675 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/d30c958f-102e-4d3f-a3e1-853ad02e7bfe-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-f7s2h\" (UID: \"d30c958f-102e-4d3f-a3e1-853ad02e7bfe\") " pod="openshift-marketplace/marketplace-operator-79b997595-f7s2h" Jan 26 11:22:33 crc kubenswrapper[4867]: I0126 11:22:33.021716 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2fv6b\" (UniqueName: \"kubernetes.io/projected/d30c958f-102e-4d3f-a3e1-853ad02e7bfe-kube-api-access-2fv6b\") pod \"marketplace-operator-79b997595-f7s2h\" (UID: \"d30c958f-102e-4d3f-a3e1-853ad02e7bfe\") " pod="openshift-marketplace/marketplace-operator-79b997595-f7s2h" Jan 26 11:22:33 crc kubenswrapper[4867]: I0126 11:22:33.023797 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/d30c958f-102e-4d3f-a3e1-853ad02e7bfe-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-f7s2h\" (UID: \"d30c958f-102e-4d3f-a3e1-853ad02e7bfe\") " pod="openshift-marketplace/marketplace-operator-79b997595-f7s2h" Jan 26 11:22:33 crc kubenswrapper[4867]: I0126 11:22:33.030439 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: 
\"kubernetes.io/secret/d30c958f-102e-4d3f-a3e1-853ad02e7bfe-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-f7s2h\" (UID: \"d30c958f-102e-4d3f-a3e1-853ad02e7bfe\") " pod="openshift-marketplace/marketplace-operator-79b997595-f7s2h" Jan 26 11:22:33 crc kubenswrapper[4867]: I0126 11:22:33.044448 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2fv6b\" (UniqueName: \"kubernetes.io/projected/d30c958f-102e-4d3f-a3e1-853ad02e7bfe-kube-api-access-2fv6b\") pod \"marketplace-operator-79b997595-f7s2h\" (UID: \"d30c958f-102e-4d3f-a3e1-853ad02e7bfe\") " pod="openshift-marketplace/marketplace-operator-79b997595-f7s2h" Jan 26 11:22:33 crc kubenswrapper[4867]: I0126 11:22:33.067925 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Jan 26 11:22:33 crc kubenswrapper[4867]: I0126 11:22:33.176717 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-f7s2h" Jan 26 11:22:33 crc kubenswrapper[4867]: I0126 11:22:33.406387 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-f7s2h"] Jan 26 11:22:33 crc kubenswrapper[4867]: I0126 11:22:33.424530 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Jan 26 11:22:33 crc kubenswrapper[4867]: I0126 11:22:33.433010 4867 generic.go:334] "Generic (PLEG): container finished" podID="aeb3191f-7e7a-4d94-b913-4f78b379f3e9" containerID="fc4c0d05681d2bd434974464141a1a719ff76e4be773df755e502a29e94b83f1" exitCode=0 Jan 26 11:22:33 crc kubenswrapper[4867]: I0126 11:22:33.433115 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gcljn" 
event={"ID":"aeb3191f-7e7a-4d94-b913-4f78b379f3e9","Type":"ContainerDied","Data":"fc4c0d05681d2bd434974464141a1a719ff76e4be773df755e502a29e94b83f1"} Jan 26 11:22:33 crc kubenswrapper[4867]: I0126 11:22:33.438617 4867 generic.go:334] "Generic (PLEG): container finished" podID="8e8b11fb-b146-4307-b94e-515815b10c58" containerID="9bec023aae54649bd5e6972db3f4b186a23c0aa652fffdc1741d440ab7ca3bff" exitCode=0 Jan 26 11:22:33 crc kubenswrapper[4867]: I0126 11:22:33.438667 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4lhrp" event={"ID":"8e8b11fb-b146-4307-b94e-515815b10c58","Type":"ContainerDied","Data":"9bec023aae54649bd5e6972db3f4b186a23c0aa652fffdc1741d440ab7ca3bff"} Jan 26 11:22:33 crc kubenswrapper[4867]: W0126 11:22:33.462405 4867 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd30c958f_102e_4d3f_a3e1_853ad02e7bfe.slice/crio-4bf5fc79d9c8f59c6970309c4df0a6f4e844a7c1f9c07e00a5e39245f64abb55 WatchSource:0}: Error finding container 4bf5fc79d9c8f59c6970309c4df0a6f4e844a7c1f9c07e00a5e39245f64abb55: Status 404 returned error can't find the container with id 4bf5fc79d9c8f59c6970309c4df0a6f4e844a7c1f9c07e00a5e39245f64abb55 Jan 26 11:22:33 crc kubenswrapper[4867]: I0126 11:22:33.545194 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Jan 26 11:22:33 crc kubenswrapper[4867]: I0126 11:22:33.589674 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Jan 26 11:22:33 crc kubenswrapper[4867]: I0126 11:22:33.634948 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Jan 26 11:22:33 crc kubenswrapper[4867]: I0126 11:22:33.698771 4867 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Jan 26 11:22:33 crc kubenswrapper[4867]: I0126 11:22:33.810470 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Jan 26 11:22:34 crc kubenswrapper[4867]: I0126 11:22:34.134393 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Jan 26 11:22:34 crc kubenswrapper[4867]: I0126 11:22:34.476715 4867 generic.go:334] "Generic (PLEG): container finished" podID="4c3ed719-d8a0-4f47-b0f1-9e635825152a" containerID="3eced1ac32af38f48a6bcb738175658500315c092032f4b2b11c7c0acafb9023" exitCode=0 Jan 26 11:22:34 crc kubenswrapper[4867]: I0126 11:22:34.476837 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mbhb4" event={"ID":"4c3ed719-d8a0-4f47-b0f1-9e635825152a","Type":"ContainerDied","Data":"3eced1ac32af38f48a6bcb738175658500315c092032f4b2b11c7c0acafb9023"} Jan 26 11:22:34 crc kubenswrapper[4867]: I0126 11:22:34.480115 4867 generic.go:334] "Generic (PLEG): container finished" podID="6ab404f5-5b14-49d4-80f4-2a84895d0a2f" containerID="4deb98f17f433fd2b4b2ffb352d38a21e7a46d7680d0cdcd4e67da663af753b1" exitCode=0 Jan 26 11:22:34 crc kubenswrapper[4867]: I0126 11:22:34.480196 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-hrqxh" event={"ID":"6ab404f5-5b14-49d4-80f4-2a84895d0a2f","Type":"ContainerDied","Data":"4deb98f17f433fd2b4b2ffb352d38a21e7a46d7680d0cdcd4e67da663af753b1"} Jan 26 11:22:34 crc kubenswrapper[4867]: I0126 11:22:34.482086 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-f7s2h" event={"ID":"d30c958f-102e-4d3f-a3e1-853ad02e7bfe","Type":"ContainerStarted","Data":"0b458fced3e46570734dacc80535ae87da4c4f14f30d4a3b31fe72353e39cb98"} Jan 26 11:22:34 crc 
kubenswrapper[4867]: I0126 11:22:34.482135 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-f7s2h" event={"ID":"d30c958f-102e-4d3f-a3e1-853ad02e7bfe","Type":"ContainerStarted","Data":"4bf5fc79d9c8f59c6970309c4df0a6f4e844a7c1f9c07e00a5e39245f64abb55"} Jan 26 11:22:34 crc kubenswrapper[4867]: I0126 11:22:34.482516 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-f7s2h" Jan 26 11:22:34 crc kubenswrapper[4867]: I0126 11:22:34.485886 4867 generic.go:334] "Generic (PLEG): container finished" podID="331bacc3-9595-492a-9e20-ef8007ccc10a" containerID="74887b127a16edc12c7d905b037c9331a1fafacfa6bc3f64f8564e887069afa7" exitCode=0 Jan 26 11:22:34 crc kubenswrapper[4867]: I0126 11:22:34.485969 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-h6sjf" event={"ID":"331bacc3-9595-492a-9e20-ef8007ccc10a","Type":"ContainerDied","Data":"74887b127a16edc12c7d905b037c9331a1fafacfa6bc3f64f8564e887069afa7"} Jan 26 11:22:34 crc kubenswrapper[4867]: I0126 11:22:34.488969 4867 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-f7s2h container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.56:8080/healthz\": dial tcp 10.217.0.56:8080: connect: connection refused" start-of-body= Jan 26 11:22:34 crc kubenswrapper[4867]: I0126 11:22:34.489019 4867 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-f7s2h" podUID="d30c958f-102e-4d3f-a3e1-853ad02e7bfe" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.56:8080/healthz\": dial tcp 10.217.0.56:8080: connect: connection refused" Jan 26 11:22:34 crc kubenswrapper[4867]: I0126 11:22:34.492004 4867 generic.go:334] "Generic (PLEG): container finished" 
podID="19cef52a-3ef4-4b1e-a52e-0ba6e01e49b5" containerID="b2fcdc242388c72a28f0f7016a0fe723334bb302ee00b41adde79be8d3f0e45f" exitCode=0 Jan 26 11:22:34 crc kubenswrapper[4867]: I0126 11:22:34.492064 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ndd6w" event={"ID":"19cef52a-3ef4-4b1e-a52e-0ba6e01e49b5","Type":"ContainerDied","Data":"b2fcdc242388c72a28f0f7016a0fe723334bb302ee00b41adde79be8d3f0e45f"} Jan 26 11:22:34 crc kubenswrapper[4867]: I0126 11:22:34.494701 4867 generic.go:334] "Generic (PLEG): container finished" podID="bf513a52-cfc2-49df-be04-4976f7399901" containerID="47e5557aa3f7bb7109b9cc127b56f2340aa6ef931e3058f9cba508804408e85f" exitCode=0 Jan 26 11:22:34 crc kubenswrapper[4867]: I0126 11:22:34.494736 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nstb5" event={"ID":"bf513a52-cfc2-49df-be04-4976f7399901","Type":"ContainerDied","Data":"47e5557aa3f7bb7109b9cc127b56f2340aa6ef931e3058f9cba508804408e85f"} Jan 26 11:22:34 crc kubenswrapper[4867]: I0126 11:22:34.503154 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-f7s2h" podStartSLOduration=2.503129956 podStartE2EDuration="2.503129956s" podCreationTimestamp="2026-01-26 11:22:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 11:22:34.502920531 +0000 UTC m=+304.201495451" watchObservedRunningTime="2026-01-26 11:22:34.503129956 +0000 UTC m=+304.201704866" Jan 26 11:22:34 crc kubenswrapper[4867]: I0126 11:22:34.558173 4867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-ndd6w" Jan 26 11:22:34 crc kubenswrapper[4867]: I0126 11:22:34.647731 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2ktvd\" (UniqueName: \"kubernetes.io/projected/19cef52a-3ef4-4b1e-a52e-0ba6e01e49b5-kube-api-access-2ktvd\") pod \"19cef52a-3ef4-4b1e-a52e-0ba6e01e49b5\" (UID: \"19cef52a-3ef4-4b1e-a52e-0ba6e01e49b5\") " Jan 26 11:22:34 crc kubenswrapper[4867]: I0126 11:22:34.647847 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/19cef52a-3ef4-4b1e-a52e-0ba6e01e49b5-catalog-content\") pod \"19cef52a-3ef4-4b1e-a52e-0ba6e01e49b5\" (UID: \"19cef52a-3ef4-4b1e-a52e-0ba6e01e49b5\") " Jan 26 11:22:34 crc kubenswrapper[4867]: I0126 11:22:34.647896 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/19cef52a-3ef4-4b1e-a52e-0ba6e01e49b5-utilities\") pod \"19cef52a-3ef4-4b1e-a52e-0ba6e01e49b5\" (UID: \"19cef52a-3ef4-4b1e-a52e-0ba6e01e49b5\") " Jan 26 11:22:34 crc kubenswrapper[4867]: I0126 11:22:34.651582 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/19cef52a-3ef4-4b1e-a52e-0ba6e01e49b5-utilities" (OuterVolumeSpecName: "utilities") pod "19cef52a-3ef4-4b1e-a52e-0ba6e01e49b5" (UID: "19cef52a-3ef4-4b1e-a52e-0ba6e01e49b5"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 11:22:34 crc kubenswrapper[4867]: I0126 11:22:34.659502 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/19cef52a-3ef4-4b1e-a52e-0ba6e01e49b5-kube-api-access-2ktvd" (OuterVolumeSpecName: "kube-api-access-2ktvd") pod "19cef52a-3ef4-4b1e-a52e-0ba6e01e49b5" (UID: "19cef52a-3ef4-4b1e-a52e-0ba6e01e49b5"). InnerVolumeSpecName "kube-api-access-2ktvd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 11:22:34 crc kubenswrapper[4867]: I0126 11:22:34.684010 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-hrqxh" Jan 26 11:22:34 crc kubenswrapper[4867]: I0126 11:22:34.697830 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-h6sjf" Jan 26 11:22:34 crc kubenswrapper[4867]: I0126 11:22:34.711200 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-gcljn" Jan 26 11:22:34 crc kubenswrapper[4867]: I0126 11:22:34.716299 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Jan 26 11:22:34 crc kubenswrapper[4867]: I0126 11:22:34.718254 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-mbhb4" Jan 26 11:22:34 crc kubenswrapper[4867]: I0126 11:22:34.723820 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-nstb5" Jan 26 11:22:34 crc kubenswrapper[4867]: I0126 11:22:34.724310 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/19cef52a-3ef4-4b1e-a52e-0ba6e01e49b5-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "19cef52a-3ef4-4b1e-a52e-0ba6e01e49b5" (UID: "19cef52a-3ef4-4b1e-a52e-0ba6e01e49b5"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 11:22:34 crc kubenswrapper[4867]: I0126 11:22:34.730648 4867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-4lhrp" Jan 26 11:22:34 crc kubenswrapper[4867]: I0126 11:22:34.753657 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/331bacc3-9595-492a-9e20-ef8007ccc10a-utilities\") pod \"331bacc3-9595-492a-9e20-ef8007ccc10a\" (UID: \"331bacc3-9595-492a-9e20-ef8007ccc10a\") " Jan 26 11:22:34 crc kubenswrapper[4867]: I0126 11:22:34.753770 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-87b94\" (UniqueName: \"kubernetes.io/projected/6ab404f5-5b14-49d4-80f4-2a84895d0a2f-kube-api-access-87b94\") pod \"6ab404f5-5b14-49d4-80f4-2a84895d0a2f\" (UID: \"6ab404f5-5b14-49d4-80f4-2a84895d0a2f\") " Jan 26 11:22:34 crc kubenswrapper[4867]: I0126 11:22:34.753852 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/331bacc3-9595-492a-9e20-ef8007ccc10a-catalog-content\") pod \"331bacc3-9595-492a-9e20-ef8007ccc10a\" (UID: \"331bacc3-9595-492a-9e20-ef8007ccc10a\") " Jan 26 11:22:34 crc kubenswrapper[4867]: I0126 11:22:34.753925 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/6ab404f5-5b14-49d4-80f4-2a84895d0a2f-marketplace-trusted-ca\") pod \"6ab404f5-5b14-49d4-80f4-2a84895d0a2f\" (UID: \"6ab404f5-5b14-49d4-80f4-2a84895d0a2f\") " Jan 26 11:22:34 crc kubenswrapper[4867]: I0126 11:22:34.753993 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/6ab404f5-5b14-49d4-80f4-2a84895d0a2f-marketplace-operator-metrics\") pod \"6ab404f5-5b14-49d4-80f4-2a84895d0a2f\" (UID: \"6ab404f5-5b14-49d4-80f4-2a84895d0a2f\") " Jan 26 11:22:34 crc kubenswrapper[4867]: I0126 11:22:34.754020 4867 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"kube-api-access-tt67p\" (UniqueName: \"kubernetes.io/projected/331bacc3-9595-492a-9e20-ef8007ccc10a-kube-api-access-tt67p\") pod \"331bacc3-9595-492a-9e20-ef8007ccc10a\" (UID: \"331bacc3-9595-492a-9e20-ef8007ccc10a\") " Jan 26 11:22:34 crc kubenswrapper[4867]: I0126 11:22:34.754391 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2ktvd\" (UniqueName: \"kubernetes.io/projected/19cef52a-3ef4-4b1e-a52e-0ba6e01e49b5-kube-api-access-2ktvd\") on node \"crc\" DevicePath \"\"" Jan 26 11:22:34 crc kubenswrapper[4867]: I0126 11:22:34.754412 4867 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/19cef52a-3ef4-4b1e-a52e-0ba6e01e49b5-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 11:22:34 crc kubenswrapper[4867]: I0126 11:22:34.754422 4867 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/19cef52a-3ef4-4b1e-a52e-0ba6e01e49b5-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 11:22:34 crc kubenswrapper[4867]: I0126 11:22:34.755833 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ab404f5-5b14-49d4-80f4-2a84895d0a2f-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "6ab404f5-5b14-49d4-80f4-2a84895d0a2f" (UID: "6ab404f5-5b14-49d4-80f4-2a84895d0a2f"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 11:22:34 crc kubenswrapper[4867]: I0126 11:22:34.756697 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/331bacc3-9595-492a-9e20-ef8007ccc10a-utilities" (OuterVolumeSpecName: "utilities") pod "331bacc3-9595-492a-9e20-ef8007ccc10a" (UID: "331bacc3-9595-492a-9e20-ef8007ccc10a"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 11:22:34 crc kubenswrapper[4867]: I0126 11:22:34.759727 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ab404f5-5b14-49d4-80f4-2a84895d0a2f-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "6ab404f5-5b14-49d4-80f4-2a84895d0a2f" (UID: "6ab404f5-5b14-49d4-80f4-2a84895d0a2f"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 11:22:34 crc kubenswrapper[4867]: I0126 11:22:34.759984 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/331bacc3-9595-492a-9e20-ef8007ccc10a-kube-api-access-tt67p" (OuterVolumeSpecName: "kube-api-access-tt67p") pod "331bacc3-9595-492a-9e20-ef8007ccc10a" (UID: "331bacc3-9595-492a-9e20-ef8007ccc10a"). InnerVolumeSpecName "kube-api-access-tt67p". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 11:22:34 crc kubenswrapper[4867]: I0126 11:22:34.795834 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ab404f5-5b14-49d4-80f4-2a84895d0a2f-kube-api-access-87b94" (OuterVolumeSpecName: "kube-api-access-87b94") pod "6ab404f5-5b14-49d4-80f4-2a84895d0a2f" (UID: "6ab404f5-5b14-49d4-80f4-2a84895d0a2f"). InnerVolumeSpecName "kube-api-access-87b94". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 11:22:34 crc kubenswrapper[4867]: I0126 11:22:34.855615 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8e8b11fb-b146-4307-b94e-515815b10c58-catalog-content\") pod \"8e8b11fb-b146-4307-b94e-515815b10c58\" (UID: \"8e8b11fb-b146-4307-b94e-515815b10c58\") " Jan 26 11:22:34 crc kubenswrapper[4867]: I0126 11:22:34.855714 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4c3ed719-d8a0-4f47-b0f1-9e635825152a-utilities\") pod \"4c3ed719-d8a0-4f47-b0f1-9e635825152a\" (UID: \"4c3ed719-d8a0-4f47-b0f1-9e635825152a\") " Jan 26 11:22:34 crc kubenswrapper[4867]: I0126 11:22:34.855758 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t6w66\" (UniqueName: \"kubernetes.io/projected/bf513a52-cfc2-49df-be04-4976f7399901-kube-api-access-t6w66\") pod \"bf513a52-cfc2-49df-be04-4976f7399901\" (UID: \"bf513a52-cfc2-49df-be04-4976f7399901\") " Jan 26 11:22:34 crc kubenswrapper[4867]: I0126 11:22:34.855783 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bf513a52-cfc2-49df-be04-4976f7399901-catalog-content\") pod \"bf513a52-cfc2-49df-be04-4976f7399901\" (UID: \"bf513a52-cfc2-49df-be04-4976f7399901\") " Jan 26 11:22:34 crc kubenswrapper[4867]: I0126 11:22:34.855801 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4c3ed719-d8a0-4f47-b0f1-9e635825152a-catalog-content\") pod \"4c3ed719-d8a0-4f47-b0f1-9e635825152a\" (UID: \"4c3ed719-d8a0-4f47-b0f1-9e635825152a\") " Jan 26 11:22:34 crc kubenswrapper[4867]: I0126 11:22:34.855962 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" 
(UniqueName: \"kubernetes.io/empty-dir/bf513a52-cfc2-49df-be04-4976f7399901-utilities\") pod \"bf513a52-cfc2-49df-be04-4976f7399901\" (UID: \"bf513a52-cfc2-49df-be04-4976f7399901\") " Jan 26 11:22:34 crc kubenswrapper[4867]: I0126 11:22:34.856032 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/aeb3191f-7e7a-4d94-b913-4f78b379f3e9-utilities\") pod \"aeb3191f-7e7a-4d94-b913-4f78b379f3e9\" (UID: \"aeb3191f-7e7a-4d94-b913-4f78b379f3e9\") " Jan 26 11:22:34 crc kubenswrapper[4867]: I0126 11:22:34.856064 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8e8b11fb-b146-4307-b94e-515815b10c58-utilities\") pod \"8e8b11fb-b146-4307-b94e-515815b10c58\" (UID: \"8e8b11fb-b146-4307-b94e-515815b10c58\") " Jan 26 11:22:34 crc kubenswrapper[4867]: I0126 11:22:34.856115 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rjghk\" (UniqueName: \"kubernetes.io/projected/8e8b11fb-b146-4307-b94e-515815b10c58-kube-api-access-rjghk\") pod \"8e8b11fb-b146-4307-b94e-515815b10c58\" (UID: \"8e8b11fb-b146-4307-b94e-515815b10c58\") " Jan 26 11:22:34 crc kubenswrapper[4867]: I0126 11:22:34.856157 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/aeb3191f-7e7a-4d94-b913-4f78b379f3e9-catalog-content\") pod \"aeb3191f-7e7a-4d94-b913-4f78b379f3e9\" (UID: \"aeb3191f-7e7a-4d94-b913-4f78b379f3e9\") " Jan 26 11:22:34 crc kubenswrapper[4867]: I0126 11:22:34.857205 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bf513a52-cfc2-49df-be04-4976f7399901-utilities" (OuterVolumeSpecName: "utilities") pod "bf513a52-cfc2-49df-be04-4976f7399901" (UID: "bf513a52-cfc2-49df-be04-4976f7399901"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 11:22:34 crc kubenswrapper[4867]: I0126 11:22:34.859974 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/aeb3191f-7e7a-4d94-b913-4f78b379f3e9-utilities" (OuterVolumeSpecName: "utilities") pod "aeb3191f-7e7a-4d94-b913-4f78b379f3e9" (UID: "aeb3191f-7e7a-4d94-b913-4f78b379f3e9"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 11:22:34 crc kubenswrapper[4867]: I0126 11:22:34.860904 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8e8b11fb-b146-4307-b94e-515815b10c58-utilities" (OuterVolumeSpecName: "utilities") pod "8e8b11fb-b146-4307-b94e-515815b10c58" (UID: "8e8b11fb-b146-4307-b94e-515815b10c58"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 11:22:34 crc kubenswrapper[4867]: I0126 11:22:34.860987 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9b92w\" (UniqueName: \"kubernetes.io/projected/4c3ed719-d8a0-4f47-b0f1-9e635825152a-kube-api-access-9b92w\") pod \"4c3ed719-d8a0-4f47-b0f1-9e635825152a\" (UID: \"4c3ed719-d8a0-4f47-b0f1-9e635825152a\") " Jan 26 11:22:34 crc kubenswrapper[4867]: I0126 11:22:34.861888 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k4699\" (UniqueName: \"kubernetes.io/projected/aeb3191f-7e7a-4d94-b913-4f78b379f3e9-kube-api-access-k4699\") pod \"aeb3191f-7e7a-4d94-b913-4f78b379f3e9\" (UID: \"aeb3191f-7e7a-4d94-b913-4f78b379f3e9\") " Jan 26 11:22:34 crc kubenswrapper[4867]: I0126 11:22:34.862331 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4c3ed719-d8a0-4f47-b0f1-9e635825152a-utilities" (OuterVolumeSpecName: "utilities") pod "4c3ed719-d8a0-4f47-b0f1-9e635825152a" (UID: "4c3ed719-d8a0-4f47-b0f1-9e635825152a"). 
InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 11:22:34 crc kubenswrapper[4867]: I0126 11:22:34.862863 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf513a52-cfc2-49df-be04-4976f7399901-kube-api-access-t6w66" (OuterVolumeSpecName: "kube-api-access-t6w66") pod "bf513a52-cfc2-49df-be04-4976f7399901" (UID: "bf513a52-cfc2-49df-be04-4976f7399901"). InnerVolumeSpecName "kube-api-access-t6w66". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 11:22:34 crc kubenswrapper[4867]: I0126 11:22:34.863047 4867 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/6ab404f5-5b14-49d4-80f4-2a84895d0a2f-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 26 11:22:34 crc kubenswrapper[4867]: I0126 11:22:34.863087 4867 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bf513a52-cfc2-49df-be04-4976f7399901-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 11:22:34 crc kubenswrapper[4867]: I0126 11:22:34.863099 4867 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/6ab404f5-5b14-49d4-80f4-2a84895d0a2f-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Jan 26 11:22:34 crc kubenswrapper[4867]: I0126 11:22:34.863112 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tt67p\" (UniqueName: \"kubernetes.io/projected/331bacc3-9595-492a-9e20-ef8007ccc10a-kube-api-access-tt67p\") on node \"crc\" DevicePath \"\"" Jan 26 11:22:34 crc kubenswrapper[4867]: I0126 11:22:34.863121 4867 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/aeb3191f-7e7a-4d94-b913-4f78b379f3e9-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 11:22:34 crc kubenswrapper[4867]: I0126 11:22:34.863132 4867 
reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8e8b11fb-b146-4307-b94e-515815b10c58-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 11:22:34 crc kubenswrapper[4867]: I0126 11:22:34.863143 4867 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4c3ed719-d8a0-4f47-b0f1-9e635825152a-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 11:22:34 crc kubenswrapper[4867]: I0126 11:22:34.863240 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-87b94\" (UniqueName: \"kubernetes.io/projected/6ab404f5-5b14-49d4-80f4-2a84895d0a2f-kube-api-access-87b94\") on node \"crc\" DevicePath \"\"" Jan 26 11:22:34 crc kubenswrapper[4867]: I0126 11:22:34.863251 4867 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/331bacc3-9595-492a-9e20-ef8007ccc10a-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 11:22:34 crc kubenswrapper[4867]: I0126 11:22:34.863260 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t6w66\" (UniqueName: \"kubernetes.io/projected/bf513a52-cfc2-49df-be04-4976f7399901-kube-api-access-t6w66\") on node \"crc\" DevicePath \"\"" Jan 26 11:22:34 crc kubenswrapper[4867]: I0126 11:22:34.863657 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8e8b11fb-b146-4307-b94e-515815b10c58-kube-api-access-rjghk" (OuterVolumeSpecName: "kube-api-access-rjghk") pod "8e8b11fb-b146-4307-b94e-515815b10c58" (UID: "8e8b11fb-b146-4307-b94e-515815b10c58"). InnerVolumeSpecName "kube-api-access-rjghk". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 11:22:34 crc kubenswrapper[4867]: I0126 11:22:34.866034 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4c3ed719-d8a0-4f47-b0f1-9e635825152a-kube-api-access-9b92w" (OuterVolumeSpecName: "kube-api-access-9b92w") pod "4c3ed719-d8a0-4f47-b0f1-9e635825152a" (UID: "4c3ed719-d8a0-4f47-b0f1-9e635825152a"). InnerVolumeSpecName "kube-api-access-9b92w". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 11:22:34 crc kubenswrapper[4867]: I0126 11:22:34.866247 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/aeb3191f-7e7a-4d94-b913-4f78b379f3e9-kube-api-access-k4699" (OuterVolumeSpecName: "kube-api-access-k4699") pod "aeb3191f-7e7a-4d94-b913-4f78b379f3e9" (UID: "aeb3191f-7e7a-4d94-b913-4f78b379f3e9"). InnerVolumeSpecName "kube-api-access-k4699". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 11:22:34 crc kubenswrapper[4867]: I0126 11:22:34.891682 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bf513a52-cfc2-49df-be04-4976f7399901-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "bf513a52-cfc2-49df-be04-4976f7399901" (UID: "bf513a52-cfc2-49df-be04-4976f7399901"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 11:22:34 crc kubenswrapper[4867]: I0126 11:22:34.940870 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/331bacc3-9595-492a-9e20-ef8007ccc10a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "331bacc3-9595-492a-9e20-ef8007ccc10a" (UID: "331bacc3-9595-492a-9e20-ef8007ccc10a"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 11:22:34 crc kubenswrapper[4867]: I0126 11:22:34.941885 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8e8b11fb-b146-4307-b94e-515815b10c58-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "8e8b11fb-b146-4307-b94e-515815b10c58" (UID: "8e8b11fb-b146-4307-b94e-515815b10c58"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 11:22:34 crc kubenswrapper[4867]: I0126 11:22:34.949864 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/aeb3191f-7e7a-4d94-b913-4f78b379f3e9-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "aeb3191f-7e7a-4d94-b913-4f78b379f3e9" (UID: "aeb3191f-7e7a-4d94-b913-4f78b379f3e9"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 11:22:34 crc kubenswrapper[4867]: I0126 11:22:34.964253 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rjghk\" (UniqueName: \"kubernetes.io/projected/8e8b11fb-b146-4307-b94e-515815b10c58-kube-api-access-rjghk\") on node \"crc\" DevicePath \"\"" Jan 26 11:22:34 crc kubenswrapper[4867]: I0126 11:22:34.964292 4867 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/aeb3191f-7e7a-4d94-b913-4f78b379f3e9-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 11:22:34 crc kubenswrapper[4867]: I0126 11:22:34.964302 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9b92w\" (UniqueName: \"kubernetes.io/projected/4c3ed719-d8a0-4f47-b0f1-9e635825152a-kube-api-access-9b92w\") on node \"crc\" DevicePath \"\"" Jan 26 11:22:34 crc kubenswrapper[4867]: I0126 11:22:34.964312 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k4699\" (UniqueName: 
\"kubernetes.io/projected/aeb3191f-7e7a-4d94-b913-4f78b379f3e9-kube-api-access-k4699\") on node \"crc\" DevicePath \"\"" Jan 26 11:22:34 crc kubenswrapper[4867]: I0126 11:22:34.964320 4867 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8e8b11fb-b146-4307-b94e-515815b10c58-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 11:22:34 crc kubenswrapper[4867]: I0126 11:22:34.964329 4867 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bf513a52-cfc2-49df-be04-4976f7399901-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 11:22:34 crc kubenswrapper[4867]: I0126 11:22:34.964339 4867 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/331bacc3-9595-492a-9e20-ef8007ccc10a-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 11:22:35 crc kubenswrapper[4867]: I0126 11:22:35.015348 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4c3ed719-d8a0-4f47-b0f1-9e635825152a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "4c3ed719-d8a0-4f47-b0f1-9e635825152a" (UID: "4c3ed719-d8a0-4f47-b0f1-9e635825152a"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 11:22:35 crc kubenswrapper[4867]: I0126 11:22:35.065568 4867 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4c3ed719-d8a0-4f47-b0f1-9e635825152a-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 11:22:35 crc kubenswrapper[4867]: I0126 11:22:35.502299 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-h6sjf" event={"ID":"331bacc3-9595-492a-9e20-ef8007ccc10a","Type":"ContainerDied","Data":"82c0d865102e1d222690a85bdb5ce1ec196b42b62c512ed0dc8f9508052f95fe"} Jan 26 11:22:35 crc kubenswrapper[4867]: I0126 11:22:35.503901 4867 scope.go:117] "RemoveContainer" containerID="74887b127a16edc12c7d905b037c9331a1fafacfa6bc3f64f8564e887069afa7" Jan 26 11:22:35 crc kubenswrapper[4867]: I0126 11:22:35.502513 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-h6sjf" Jan 26 11:22:35 crc kubenswrapper[4867]: I0126 11:22:35.505625 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ndd6w" event={"ID":"19cef52a-3ef4-4b1e-a52e-0ba6e01e49b5","Type":"ContainerDied","Data":"41279b884ccc25176c3e4a7b6e499441ce118c2ec9026ee4fb49fb5ef170c8e3"} Jan 26 11:22:35 crc kubenswrapper[4867]: I0126 11:22:35.505754 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-ndd6w" Jan 26 11:22:35 crc kubenswrapper[4867]: I0126 11:22:35.510880 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4lhrp" event={"ID":"8e8b11fb-b146-4307-b94e-515815b10c58","Type":"ContainerDied","Data":"f9c0abfc77ee202859b5e42e1e2159db15a1076e7a3e59e2f3362ef707ff808e"} Jan 26 11:22:35 crc kubenswrapper[4867]: I0126 11:22:35.511169 4867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-4lhrp" Jan 26 11:22:35 crc kubenswrapper[4867]: I0126 11:22:35.516383 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nstb5" event={"ID":"bf513a52-cfc2-49df-be04-4976f7399901","Type":"ContainerDied","Data":"b99c7215fc87ffd1b42d1f5c99696a4a85556ab16606e2d4b325743d58f3f170"} Jan 26 11:22:35 crc kubenswrapper[4867]: I0126 11:22:35.516533 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-nstb5" Jan 26 11:22:35 crc kubenswrapper[4867]: I0126 11:22:35.524293 4867 scope.go:117] "RemoveContainer" containerID="2dd63232d5614e03abb67203f7226de9d0cf60b0ff159a0709b2f2048ec1cb40" Jan 26 11:22:35 crc kubenswrapper[4867]: I0126 11:22:35.525640 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gcljn" event={"ID":"aeb3191f-7e7a-4d94-b913-4f78b379f3e9","Type":"ContainerDied","Data":"c26e8acc208f5f08699660a2eaeffc6dca23fcbfab229c6ab366faef4c30634d"} Jan 26 11:22:35 crc kubenswrapper[4867]: I0126 11:22:35.525777 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-gcljn" Jan 26 11:22:35 crc kubenswrapper[4867]: I0126 11:22:35.529706 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mbhb4" event={"ID":"4c3ed719-d8a0-4f47-b0f1-9e635825152a","Type":"ContainerDied","Data":"08bb4ab708f036e9f405daf729174e8a2d3766e77d65178196176ba7cd984cdf"} Jan 26 11:22:35 crc kubenswrapper[4867]: I0126 11:22:35.529793 4867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-mbhb4" Jan 26 11:22:35 crc kubenswrapper[4867]: I0126 11:22:35.532115 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-hrqxh" event={"ID":"6ab404f5-5b14-49d4-80f4-2a84895d0a2f","Type":"ContainerDied","Data":"2826e4fe61c3d7fe19de295670118775762f2e6d5ccf1e5b369ff1944f2d251b"} Jan 26 11:22:35 crc kubenswrapper[4867]: I0126 11:22:35.532206 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-hrqxh" Jan 26 11:22:35 crc kubenswrapper[4867]: I0126 11:22:35.536338 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-f7s2h" Jan 26 11:22:35 crc kubenswrapper[4867]: I0126 11:22:35.558572 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-h6sjf"] Jan 26 11:22:35 crc kubenswrapper[4867]: I0126 11:22:35.568127 4867 scope.go:117] "RemoveContainer" containerID="79711ed2b7e82452e1b4dc06984970b24f5b20ab3fdd9da15a31090b0d5f3a2d" Jan 26 11:22:35 crc kubenswrapper[4867]: I0126 11:22:35.574400 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-h6sjf"] Jan 26 11:22:35 crc kubenswrapper[4867]: I0126 11:22:35.611609 4867 scope.go:117] "RemoveContainer" containerID="b2fcdc242388c72a28f0f7016a0fe723334bb302ee00b41adde79be8d3f0e45f" Jan 26 11:22:35 crc kubenswrapper[4867]: I0126 11:22:35.612536 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-4lhrp"] Jan 26 11:22:35 crc kubenswrapper[4867]: I0126 11:22:35.620560 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-4lhrp"] Jan 26 11:22:35 crc kubenswrapper[4867]: I0126 11:22:35.627446 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-marketplace/redhat-marketplace-nstb5"] Jan 26 11:22:35 crc kubenswrapper[4867]: I0126 11:22:35.636517 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-nstb5"] Jan 26 11:22:35 crc kubenswrapper[4867]: I0126 11:22:35.643256 4867 scope.go:117] "RemoveContainer" containerID="cee015feecf6b6c0e6418fd20b851525cc81e27d1d08ca5df4a53edd1133e04d" Jan 26 11:22:35 crc kubenswrapper[4867]: I0126 11:22:35.659311 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-mbhb4"] Jan 26 11:22:35 crc kubenswrapper[4867]: I0126 11:22:35.673001 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-mbhb4"] Jan 26 11:22:35 crc kubenswrapper[4867]: I0126 11:22:35.686014 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-ndd6w"] Jan 26 11:22:35 crc kubenswrapper[4867]: I0126 11:22:35.687641 4867 scope.go:117] "RemoveContainer" containerID="5f4cf4c375a217e18b534e7dee1f20ae60a18095dbc5d513b2231f46b0bc301d" Jan 26 11:22:35 crc kubenswrapper[4867]: I0126 11:22:35.691280 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-ndd6w"] Jan 26 11:22:35 crc kubenswrapper[4867]: I0126 11:22:35.692840 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-hrqxh"] Jan 26 11:22:35 crc kubenswrapper[4867]: I0126 11:22:35.696307 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-hrqxh"] Jan 26 11:22:35 crc kubenswrapper[4867]: I0126 11:22:35.698670 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-gcljn"] Jan 26 11:22:35 crc kubenswrapper[4867]: I0126 11:22:35.700351 4867 scope.go:117] "RemoveContainer" containerID="9bec023aae54649bd5e6972db3f4b186a23c0aa652fffdc1741d440ab7ca3bff" Jan 26 
11:22:35 crc kubenswrapper[4867]: I0126 11:22:35.701240 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-gcljn"] Jan 26 11:22:35 crc kubenswrapper[4867]: I0126 11:22:35.712854 4867 scope.go:117] "RemoveContainer" containerID="947a8014466d7f519972efd98ff839d2c1308f3ec5b27a5a4a19097a92907f4f" Jan 26 11:22:35 crc kubenswrapper[4867]: I0126 11:22:35.731525 4867 scope.go:117] "RemoveContainer" containerID="3053d35b465551fd8b97b8321f0c6c33484bf381e37f2bf1234820e4df57737f" Jan 26 11:22:35 crc kubenswrapper[4867]: I0126 11:22:35.747501 4867 scope.go:117] "RemoveContainer" containerID="47e5557aa3f7bb7109b9cc127b56f2340aa6ef931e3058f9cba508804408e85f" Jan 26 11:22:35 crc kubenswrapper[4867]: I0126 11:22:35.761807 4867 scope.go:117] "RemoveContainer" containerID="90eb899c89f78bb7c50dd42a0e756ee645c8199eef51014683883117569bb516" Jan 26 11:22:35 crc kubenswrapper[4867]: I0126 11:22:35.795380 4867 scope.go:117] "RemoveContainer" containerID="3715b8b883e07b4da58c22940d64775022219675ef2406542d6e8bdb2b5ad624" Jan 26 11:22:35 crc kubenswrapper[4867]: I0126 11:22:35.813158 4867 scope.go:117] "RemoveContainer" containerID="fc4c0d05681d2bd434974464141a1a719ff76e4be773df755e502a29e94b83f1" Jan 26 11:22:35 crc kubenswrapper[4867]: I0126 11:22:35.829634 4867 scope.go:117] "RemoveContainer" containerID="57ccb05b9c7e61863eec14d09ce87af9b7cf78d22d37fad9abeb94770461b9a4" Jan 26 11:22:35 crc kubenswrapper[4867]: I0126 11:22:35.846036 4867 scope.go:117] "RemoveContainer" containerID="befd8d762f53551f5b5e4e33da373b7bf64718acb2fd2b021ea7321d878b11ee" Jan 26 11:22:35 crc kubenswrapper[4867]: I0126 11:22:35.870907 4867 scope.go:117] "RemoveContainer" containerID="3eced1ac32af38f48a6bcb738175658500315c092032f4b2b11c7c0acafb9023" Jan 26 11:22:35 crc kubenswrapper[4867]: I0126 11:22:35.904622 4867 scope.go:117] "RemoveContainer" containerID="6e2177ccc45b2042cf0ed5e2e6509ece78f6670abc94782f5a4a9dbd6ae706de" Jan 26 11:22:35 crc kubenswrapper[4867]: 
I0126 11:22:35.992654 4867 scope.go:117] "RemoveContainer" containerID="b97361027d79323abac8b8c10feb5b6683ccf4a76aaa6aada100285864f82dae" Jan 26 11:22:36 crc kubenswrapper[4867]: I0126 11:22:36.008963 4867 scope.go:117] "RemoveContainer" containerID="4deb98f17f433fd2b4b2ffb352d38a21e7a46d7680d0cdcd4e67da663af753b1" Jan 26 11:22:36 crc kubenswrapper[4867]: I0126 11:22:36.425285 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Jan 26 11:22:36 crc kubenswrapper[4867]: I0126 11:22:36.425384 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 26 11:22:36 crc kubenswrapper[4867]: I0126 11:22:36.493545 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 26 11:22:36 crc kubenswrapper[4867]: I0126 11:22:36.493682 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 26 11:22:36 crc kubenswrapper[4867]: I0126 11:22:36.493725 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 26 11:22:36 crc kubenswrapper[4867]: I0126 11:22:36.493757 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: 
\"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 26 11:22:36 crc kubenswrapper[4867]: I0126 11:22:36.493813 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 11:22:36 crc kubenswrapper[4867]: I0126 11:22:36.493842 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 26 11:22:36 crc kubenswrapper[4867]: I0126 11:22:36.493885 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests" (OuterVolumeSpecName: "manifests") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "manifests". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 11:22:36 crc kubenswrapper[4867]: I0126 11:22:36.493921 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log" (OuterVolumeSpecName: "var-log") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-log". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 11:22:36 crc kubenswrapper[4867]: I0126 11:22:36.494010 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock" (OuterVolumeSpecName: "var-lock") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 11:22:36 crc kubenswrapper[4867]: I0126 11:22:36.494497 4867 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") on node \"crc\" DevicePath \"\"" Jan 26 11:22:36 crc kubenswrapper[4867]: I0126 11:22:36.494537 4867 reconciler_common.go:293] "Volume detached for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") on node \"crc\" DevicePath \"\"" Jan 26 11:22:36 crc kubenswrapper[4867]: I0126 11:22:36.494553 4867 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") on node \"crc\" DevicePath \"\"" Jan 26 11:22:36 crc kubenswrapper[4867]: I0126 11:22:36.494564 4867 reconciler_common.go:293] "Volume detached for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") on node \"crc\" DevicePath \"\"" Jan 26 11:22:36 crc kubenswrapper[4867]: I0126 11:22:36.503566 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir" (OuterVolumeSpecName: "pod-resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "pod-resource-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 11:22:36 crc kubenswrapper[4867]: I0126 11:22:36.579073 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Jan 26 11:22:36 crc kubenswrapper[4867]: I0126 11:22:36.579135 4867 generic.go:334] "Generic (PLEG): container finished" podID="f85e55b1a89d02b0cb034b1ea31ed45a" containerID="6841279546c3c0a084f5f637fbc7fb7dd3e507d77e50b879da2876f5159a68d0" exitCode=137 Jan 26 11:22:36 crc kubenswrapper[4867]: I0126 11:22:36.579259 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 26 11:22:36 crc kubenswrapper[4867]: I0126 11:22:36.586137 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="19cef52a-3ef4-4b1e-a52e-0ba6e01e49b5" path="/var/lib/kubelet/pods/19cef52a-3ef4-4b1e-a52e-0ba6e01e49b5/volumes" Jan 26 11:22:36 crc kubenswrapper[4867]: I0126 11:22:36.587964 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="331bacc3-9595-492a-9e20-ef8007ccc10a" path="/var/lib/kubelet/pods/331bacc3-9595-492a-9e20-ef8007ccc10a/volumes" Jan 26 11:22:36 crc kubenswrapper[4867]: I0126 11:22:36.589516 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4c3ed719-d8a0-4f47-b0f1-9e635825152a" path="/var/lib/kubelet/pods/4c3ed719-d8a0-4f47-b0f1-9e635825152a/volumes" Jan 26 11:22:36 crc kubenswrapper[4867]: I0126 11:22:36.597327 4867 reconciler_common.go:293] "Volume detached for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") on node \"crc\" DevicePath \"\"" Jan 26 11:22:36 crc kubenswrapper[4867]: I0126 11:22:36.601045 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ab404f5-5b14-49d4-80f4-2a84895d0a2f" 
path="/var/lib/kubelet/pods/6ab404f5-5b14-49d4-80f4-2a84895d0a2f/volumes" Jan 26 11:22:36 crc kubenswrapper[4867]: I0126 11:22:36.603212 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8e8b11fb-b146-4307-b94e-515815b10c58" path="/var/lib/kubelet/pods/8e8b11fb-b146-4307-b94e-515815b10c58/volumes" Jan 26 11:22:36 crc kubenswrapper[4867]: I0126 11:22:36.604839 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="aeb3191f-7e7a-4d94-b913-4f78b379f3e9" path="/var/lib/kubelet/pods/aeb3191f-7e7a-4d94-b913-4f78b379f3e9/volumes" Jan 26 11:22:36 crc kubenswrapper[4867]: I0126 11:22:36.609442 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bf513a52-cfc2-49df-be04-4976f7399901" path="/var/lib/kubelet/pods/bf513a52-cfc2-49df-be04-4976f7399901/volumes" Jan 26 11:22:36 crc kubenswrapper[4867]: I0126 11:22:36.611111 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" path="/var/lib/kubelet/pods/f85e55b1a89d02b0cb034b1ea31ed45a/volumes" Jan 26 11:22:36 crc kubenswrapper[4867]: I0126 11:22:36.612598 4867 scope.go:117] "RemoveContainer" containerID="6841279546c3c0a084f5f637fbc7fb7dd3e507d77e50b879da2876f5159a68d0" Jan 26 11:22:36 crc kubenswrapper[4867]: I0126 11:22:36.632785 4867 scope.go:117] "RemoveContainer" containerID="6841279546c3c0a084f5f637fbc7fb7dd3e507d77e50b879da2876f5159a68d0" Jan 26 11:22:36 crc kubenswrapper[4867]: E0126 11:22:36.633409 4867 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6841279546c3c0a084f5f637fbc7fb7dd3e507d77e50b879da2876f5159a68d0\": container with ID starting with 6841279546c3c0a084f5f637fbc7fb7dd3e507d77e50b879da2876f5159a68d0 not found: ID does not exist" containerID="6841279546c3c0a084f5f637fbc7fb7dd3e507d77e50b879da2876f5159a68d0" Jan 26 11:22:36 crc kubenswrapper[4867]: I0126 11:22:36.633502 4867 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6841279546c3c0a084f5f637fbc7fb7dd3e507d77e50b879da2876f5159a68d0"} err="failed to get container status \"6841279546c3c0a084f5f637fbc7fb7dd3e507d77e50b879da2876f5159a68d0\": rpc error: code = NotFound desc = could not find container \"6841279546c3c0a084f5f637fbc7fb7dd3e507d77e50b879da2876f5159a68d0\": container with ID starting with 6841279546c3c0a084f5f637fbc7fb7dd3e507d77e50b879da2876f5159a68d0 not found: ID does not exist" Jan 26 11:22:46 crc kubenswrapper[4867]: I0126 11:22:46.738408 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Jan 26 11:22:55 crc kubenswrapper[4867]: I0126 11:22:55.039855 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-n9jb5"] Jan 26 11:22:56 crc kubenswrapper[4867]: I0126 11:22:56.755845 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-9jc4l"] Jan 26 11:22:56 crc kubenswrapper[4867]: I0126 11:22:56.756945 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-879f6c89f-9jc4l" podUID="6670fa93-70e2-4047-b449-1bf939336210" containerName="controller-manager" containerID="cri-o://f4e414b6a8d8800939da7c8e93908abd837618d05e40109cc91a71f9a6a53344" gracePeriod=30 Jan 26 11:22:56 crc kubenswrapper[4867]: I0126 11:22:56.850147 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-m9tw6"] Jan 26 11:22:56 crc kubenswrapper[4867]: I0126 11:22:56.850929 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-m9tw6" podUID="64cfae17-8e43-4fd9-8f7c-2f4996b6351c" containerName="route-controller-manager" 
containerID="cri-o://55a0a6bf58bee853be5883f07a341ad31aa1a6badbcbf82f853130372ec1794a" gracePeriod=30 Jan 26 11:22:58 crc kubenswrapper[4867]: I0126 11:22:58.718922 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-9jc4l" event={"ID":"6670fa93-70e2-4047-b449-1bf939336210","Type":"ContainerDied","Data":"f4e414b6a8d8800939da7c8e93908abd837618d05e40109cc91a71f9a6a53344"} Jan 26 11:22:58 crc kubenswrapper[4867]: I0126 11:22:58.718787 4867 generic.go:334] "Generic (PLEG): container finished" podID="6670fa93-70e2-4047-b449-1bf939336210" containerID="f4e414b6a8d8800939da7c8e93908abd837618d05e40109cc91a71f9a6a53344" exitCode=0 Jan 26 11:22:58 crc kubenswrapper[4867]: I0126 11:22:58.722514 4867 generic.go:334] "Generic (PLEG): container finished" podID="64cfae17-8e43-4fd9-8f7c-2f4996b6351c" containerID="55a0a6bf58bee853be5883f07a341ad31aa1a6badbcbf82f853130372ec1794a" exitCode=0 Jan 26 11:22:58 crc kubenswrapper[4867]: I0126 11:22:58.722598 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-m9tw6" event={"ID":"64cfae17-8e43-4fd9-8f7c-2f4996b6351c","Type":"ContainerDied","Data":"55a0a6bf58bee853be5883f07a341ad31aa1a6badbcbf82f853130372ec1794a"} Jan 26 11:22:58 crc kubenswrapper[4867]: I0126 11:22:58.751235 4867 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-m9tw6 container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.9:8443/healthz\": dial tcp 10.217.0.9:8443: connect: connection refused" start-of-body= Jan 26 11:22:58 crc kubenswrapper[4867]: I0126 11:22:58.751307 4867 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-m9tw6" podUID="64cfae17-8e43-4fd9-8f7c-2f4996b6351c" containerName="route-controller-manager" probeResult="failure" 
output="Get \"https://10.217.0.9:8443/healthz\": dial tcp 10.217.0.9:8443: connect: connection refused" Jan 26 11:22:58 crc kubenswrapper[4867]: I0126 11:22:58.955602 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-9jc4l" Jan 26 11:22:58 crc kubenswrapper[4867]: I0126 11:22:58.956211 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6670fa93-70e2-4047-b449-1bf939336210-serving-cert\") pod \"6670fa93-70e2-4047-b449-1bf939336210\" (UID: \"6670fa93-70e2-4047-b449-1bf939336210\") " Jan 26 11:22:58 crc kubenswrapper[4867]: I0126 11:22:58.956264 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d8pn5\" (UniqueName: \"kubernetes.io/projected/6670fa93-70e2-4047-b449-1bf939336210-kube-api-access-d8pn5\") pod \"6670fa93-70e2-4047-b449-1bf939336210\" (UID: \"6670fa93-70e2-4047-b449-1bf939336210\") " Jan 26 11:22:58 crc kubenswrapper[4867]: I0126 11:22:58.956291 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6670fa93-70e2-4047-b449-1bf939336210-config\") pod \"6670fa93-70e2-4047-b449-1bf939336210\" (UID: \"6670fa93-70e2-4047-b449-1bf939336210\") " Jan 26 11:22:58 crc kubenswrapper[4867]: I0126 11:22:58.956308 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/6670fa93-70e2-4047-b449-1bf939336210-client-ca\") pod \"6670fa93-70e2-4047-b449-1bf939336210\" (UID: \"6670fa93-70e2-4047-b449-1bf939336210\") " Jan 26 11:22:58 crc kubenswrapper[4867]: I0126 11:22:58.957184 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6670fa93-70e2-4047-b449-1bf939336210-client-ca" (OuterVolumeSpecName: "client-ca") pod 
"6670fa93-70e2-4047-b449-1bf939336210" (UID: "6670fa93-70e2-4047-b449-1bf939336210"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 11:22:58 crc kubenswrapper[4867]: I0126 11:22:58.957275 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6670fa93-70e2-4047-b449-1bf939336210-config" (OuterVolumeSpecName: "config") pod "6670fa93-70e2-4047-b449-1bf939336210" (UID: "6670fa93-70e2-4047-b449-1bf939336210"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 11:22:58 crc kubenswrapper[4867]: I0126 11:22:58.957894 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6670fa93-70e2-4047-b449-1bf939336210-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "6670fa93-70e2-4047-b449-1bf939336210" (UID: "6670fa93-70e2-4047-b449-1bf939336210"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 11:22:58 crc kubenswrapper[4867]: I0126 11:22:58.957929 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/6670fa93-70e2-4047-b449-1bf939336210-proxy-ca-bundles\") pod \"6670fa93-70e2-4047-b449-1bf939336210\" (UID: \"6670fa93-70e2-4047-b449-1bf939336210\") " Jan 26 11:22:58 crc kubenswrapper[4867]: I0126 11:22:58.958126 4867 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6670fa93-70e2-4047-b449-1bf939336210-config\") on node \"crc\" DevicePath \"\"" Jan 26 11:22:58 crc kubenswrapper[4867]: I0126 11:22:58.958139 4867 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/6670fa93-70e2-4047-b449-1bf939336210-client-ca\") on node \"crc\" DevicePath \"\"" Jan 26 11:22:58 crc kubenswrapper[4867]: I0126 11:22:58.958151 4867 
reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/6670fa93-70e2-4047-b449-1bf939336210-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 26 11:22:58 crc kubenswrapper[4867]: I0126 11:22:58.964574 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6670fa93-70e2-4047-b449-1bf939336210-kube-api-access-d8pn5" (OuterVolumeSpecName: "kube-api-access-d8pn5") pod "6670fa93-70e2-4047-b449-1bf939336210" (UID: "6670fa93-70e2-4047-b449-1bf939336210"). InnerVolumeSpecName "kube-api-access-d8pn5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 11:22:58 crc kubenswrapper[4867]: I0126 11:22:58.965745 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6670fa93-70e2-4047-b449-1bf939336210-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "6670fa93-70e2-4047-b449-1bf939336210" (UID: "6670fa93-70e2-4047-b449-1bf939336210"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 11:22:58 crc kubenswrapper[4867]: I0126 11:22:58.996670 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-796947dbf8-2rzj7"] Jan 26 11:22:58 crc kubenswrapper[4867]: E0126 11:22:58.996957 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="331bacc3-9595-492a-9e20-ef8007ccc10a" containerName="extract-utilities" Jan 26 11:22:58 crc kubenswrapper[4867]: I0126 11:22:58.996974 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="331bacc3-9595-492a-9e20-ef8007ccc10a" containerName="extract-utilities" Jan 26 11:22:58 crc kubenswrapper[4867]: E0126 11:22:58.996992 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bf513a52-cfc2-49df-be04-4976f7399901" containerName="registry-server" Jan 26 11:22:58 crc kubenswrapper[4867]: I0126 11:22:58.997000 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="bf513a52-cfc2-49df-be04-4976f7399901" containerName="registry-server" Jan 26 11:22:58 crc kubenswrapper[4867]: E0126 11:22:58.997011 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aeb3191f-7e7a-4d94-b913-4f78b379f3e9" containerName="extract-utilities" Jan 26 11:22:58 crc kubenswrapper[4867]: I0126 11:22:58.997020 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="aeb3191f-7e7a-4d94-b913-4f78b379f3e9" containerName="extract-utilities" Jan 26 11:22:58 crc kubenswrapper[4867]: E0126 11:22:58.997027 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bf513a52-cfc2-49df-be04-4976f7399901" containerName="extract-content" Jan 26 11:22:58 crc kubenswrapper[4867]: I0126 11:22:58.997036 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="bf513a52-cfc2-49df-be04-4976f7399901" containerName="extract-content" Jan 26 11:22:58 crc kubenswrapper[4867]: E0126 11:22:58.997048 4867 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="8e8b11fb-b146-4307-b94e-515815b10c58" containerName="registry-server" Jan 26 11:22:58 crc kubenswrapper[4867]: I0126 11:22:58.997056 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="8e8b11fb-b146-4307-b94e-515815b10c58" containerName="registry-server" Jan 26 11:22:58 crc kubenswrapper[4867]: E0126 11:22:58.997067 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bf513a52-cfc2-49df-be04-4976f7399901" containerName="extract-utilities" Jan 26 11:22:58 crc kubenswrapper[4867]: I0126 11:22:58.997074 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="bf513a52-cfc2-49df-be04-4976f7399901" containerName="extract-utilities" Jan 26 11:22:58 crc kubenswrapper[4867]: E0126 11:22:58.997084 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4c3ed719-d8a0-4f47-b0f1-9e635825152a" containerName="extract-content" Jan 26 11:22:58 crc kubenswrapper[4867]: I0126 11:22:58.997090 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="4c3ed719-d8a0-4f47-b0f1-9e635825152a" containerName="extract-content" Jan 26 11:22:58 crc kubenswrapper[4867]: E0126 11:22:58.997100 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="19cef52a-3ef4-4b1e-a52e-0ba6e01e49b5" containerName="extract-content" Jan 26 11:22:58 crc kubenswrapper[4867]: I0126 11:22:58.997107 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="19cef52a-3ef4-4b1e-a52e-0ba6e01e49b5" containerName="extract-content" Jan 26 11:22:58 crc kubenswrapper[4867]: E0126 11:22:58.997114 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="19cef52a-3ef4-4b1e-a52e-0ba6e01e49b5" containerName="extract-utilities" Jan 26 11:22:58 crc kubenswrapper[4867]: I0126 11:22:58.997120 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="19cef52a-3ef4-4b1e-a52e-0ba6e01e49b5" containerName="extract-utilities" Jan 26 11:22:58 crc kubenswrapper[4867]: E0126 11:22:58.997129 4867 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="aeb3191f-7e7a-4d94-b913-4f78b379f3e9" containerName="registry-server" Jan 26 11:22:58 crc kubenswrapper[4867]: I0126 11:22:58.997137 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="aeb3191f-7e7a-4d94-b913-4f78b379f3e9" containerName="registry-server" Jan 26 11:22:58 crc kubenswrapper[4867]: E0126 11:22:58.997149 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6ab404f5-5b14-49d4-80f4-2a84895d0a2f" containerName="marketplace-operator" Jan 26 11:22:58 crc kubenswrapper[4867]: I0126 11:22:58.997157 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="6ab404f5-5b14-49d4-80f4-2a84895d0a2f" containerName="marketplace-operator" Jan 26 11:22:58 crc kubenswrapper[4867]: E0126 11:22:58.997167 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4c3ed719-d8a0-4f47-b0f1-9e635825152a" containerName="registry-server" Jan 26 11:22:58 crc kubenswrapper[4867]: I0126 11:22:58.997174 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="4c3ed719-d8a0-4f47-b0f1-9e635825152a" containerName="registry-server" Jan 26 11:22:58 crc kubenswrapper[4867]: E0126 11:22:58.997185 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="331bacc3-9595-492a-9e20-ef8007ccc10a" containerName="extract-content" Jan 26 11:22:58 crc kubenswrapper[4867]: I0126 11:22:58.997192 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="331bacc3-9595-492a-9e20-ef8007ccc10a" containerName="extract-content" Jan 26 11:22:58 crc kubenswrapper[4867]: E0126 11:22:58.997203 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6670fa93-70e2-4047-b449-1bf939336210" containerName="controller-manager" Jan 26 11:22:58 crc kubenswrapper[4867]: I0126 11:22:58.997213 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="6670fa93-70e2-4047-b449-1bf939336210" containerName="controller-manager" Jan 26 11:22:58 crc kubenswrapper[4867]: E0126 11:22:58.997243 4867 cpu_manager.go:410] "RemoveStaleState: removing 
container" podUID="aeb3191f-7e7a-4d94-b913-4f78b379f3e9" containerName="extract-content" Jan 26 11:22:58 crc kubenswrapper[4867]: I0126 11:22:58.997251 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="aeb3191f-7e7a-4d94-b913-4f78b379f3e9" containerName="extract-content" Jan 26 11:22:58 crc kubenswrapper[4867]: E0126 11:22:58.997262 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="19cef52a-3ef4-4b1e-a52e-0ba6e01e49b5" containerName="registry-server" Jan 26 11:22:58 crc kubenswrapper[4867]: I0126 11:22:58.997271 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="19cef52a-3ef4-4b1e-a52e-0ba6e01e49b5" containerName="registry-server" Jan 26 11:22:58 crc kubenswrapper[4867]: E0126 11:22:58.997280 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="331bacc3-9595-492a-9e20-ef8007ccc10a" containerName="registry-server" Jan 26 11:22:58 crc kubenswrapper[4867]: I0126 11:22:58.997288 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="331bacc3-9595-492a-9e20-ef8007ccc10a" containerName="registry-server" Jan 26 11:22:58 crc kubenswrapper[4867]: E0126 11:22:58.997300 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4c3ed719-d8a0-4f47-b0f1-9e635825152a" containerName="extract-utilities" Jan 26 11:22:58 crc kubenswrapper[4867]: I0126 11:22:58.997308 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="4c3ed719-d8a0-4f47-b0f1-9e635825152a" containerName="extract-utilities" Jan 26 11:22:58 crc kubenswrapper[4867]: E0126 11:22:58.997319 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8e8b11fb-b146-4307-b94e-515815b10c58" containerName="extract-content" Jan 26 11:22:58 crc kubenswrapper[4867]: I0126 11:22:58.997328 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="8e8b11fb-b146-4307-b94e-515815b10c58" containerName="extract-content" Jan 26 11:22:58 crc kubenswrapper[4867]: E0126 11:22:58.997336 4867 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="8e8b11fb-b146-4307-b94e-515815b10c58" containerName="extract-utilities" Jan 26 11:22:58 crc kubenswrapper[4867]: I0126 11:22:58.997343 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="8e8b11fb-b146-4307-b94e-515815b10c58" containerName="extract-utilities" Jan 26 11:22:58 crc kubenswrapper[4867]: I0126 11:22:58.997457 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="6670fa93-70e2-4047-b449-1bf939336210" containerName="controller-manager" Jan 26 11:22:58 crc kubenswrapper[4867]: I0126 11:22:58.997470 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="bf513a52-cfc2-49df-be04-4976f7399901" containerName="registry-server" Jan 26 11:22:58 crc kubenswrapper[4867]: I0126 11:22:58.997481 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="331bacc3-9595-492a-9e20-ef8007ccc10a" containerName="registry-server" Jan 26 11:22:58 crc kubenswrapper[4867]: I0126 11:22:58.997501 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="19cef52a-3ef4-4b1e-a52e-0ba6e01e49b5" containerName="registry-server" Jan 26 11:22:58 crc kubenswrapper[4867]: I0126 11:22:58.997515 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="8e8b11fb-b146-4307-b94e-515815b10c58" containerName="registry-server" Jan 26 11:22:58 crc kubenswrapper[4867]: I0126 11:22:58.997527 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="4c3ed719-d8a0-4f47-b0f1-9e635825152a" containerName="registry-server" Jan 26 11:22:58 crc kubenswrapper[4867]: I0126 11:22:58.997537 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="aeb3191f-7e7a-4d94-b913-4f78b379f3e9" containerName="registry-server" Jan 26 11:22:58 crc kubenswrapper[4867]: I0126 11:22:58.997549 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="6ab404f5-5b14-49d4-80f4-2a84895d0a2f" containerName="marketplace-operator" Jan 26 11:22:58 crc kubenswrapper[4867]: I0126 11:22:58.998380 4867 util.go:30] "No sandbox 
for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-796947dbf8-2rzj7" Jan 26 11:22:59 crc kubenswrapper[4867]: I0126 11:22:59.009109 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-796947dbf8-2rzj7"] Jan 26 11:22:59 crc kubenswrapper[4867]: I0126 11:22:59.027170 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-m9tw6" Jan 26 11:22:59 crc kubenswrapper[4867]: I0126 11:22:59.058916 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/64cfae17-8e43-4fd9-8f7c-2f4996b6351c-serving-cert\") pod \"64cfae17-8e43-4fd9-8f7c-2f4996b6351c\" (UID: \"64cfae17-8e43-4fd9-8f7c-2f4996b6351c\") " Jan 26 11:22:59 crc kubenswrapper[4867]: I0126 11:22:59.058998 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/64cfae17-8e43-4fd9-8f7c-2f4996b6351c-client-ca\") pod \"64cfae17-8e43-4fd9-8f7c-2f4996b6351c\" (UID: \"64cfae17-8e43-4fd9-8f7c-2f4996b6351c\") " Jan 26 11:22:59 crc kubenswrapper[4867]: I0126 11:22:59.059061 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jpjh9\" (UniqueName: \"kubernetes.io/projected/64cfae17-8e43-4fd9-8f7c-2f4996b6351c-kube-api-access-jpjh9\") pod \"64cfae17-8e43-4fd9-8f7c-2f4996b6351c\" (UID: \"64cfae17-8e43-4fd9-8f7c-2f4996b6351c\") " Jan 26 11:22:59 crc kubenswrapper[4867]: I0126 11:22:59.059090 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/64cfae17-8e43-4fd9-8f7c-2f4996b6351c-config\") pod \"64cfae17-8e43-4fd9-8f7c-2f4996b6351c\" (UID: \"64cfae17-8e43-4fd9-8f7c-2f4996b6351c\") " Jan 26 11:22:59 crc kubenswrapper[4867]: I0126 11:22:59.059422 
4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5ac6b8b2-5238-4a01-8740-0208f94df4a7-config\") pod \"controller-manager-796947dbf8-2rzj7\" (UID: \"5ac6b8b2-5238-4a01-8740-0208f94df4a7\") " pod="openshift-controller-manager/controller-manager-796947dbf8-2rzj7" Jan 26 11:22:59 crc kubenswrapper[4867]: I0126 11:22:59.059472 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/5ac6b8b2-5238-4a01-8740-0208f94df4a7-proxy-ca-bundles\") pod \"controller-manager-796947dbf8-2rzj7\" (UID: \"5ac6b8b2-5238-4a01-8740-0208f94df4a7\") " pod="openshift-controller-manager/controller-manager-796947dbf8-2rzj7" Jan 26 11:22:59 crc kubenswrapper[4867]: I0126 11:22:59.059530 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5ac6b8b2-5238-4a01-8740-0208f94df4a7-client-ca\") pod \"controller-manager-796947dbf8-2rzj7\" (UID: \"5ac6b8b2-5238-4a01-8740-0208f94df4a7\") " pod="openshift-controller-manager/controller-manager-796947dbf8-2rzj7" Jan 26 11:22:59 crc kubenswrapper[4867]: I0126 11:22:59.059557 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5ac6b8b2-5238-4a01-8740-0208f94df4a7-serving-cert\") pod \"controller-manager-796947dbf8-2rzj7\" (UID: \"5ac6b8b2-5238-4a01-8740-0208f94df4a7\") " pod="openshift-controller-manager/controller-manager-796947dbf8-2rzj7" Jan 26 11:22:59 crc kubenswrapper[4867]: I0126 11:22:59.059637 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fcbnm\" (UniqueName: \"kubernetes.io/projected/5ac6b8b2-5238-4a01-8740-0208f94df4a7-kube-api-access-fcbnm\") pod 
\"controller-manager-796947dbf8-2rzj7\" (UID: \"5ac6b8b2-5238-4a01-8740-0208f94df4a7\") " pod="openshift-controller-manager/controller-manager-796947dbf8-2rzj7" Jan 26 11:22:59 crc kubenswrapper[4867]: I0126 11:22:59.059694 4867 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6670fa93-70e2-4047-b449-1bf939336210-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 26 11:22:59 crc kubenswrapper[4867]: I0126 11:22:59.059710 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d8pn5\" (UniqueName: \"kubernetes.io/projected/6670fa93-70e2-4047-b449-1bf939336210-kube-api-access-d8pn5\") on node \"crc\" DevicePath \"\"" Jan 26 11:22:59 crc kubenswrapper[4867]: I0126 11:22:59.060798 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/64cfae17-8e43-4fd9-8f7c-2f4996b6351c-client-ca" (OuterVolumeSpecName: "client-ca") pod "64cfae17-8e43-4fd9-8f7c-2f4996b6351c" (UID: "64cfae17-8e43-4fd9-8f7c-2f4996b6351c"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 11:22:59 crc kubenswrapper[4867]: I0126 11:22:59.061774 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/64cfae17-8e43-4fd9-8f7c-2f4996b6351c-config" (OuterVolumeSpecName: "config") pod "64cfae17-8e43-4fd9-8f7c-2f4996b6351c" (UID: "64cfae17-8e43-4fd9-8f7c-2f4996b6351c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 11:22:59 crc kubenswrapper[4867]: I0126 11:22:59.064400 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/64cfae17-8e43-4fd9-8f7c-2f4996b6351c-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "64cfae17-8e43-4fd9-8f7c-2f4996b6351c" (UID: "64cfae17-8e43-4fd9-8f7c-2f4996b6351c"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 11:22:59 crc kubenswrapper[4867]: I0126 11:22:59.065111 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/64cfae17-8e43-4fd9-8f7c-2f4996b6351c-kube-api-access-jpjh9" (OuterVolumeSpecName: "kube-api-access-jpjh9") pod "64cfae17-8e43-4fd9-8f7c-2f4996b6351c" (UID: "64cfae17-8e43-4fd9-8f7c-2f4996b6351c"). InnerVolumeSpecName "kube-api-access-jpjh9". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 11:22:59 crc kubenswrapper[4867]: I0126 11:22:59.159984 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fcbnm\" (UniqueName: \"kubernetes.io/projected/5ac6b8b2-5238-4a01-8740-0208f94df4a7-kube-api-access-fcbnm\") pod \"controller-manager-796947dbf8-2rzj7\" (UID: \"5ac6b8b2-5238-4a01-8740-0208f94df4a7\") " pod="openshift-controller-manager/controller-manager-796947dbf8-2rzj7" Jan 26 11:22:59 crc kubenswrapper[4867]: I0126 11:22:59.160053 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5ac6b8b2-5238-4a01-8740-0208f94df4a7-config\") pod \"controller-manager-796947dbf8-2rzj7\" (UID: \"5ac6b8b2-5238-4a01-8740-0208f94df4a7\") " pod="openshift-controller-manager/controller-manager-796947dbf8-2rzj7" Jan 26 11:22:59 crc kubenswrapper[4867]: I0126 11:22:59.160089 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/5ac6b8b2-5238-4a01-8740-0208f94df4a7-proxy-ca-bundles\") pod \"controller-manager-796947dbf8-2rzj7\" (UID: \"5ac6b8b2-5238-4a01-8740-0208f94df4a7\") " pod="openshift-controller-manager/controller-manager-796947dbf8-2rzj7" Jan 26 11:22:59 crc kubenswrapper[4867]: I0126 11:22:59.160134 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: 
\"kubernetes.io/configmap/5ac6b8b2-5238-4a01-8740-0208f94df4a7-client-ca\") pod \"controller-manager-796947dbf8-2rzj7\" (UID: \"5ac6b8b2-5238-4a01-8740-0208f94df4a7\") " pod="openshift-controller-manager/controller-manager-796947dbf8-2rzj7" Jan 26 11:22:59 crc kubenswrapper[4867]: I0126 11:22:59.160157 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5ac6b8b2-5238-4a01-8740-0208f94df4a7-serving-cert\") pod \"controller-manager-796947dbf8-2rzj7\" (UID: \"5ac6b8b2-5238-4a01-8740-0208f94df4a7\") " pod="openshift-controller-manager/controller-manager-796947dbf8-2rzj7" Jan 26 11:22:59 crc kubenswrapper[4867]: I0126 11:22:59.160242 4867 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/64cfae17-8e43-4fd9-8f7c-2f4996b6351c-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 26 11:22:59 crc kubenswrapper[4867]: I0126 11:22:59.160441 4867 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/64cfae17-8e43-4fd9-8f7c-2f4996b6351c-client-ca\") on node \"crc\" DevicePath \"\"" Jan 26 11:22:59 crc kubenswrapper[4867]: I0126 11:22:59.160643 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jpjh9\" (UniqueName: \"kubernetes.io/projected/64cfae17-8e43-4fd9-8f7c-2f4996b6351c-kube-api-access-jpjh9\") on node \"crc\" DevicePath \"\"" Jan 26 11:22:59 crc kubenswrapper[4867]: I0126 11:22:59.160677 4867 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/64cfae17-8e43-4fd9-8f7c-2f4996b6351c-config\") on node \"crc\" DevicePath \"\"" Jan 26 11:22:59 crc kubenswrapper[4867]: I0126 11:22:59.161903 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5ac6b8b2-5238-4a01-8740-0208f94df4a7-client-ca\") pod \"controller-manager-796947dbf8-2rzj7\" (UID: 
\"5ac6b8b2-5238-4a01-8740-0208f94df4a7\") " pod="openshift-controller-manager/controller-manager-796947dbf8-2rzj7" Jan 26 11:22:59 crc kubenswrapper[4867]: I0126 11:22:59.162481 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/5ac6b8b2-5238-4a01-8740-0208f94df4a7-proxy-ca-bundles\") pod \"controller-manager-796947dbf8-2rzj7\" (UID: \"5ac6b8b2-5238-4a01-8740-0208f94df4a7\") " pod="openshift-controller-manager/controller-manager-796947dbf8-2rzj7" Jan 26 11:22:59 crc kubenswrapper[4867]: I0126 11:22:59.163801 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5ac6b8b2-5238-4a01-8740-0208f94df4a7-config\") pod \"controller-manager-796947dbf8-2rzj7\" (UID: \"5ac6b8b2-5238-4a01-8740-0208f94df4a7\") " pod="openshift-controller-manager/controller-manager-796947dbf8-2rzj7" Jan 26 11:22:59 crc kubenswrapper[4867]: I0126 11:22:59.169922 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5ac6b8b2-5238-4a01-8740-0208f94df4a7-serving-cert\") pod \"controller-manager-796947dbf8-2rzj7\" (UID: \"5ac6b8b2-5238-4a01-8740-0208f94df4a7\") " pod="openshift-controller-manager/controller-manager-796947dbf8-2rzj7" Jan 26 11:22:59 crc kubenswrapper[4867]: I0126 11:22:59.178955 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fcbnm\" (UniqueName: \"kubernetes.io/projected/5ac6b8b2-5238-4a01-8740-0208f94df4a7-kube-api-access-fcbnm\") pod \"controller-manager-796947dbf8-2rzj7\" (UID: \"5ac6b8b2-5238-4a01-8740-0208f94df4a7\") " pod="openshift-controller-manager/controller-manager-796947dbf8-2rzj7" Jan 26 11:22:59 crc kubenswrapper[4867]: I0126 11:22:59.348739 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-796947dbf8-2rzj7" Jan 26 11:22:59 crc kubenswrapper[4867]: I0126 11:22:59.575141 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-796947dbf8-2rzj7"] Jan 26 11:22:59 crc kubenswrapper[4867]: I0126 11:22:59.729787 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-796947dbf8-2rzj7" event={"ID":"5ac6b8b2-5238-4a01-8740-0208f94df4a7","Type":"ContainerStarted","Data":"bf5b2917ebb48a5ccddd67545b507a4f73618c5bcf14e7e67088ac53aaf5f06a"} Jan 26 11:22:59 crc kubenswrapper[4867]: I0126 11:22:59.731409 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-9jc4l" event={"ID":"6670fa93-70e2-4047-b449-1bf939336210","Type":"ContainerDied","Data":"3dee0500a17649729de299830f2db44eba3eceb8359430ab5d497f75d0895f48"} Jan 26 11:22:59 crc kubenswrapper[4867]: I0126 11:22:59.731463 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-9jc4l" Jan 26 11:22:59 crc kubenswrapper[4867]: I0126 11:22:59.731551 4867 scope.go:117] "RemoveContainer" containerID="f4e414b6a8d8800939da7c8e93908abd837618d05e40109cc91a71f9a6a53344" Jan 26 11:22:59 crc kubenswrapper[4867]: I0126 11:22:59.734983 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-m9tw6" event={"ID":"64cfae17-8e43-4fd9-8f7c-2f4996b6351c","Type":"ContainerDied","Data":"293bd1fc7a7e05043daefa98e31930f31b4a55e8d602108287a79654c0f3f34d"} Jan 26 11:22:59 crc kubenswrapper[4867]: I0126 11:22:59.735080 4867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-m9tw6" Jan 26 11:22:59 crc kubenswrapper[4867]: I0126 11:22:59.765449 4867 scope.go:117] "RemoveContainer" containerID="55a0a6bf58bee853be5883f07a341ad31aa1a6badbcbf82f853130372ec1794a" Jan 26 11:22:59 crc kubenswrapper[4867]: I0126 11:22:59.792060 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-m9tw6"] Jan 26 11:22:59 crc kubenswrapper[4867]: I0126 11:22:59.795860 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-m9tw6"] Jan 26 11:22:59 crc kubenswrapper[4867]: I0126 11:22:59.799516 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6675f987f6-s84cf"] Jan 26 11:22:59 crc kubenswrapper[4867]: E0126 11:22:59.799796 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="64cfae17-8e43-4fd9-8f7c-2f4996b6351c" containerName="route-controller-manager" Jan 26 11:22:59 crc kubenswrapper[4867]: I0126 11:22:59.799821 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="64cfae17-8e43-4fd9-8f7c-2f4996b6351c" containerName="route-controller-manager" Jan 26 11:22:59 crc kubenswrapper[4867]: I0126 11:22:59.799941 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="64cfae17-8e43-4fd9-8f7c-2f4996b6351c" containerName="route-controller-manager" Jan 26 11:22:59 crc kubenswrapper[4867]: I0126 11:22:59.803816 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6675f987f6-s84cf" Jan 26 11:22:59 crc kubenswrapper[4867]: I0126 11:22:59.808031 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Jan 26 11:22:59 crc kubenswrapper[4867]: I0126 11:22:59.808492 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Jan 26 11:22:59 crc kubenswrapper[4867]: I0126 11:22:59.808837 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Jan 26 11:22:59 crc kubenswrapper[4867]: I0126 11:22:59.808987 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Jan 26 11:22:59 crc kubenswrapper[4867]: I0126 11:22:59.809105 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Jan 26 11:22:59 crc kubenswrapper[4867]: I0126 11:22:59.809252 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Jan 26 11:22:59 crc kubenswrapper[4867]: I0126 11:22:59.814965 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-9jc4l"] Jan 26 11:22:59 crc kubenswrapper[4867]: I0126 11:22:59.831011 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-9jc4l"] Jan 26 11:22:59 crc kubenswrapper[4867]: I0126 11:22:59.840540 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6675f987f6-s84cf"] Jan 26 11:22:59 crc kubenswrapper[4867]: I0126 11:22:59.877227 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"serving-cert\" (UniqueName: \"kubernetes.io/secret/c8497c98-2d25-41e0-bf50-cb7a0ad8d8f3-serving-cert\") pod \"route-controller-manager-6675f987f6-s84cf\" (UID: \"c8497c98-2d25-41e0-bf50-cb7a0ad8d8f3\") " pod="openshift-route-controller-manager/route-controller-manager-6675f987f6-s84cf" Jan 26 11:22:59 crc kubenswrapper[4867]: I0126 11:22:59.877296 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c8497c98-2d25-41e0-bf50-cb7a0ad8d8f3-client-ca\") pod \"route-controller-manager-6675f987f6-s84cf\" (UID: \"c8497c98-2d25-41e0-bf50-cb7a0ad8d8f3\") " pod="openshift-route-controller-manager/route-controller-manager-6675f987f6-s84cf" Jan 26 11:22:59 crc kubenswrapper[4867]: I0126 11:22:59.877330 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c8497c98-2d25-41e0-bf50-cb7a0ad8d8f3-config\") pod \"route-controller-manager-6675f987f6-s84cf\" (UID: \"c8497c98-2d25-41e0-bf50-cb7a0ad8d8f3\") " pod="openshift-route-controller-manager/route-controller-manager-6675f987f6-s84cf" Jan 26 11:22:59 crc kubenswrapper[4867]: I0126 11:22:59.877363 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zfxdg\" (UniqueName: \"kubernetes.io/projected/c8497c98-2d25-41e0-bf50-cb7a0ad8d8f3-kube-api-access-zfxdg\") pod \"route-controller-manager-6675f987f6-s84cf\" (UID: \"c8497c98-2d25-41e0-bf50-cb7a0ad8d8f3\") " pod="openshift-route-controller-manager/route-controller-manager-6675f987f6-s84cf" Jan 26 11:22:59 crc kubenswrapper[4867]: I0126 11:22:59.978604 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c8497c98-2d25-41e0-bf50-cb7a0ad8d8f3-serving-cert\") pod \"route-controller-manager-6675f987f6-s84cf\" (UID: 
\"c8497c98-2d25-41e0-bf50-cb7a0ad8d8f3\") " pod="openshift-route-controller-manager/route-controller-manager-6675f987f6-s84cf" Jan 26 11:22:59 crc kubenswrapper[4867]: I0126 11:22:59.978666 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c8497c98-2d25-41e0-bf50-cb7a0ad8d8f3-client-ca\") pod \"route-controller-manager-6675f987f6-s84cf\" (UID: \"c8497c98-2d25-41e0-bf50-cb7a0ad8d8f3\") " pod="openshift-route-controller-manager/route-controller-manager-6675f987f6-s84cf" Jan 26 11:22:59 crc kubenswrapper[4867]: I0126 11:22:59.978704 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c8497c98-2d25-41e0-bf50-cb7a0ad8d8f3-config\") pod \"route-controller-manager-6675f987f6-s84cf\" (UID: \"c8497c98-2d25-41e0-bf50-cb7a0ad8d8f3\") " pod="openshift-route-controller-manager/route-controller-manager-6675f987f6-s84cf" Jan 26 11:22:59 crc kubenswrapper[4867]: I0126 11:22:59.979132 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zfxdg\" (UniqueName: \"kubernetes.io/projected/c8497c98-2d25-41e0-bf50-cb7a0ad8d8f3-kube-api-access-zfxdg\") pod \"route-controller-manager-6675f987f6-s84cf\" (UID: \"c8497c98-2d25-41e0-bf50-cb7a0ad8d8f3\") " pod="openshift-route-controller-manager/route-controller-manager-6675f987f6-s84cf" Jan 26 11:22:59 crc kubenswrapper[4867]: I0126 11:22:59.979925 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c8497c98-2d25-41e0-bf50-cb7a0ad8d8f3-client-ca\") pod \"route-controller-manager-6675f987f6-s84cf\" (UID: \"c8497c98-2d25-41e0-bf50-cb7a0ad8d8f3\") " pod="openshift-route-controller-manager/route-controller-manager-6675f987f6-s84cf" Jan 26 11:22:59 crc kubenswrapper[4867]: I0126 11:22:59.980111 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"config\" (UniqueName: \"kubernetes.io/configmap/c8497c98-2d25-41e0-bf50-cb7a0ad8d8f3-config\") pod \"route-controller-manager-6675f987f6-s84cf\" (UID: \"c8497c98-2d25-41e0-bf50-cb7a0ad8d8f3\") " pod="openshift-route-controller-manager/route-controller-manager-6675f987f6-s84cf" Jan 26 11:22:59 crc kubenswrapper[4867]: I0126 11:22:59.987676 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c8497c98-2d25-41e0-bf50-cb7a0ad8d8f3-serving-cert\") pod \"route-controller-manager-6675f987f6-s84cf\" (UID: \"c8497c98-2d25-41e0-bf50-cb7a0ad8d8f3\") " pod="openshift-route-controller-manager/route-controller-manager-6675f987f6-s84cf" Jan 26 11:23:00 crc kubenswrapper[4867]: I0126 11:23:00.000909 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zfxdg\" (UniqueName: \"kubernetes.io/projected/c8497c98-2d25-41e0-bf50-cb7a0ad8d8f3-kube-api-access-zfxdg\") pod \"route-controller-manager-6675f987f6-s84cf\" (UID: \"c8497c98-2d25-41e0-bf50-cb7a0ad8d8f3\") " pod="openshift-route-controller-manager/route-controller-manager-6675f987f6-s84cf" Jan 26 11:23:00 crc kubenswrapper[4867]: I0126 11:23:00.124640 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6675f987f6-s84cf" Jan 26 11:23:00 crc kubenswrapper[4867]: I0126 11:23:00.438344 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6675f987f6-s84cf"] Jan 26 11:23:00 crc kubenswrapper[4867]: I0126 11:23:00.570242 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="64cfae17-8e43-4fd9-8f7c-2f4996b6351c" path="/var/lib/kubelet/pods/64cfae17-8e43-4fd9-8f7c-2f4996b6351c/volumes" Jan 26 11:23:00 crc kubenswrapper[4867]: I0126 11:23:00.571270 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6670fa93-70e2-4047-b449-1bf939336210" path="/var/lib/kubelet/pods/6670fa93-70e2-4047-b449-1bf939336210/volumes" Jan 26 11:23:00 crc kubenswrapper[4867]: I0126 11:23:00.745043 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-796947dbf8-2rzj7" event={"ID":"5ac6b8b2-5238-4a01-8740-0208f94df4a7","Type":"ContainerStarted","Data":"191f46dae64349c07145eff4626eee258cca3d672869d7c4ea90028f2edab549"} Jan 26 11:23:00 crc kubenswrapper[4867]: I0126 11:23:00.745412 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-796947dbf8-2rzj7" Jan 26 11:23:00 crc kubenswrapper[4867]: I0126 11:23:00.749831 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6675f987f6-s84cf" event={"ID":"c8497c98-2d25-41e0-bf50-cb7a0ad8d8f3","Type":"ContainerStarted","Data":"cd30f7af8fa6a90d04c4844d43711528aaf5bd760e520d02217a732db237854f"} Jan 26 11:23:00 crc kubenswrapper[4867]: I0126 11:23:00.749903 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6675f987f6-s84cf" 
event={"ID":"c8497c98-2d25-41e0-bf50-cb7a0ad8d8f3","Type":"ContainerStarted","Data":"bfc4b4bc35d580dc6d6468cfec57f94190f6a268059e8fe20650a2c063792d2e"} Jan 26 11:23:00 crc kubenswrapper[4867]: I0126 11:23:00.749932 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6675f987f6-s84cf" Jan 26 11:23:00 crc kubenswrapper[4867]: I0126 11:23:00.750316 4867 patch_prober.go:28] interesting pod/route-controller-manager-6675f987f6-s84cf container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.58:8443/healthz\": dial tcp 10.217.0.58:8443: connect: connection refused" start-of-body= Jan 26 11:23:00 crc kubenswrapper[4867]: I0126 11:23:00.750377 4867 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6675f987f6-s84cf" podUID="c8497c98-2d25-41e0-bf50-cb7a0ad8d8f3" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.58:8443/healthz\": dial tcp 10.217.0.58:8443: connect: connection refused" Jan 26 11:23:00 crc kubenswrapper[4867]: I0126 11:23:00.751925 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-796947dbf8-2rzj7" Jan 26 11:23:00 crc kubenswrapper[4867]: I0126 11:23:00.802478 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-796947dbf8-2rzj7" podStartSLOduration=3.8024569809999997 podStartE2EDuration="3.802456981s" podCreationTimestamp="2026-01-26 11:22:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 11:23:00.77270945 +0000 UTC m=+330.471284380" watchObservedRunningTime="2026-01-26 11:23:00.802456981 +0000 UTC m=+330.501031901" Jan 26 11:23:00 crc 
kubenswrapper[4867]: I0126 11:23:00.802650 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6675f987f6-s84cf" podStartSLOduration=3.802645026 podStartE2EDuration="3.802645026s" podCreationTimestamp="2026-01-26 11:22:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 11:23:00.802289406 +0000 UTC m=+330.500864306" watchObservedRunningTime="2026-01-26 11:23:00.802645026 +0000 UTC m=+330.501219936" Jan 26 11:23:01 crc kubenswrapper[4867]: I0126 11:23:01.766866 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6675f987f6-s84cf" Jan 26 11:23:01 crc kubenswrapper[4867]: I0126 11:23:01.827962 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Jan 26 11:23:06 crc kubenswrapper[4867]: I0126 11:23:06.294824 4867 patch_prober.go:28] interesting pod/machine-config-daemon-g6cth container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 11:23:06 crc kubenswrapper[4867]: I0126 11:23:06.295133 4867 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-g6cth" podUID="115cad9f-057f-4e63-b408-8fa7a358a191" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 11:23:15 crc kubenswrapper[4867]: I0126 11:23:15.182589 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-jqxkw"] Jan 26 11:23:15 crc kubenswrapper[4867]: I0126 11:23:15.185437 4867 util.go:30] "No sandbox for 
pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-jqxkw" Jan 26 11:23:15 crc kubenswrapper[4867]: I0126 11:23:15.188093 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Jan 26 11:23:15 crc kubenswrapper[4867]: I0126 11:23:15.198604 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-jqxkw"] Jan 26 11:23:15 crc kubenswrapper[4867]: I0126 11:23:15.298408 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f0d09d9b-e570-45a9-9511-c95b88f2ffd7-utilities\") pod \"certified-operators-jqxkw\" (UID: \"f0d09d9b-e570-45a9-9511-c95b88f2ffd7\") " pod="openshift-marketplace/certified-operators-jqxkw" Jan 26 11:23:15 crc kubenswrapper[4867]: I0126 11:23:15.298486 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cr8jv\" (UniqueName: \"kubernetes.io/projected/f0d09d9b-e570-45a9-9511-c95b88f2ffd7-kube-api-access-cr8jv\") pod \"certified-operators-jqxkw\" (UID: \"f0d09d9b-e570-45a9-9511-c95b88f2ffd7\") " pod="openshift-marketplace/certified-operators-jqxkw" Jan 26 11:23:15 crc kubenswrapper[4867]: I0126 11:23:15.298515 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f0d09d9b-e570-45a9-9511-c95b88f2ffd7-catalog-content\") pod \"certified-operators-jqxkw\" (UID: \"f0d09d9b-e570-45a9-9511-c95b88f2ffd7\") " pod="openshift-marketplace/certified-operators-jqxkw" Jan 26 11:23:15 crc kubenswrapper[4867]: I0126 11:23:15.374434 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-2hms5"] Jan 26 11:23:15 crc kubenswrapper[4867]: I0126 11:23:15.376527 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-2hms5" Jan 26 11:23:15 crc kubenswrapper[4867]: I0126 11:23:15.379630 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Jan 26 11:23:15 crc kubenswrapper[4867]: I0126 11:23:15.384477 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-2hms5"] Jan 26 11:23:15 crc kubenswrapper[4867]: I0126 11:23:15.400090 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f0d09d9b-e570-45a9-9511-c95b88f2ffd7-utilities\") pod \"certified-operators-jqxkw\" (UID: \"f0d09d9b-e570-45a9-9511-c95b88f2ffd7\") " pod="openshift-marketplace/certified-operators-jqxkw" Jan 26 11:23:15 crc kubenswrapper[4867]: I0126 11:23:15.400717 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cr8jv\" (UniqueName: \"kubernetes.io/projected/f0d09d9b-e570-45a9-9511-c95b88f2ffd7-kube-api-access-cr8jv\") pod \"certified-operators-jqxkw\" (UID: \"f0d09d9b-e570-45a9-9511-c95b88f2ffd7\") " pod="openshift-marketplace/certified-operators-jqxkw" Jan 26 11:23:15 crc kubenswrapper[4867]: I0126 11:23:15.400772 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f0d09d9b-e570-45a9-9511-c95b88f2ffd7-catalog-content\") pod \"certified-operators-jqxkw\" (UID: \"f0d09d9b-e570-45a9-9511-c95b88f2ffd7\") " pod="openshift-marketplace/certified-operators-jqxkw" Jan 26 11:23:15 crc kubenswrapper[4867]: I0126 11:23:15.401185 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f0d09d9b-e570-45a9-9511-c95b88f2ffd7-utilities\") pod \"certified-operators-jqxkw\" (UID: \"f0d09d9b-e570-45a9-9511-c95b88f2ffd7\") " 
pod="openshift-marketplace/certified-operators-jqxkw" Jan 26 11:23:15 crc kubenswrapper[4867]: I0126 11:23:15.401523 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f0d09d9b-e570-45a9-9511-c95b88f2ffd7-catalog-content\") pod \"certified-operators-jqxkw\" (UID: \"f0d09d9b-e570-45a9-9511-c95b88f2ffd7\") " pod="openshift-marketplace/certified-operators-jqxkw" Jan 26 11:23:15 crc kubenswrapper[4867]: I0126 11:23:15.432521 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cr8jv\" (UniqueName: \"kubernetes.io/projected/f0d09d9b-e570-45a9-9511-c95b88f2ffd7-kube-api-access-cr8jv\") pod \"certified-operators-jqxkw\" (UID: \"f0d09d9b-e570-45a9-9511-c95b88f2ffd7\") " pod="openshift-marketplace/certified-operators-jqxkw" Jan 26 11:23:15 crc kubenswrapper[4867]: I0126 11:23:15.502212 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/547f161f-485f-4a09-909f-df4f3990046f-catalog-content\") pod \"community-operators-2hms5\" (UID: \"547f161f-485f-4a09-909f-df4f3990046f\") " pod="openshift-marketplace/community-operators-2hms5" Jan 26 11:23:15 crc kubenswrapper[4867]: I0126 11:23:15.502539 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xdpb5\" (UniqueName: \"kubernetes.io/projected/547f161f-485f-4a09-909f-df4f3990046f-kube-api-access-xdpb5\") pod \"community-operators-2hms5\" (UID: \"547f161f-485f-4a09-909f-df4f3990046f\") " pod="openshift-marketplace/community-operators-2hms5" Jan 26 11:23:15 crc kubenswrapper[4867]: I0126 11:23:15.502639 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/547f161f-485f-4a09-909f-df4f3990046f-utilities\") pod \"community-operators-2hms5\" (UID: 
\"547f161f-485f-4a09-909f-df4f3990046f\") " pod="openshift-marketplace/community-operators-2hms5" Jan 26 11:23:15 crc kubenswrapper[4867]: I0126 11:23:15.511084 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-jqxkw" Jan 26 11:23:15 crc kubenswrapper[4867]: I0126 11:23:15.604056 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xdpb5\" (UniqueName: \"kubernetes.io/projected/547f161f-485f-4a09-909f-df4f3990046f-kube-api-access-xdpb5\") pod \"community-operators-2hms5\" (UID: \"547f161f-485f-4a09-909f-df4f3990046f\") " pod="openshift-marketplace/community-operators-2hms5" Jan 26 11:23:15 crc kubenswrapper[4867]: I0126 11:23:15.604121 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/547f161f-485f-4a09-909f-df4f3990046f-utilities\") pod \"community-operators-2hms5\" (UID: \"547f161f-485f-4a09-909f-df4f3990046f\") " pod="openshift-marketplace/community-operators-2hms5" Jan 26 11:23:15 crc kubenswrapper[4867]: I0126 11:23:15.604187 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/547f161f-485f-4a09-909f-df4f3990046f-catalog-content\") pod \"community-operators-2hms5\" (UID: \"547f161f-485f-4a09-909f-df4f3990046f\") " pod="openshift-marketplace/community-operators-2hms5" Jan 26 11:23:15 crc kubenswrapper[4867]: I0126 11:23:15.604819 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/547f161f-485f-4a09-909f-df4f3990046f-catalog-content\") pod \"community-operators-2hms5\" (UID: \"547f161f-485f-4a09-909f-df4f3990046f\") " pod="openshift-marketplace/community-operators-2hms5" Jan 26 11:23:15 crc kubenswrapper[4867]: I0126 11:23:15.604894 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/547f161f-485f-4a09-909f-df4f3990046f-utilities\") pod \"community-operators-2hms5\" (UID: \"547f161f-485f-4a09-909f-df4f3990046f\") " pod="openshift-marketplace/community-operators-2hms5" Jan 26 11:23:15 crc kubenswrapper[4867]: I0126 11:23:15.629105 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xdpb5\" (UniqueName: \"kubernetes.io/projected/547f161f-485f-4a09-909f-df4f3990046f-kube-api-access-xdpb5\") pod \"community-operators-2hms5\" (UID: \"547f161f-485f-4a09-909f-df4f3990046f\") " pod="openshift-marketplace/community-operators-2hms5" Jan 26 11:23:15 crc kubenswrapper[4867]: I0126 11:23:15.698797 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-2hms5" Jan 26 11:23:15 crc kubenswrapper[4867]: I0126 11:23:15.949492 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-jqxkw"] Jan 26 11:23:15 crc kubenswrapper[4867]: W0126 11:23:15.952542 4867 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf0d09d9b_e570_45a9_9511_c95b88f2ffd7.slice/crio-25acc8900c2281a04ef9ef51a637fba6f774418bdb80485de488ca502013bcad WatchSource:0}: Error finding container 25acc8900c2281a04ef9ef51a637fba6f774418bdb80485de488ca502013bcad: Status 404 returned error can't find the container with id 25acc8900c2281a04ef9ef51a637fba6f774418bdb80485de488ca502013bcad Jan 26 11:23:16 crc kubenswrapper[4867]: I0126 11:23:16.169964 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-2hms5"] Jan 26 11:23:16 crc kubenswrapper[4867]: W0126 11:23:16.174411 4867 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod547f161f_485f_4a09_909f_df4f3990046f.slice/crio-ab2023853c6746145869dafac6c2a527b720ec102177b361669357d8ba9f9b79 WatchSource:0}: Error finding container ab2023853c6746145869dafac6c2a527b720ec102177b361669357d8ba9f9b79: Status 404 returned error can't find the container with id ab2023853c6746145869dafac6c2a527b720ec102177b361669357d8ba9f9b79 Jan 26 11:23:16 crc kubenswrapper[4867]: I0126 11:23:16.852881 4867 generic.go:334] "Generic (PLEG): container finished" podID="547f161f-485f-4a09-909f-df4f3990046f" containerID="b99e140d19c007cca44242c11c3ea075ec12d8a948388b585486708bc9235bad" exitCode=0 Jan 26 11:23:16 crc kubenswrapper[4867]: I0126 11:23:16.853154 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2hms5" event={"ID":"547f161f-485f-4a09-909f-df4f3990046f","Type":"ContainerDied","Data":"b99e140d19c007cca44242c11c3ea075ec12d8a948388b585486708bc9235bad"} Jan 26 11:23:16 crc kubenswrapper[4867]: I0126 11:23:16.853506 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2hms5" event={"ID":"547f161f-485f-4a09-909f-df4f3990046f","Type":"ContainerStarted","Data":"ab2023853c6746145869dafac6c2a527b720ec102177b361669357d8ba9f9b79"} Jan 26 11:23:16 crc kubenswrapper[4867]: I0126 11:23:16.855995 4867 generic.go:334] "Generic (PLEG): container finished" podID="f0d09d9b-e570-45a9-9511-c95b88f2ffd7" containerID="59509046a83f5637db5a2efd801dcba3f049bd13465909869a23d4093fff9dc7" exitCode=0 Jan 26 11:23:16 crc kubenswrapper[4867]: I0126 11:23:16.856060 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jqxkw" event={"ID":"f0d09d9b-e570-45a9-9511-c95b88f2ffd7","Type":"ContainerDied","Data":"59509046a83f5637db5a2efd801dcba3f049bd13465909869a23d4093fff9dc7"} Jan 26 11:23:16 crc kubenswrapper[4867]: I0126 11:23:16.856097 4867 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openshift-marketplace/certified-operators-jqxkw" event={"ID":"f0d09d9b-e570-45a9-9511-c95b88f2ffd7","Type":"ContainerStarted","Data":"25acc8900c2281a04ef9ef51a637fba6f774418bdb80485de488ca502013bcad"} Jan 26 11:23:17 crc kubenswrapper[4867]: I0126 11:23:17.766160 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-m24tl"] Jan 26 11:23:17 crc kubenswrapper[4867]: I0126 11:23:17.767786 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-m24tl" Jan 26 11:23:17 crc kubenswrapper[4867]: I0126 11:23:17.769685 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Jan 26 11:23:17 crc kubenswrapper[4867]: I0126 11:23:17.787068 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-m24tl"] Jan 26 11:23:17 crc kubenswrapper[4867]: I0126 11:23:17.865091 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2hms5" event={"ID":"547f161f-485f-4a09-909f-df4f3990046f","Type":"ContainerStarted","Data":"94696caec21bb9294704da14c25ad57450d717b195be28588d94645551a33816"} Jan 26 11:23:17 crc kubenswrapper[4867]: I0126 11:23:17.867477 4867 generic.go:334] "Generic (PLEG): container finished" podID="f0d09d9b-e570-45a9-9511-c95b88f2ffd7" containerID="c842c2f5e3f17b5310e348888f3ea7c3c0223ec09b73686f388bcd4908a92052" exitCode=0 Jan 26 11:23:17 crc kubenswrapper[4867]: I0126 11:23:17.867538 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jqxkw" event={"ID":"f0d09d9b-e570-45a9-9511-c95b88f2ffd7","Type":"ContainerDied","Data":"c842c2f5e3f17b5310e348888f3ea7c3c0223ec09b73686f388bcd4908a92052"} Jan 26 11:23:17 crc kubenswrapper[4867]: I0126 11:23:17.945780 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f59c5f80-bfa1-445a-a552-ef0908b15efd-catalog-content\") pod \"redhat-marketplace-m24tl\" (UID: \"f59c5f80-bfa1-445a-a552-ef0908b15efd\") " pod="openshift-marketplace/redhat-marketplace-m24tl" Jan 26 11:23:17 crc kubenswrapper[4867]: I0126 11:23:17.946686 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f59c5f80-bfa1-445a-a552-ef0908b15efd-utilities\") pod \"redhat-marketplace-m24tl\" (UID: \"f59c5f80-bfa1-445a-a552-ef0908b15efd\") " pod="openshift-marketplace/redhat-marketplace-m24tl" Jan 26 11:23:17 crc kubenswrapper[4867]: I0126 11:23:17.946784 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m7xqr\" (UniqueName: \"kubernetes.io/projected/f59c5f80-bfa1-445a-a552-ef0908b15efd-kube-api-access-m7xqr\") pod \"redhat-marketplace-m24tl\" (UID: \"f59c5f80-bfa1-445a-a552-ef0908b15efd\") " pod="openshift-marketplace/redhat-marketplace-m24tl" Jan 26 11:23:17 crc kubenswrapper[4867]: I0126 11:23:17.961163 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-p6pt2"] Jan 26 11:23:17 crc kubenswrapper[4867]: I0126 11:23:17.962511 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-p6pt2" Jan 26 11:23:17 crc kubenswrapper[4867]: I0126 11:23:17.965449 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Jan 26 11:23:17 crc kubenswrapper[4867]: I0126 11:23:17.974845 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-p6pt2"] Jan 26 11:23:18 crc kubenswrapper[4867]: I0126 11:23:18.048459 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f59c5f80-bfa1-445a-a552-ef0908b15efd-catalog-content\") pod \"redhat-marketplace-m24tl\" (UID: \"f59c5f80-bfa1-445a-a552-ef0908b15efd\") " pod="openshift-marketplace/redhat-marketplace-m24tl" Jan 26 11:23:18 crc kubenswrapper[4867]: I0126 11:23:18.048535 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f59c5f80-bfa1-445a-a552-ef0908b15efd-utilities\") pod \"redhat-marketplace-m24tl\" (UID: \"f59c5f80-bfa1-445a-a552-ef0908b15efd\") " pod="openshift-marketplace/redhat-marketplace-m24tl" Jan 26 11:23:18 crc kubenswrapper[4867]: I0126 11:23:18.048561 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m7xqr\" (UniqueName: \"kubernetes.io/projected/f59c5f80-bfa1-445a-a552-ef0908b15efd-kube-api-access-m7xqr\") pod \"redhat-marketplace-m24tl\" (UID: \"f59c5f80-bfa1-445a-a552-ef0908b15efd\") " pod="openshift-marketplace/redhat-marketplace-m24tl" Jan 26 11:23:18 crc kubenswrapper[4867]: I0126 11:23:18.049261 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f59c5f80-bfa1-445a-a552-ef0908b15efd-catalog-content\") pod \"redhat-marketplace-m24tl\" (UID: \"f59c5f80-bfa1-445a-a552-ef0908b15efd\") " 
pod="openshift-marketplace/redhat-marketplace-m24tl" Jan 26 11:23:18 crc kubenswrapper[4867]: I0126 11:23:18.049371 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f59c5f80-bfa1-445a-a552-ef0908b15efd-utilities\") pod \"redhat-marketplace-m24tl\" (UID: \"f59c5f80-bfa1-445a-a552-ef0908b15efd\") " pod="openshift-marketplace/redhat-marketplace-m24tl" Jan 26 11:23:18 crc kubenswrapper[4867]: I0126 11:23:18.072401 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m7xqr\" (UniqueName: \"kubernetes.io/projected/f59c5f80-bfa1-445a-a552-ef0908b15efd-kube-api-access-m7xqr\") pod \"redhat-marketplace-m24tl\" (UID: \"f59c5f80-bfa1-445a-a552-ef0908b15efd\") " pod="openshift-marketplace/redhat-marketplace-m24tl" Jan 26 11:23:18 crc kubenswrapper[4867]: I0126 11:23:18.082876 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-m24tl" Jan 26 11:23:18 crc kubenswrapper[4867]: I0126 11:23:18.150338 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/15505f79-e3ef-4aa2-8f0d-6d6c4b097fc7-catalog-content\") pod \"redhat-operators-p6pt2\" (UID: \"15505f79-e3ef-4aa2-8f0d-6d6c4b097fc7\") " pod="openshift-marketplace/redhat-operators-p6pt2" Jan 26 11:23:18 crc kubenswrapper[4867]: I0126 11:23:18.150407 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/15505f79-e3ef-4aa2-8f0d-6d6c4b097fc7-utilities\") pod \"redhat-operators-p6pt2\" (UID: \"15505f79-e3ef-4aa2-8f0d-6d6c4b097fc7\") " pod="openshift-marketplace/redhat-operators-p6pt2" Jan 26 11:23:18 crc kubenswrapper[4867]: I0126 11:23:18.150488 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-mlhlx\" (UniqueName: \"kubernetes.io/projected/15505f79-e3ef-4aa2-8f0d-6d6c4b097fc7-kube-api-access-mlhlx\") pod \"redhat-operators-p6pt2\" (UID: \"15505f79-e3ef-4aa2-8f0d-6d6c4b097fc7\") " pod="openshift-marketplace/redhat-operators-p6pt2" Jan 26 11:23:18 crc kubenswrapper[4867]: I0126 11:23:18.252128 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/15505f79-e3ef-4aa2-8f0d-6d6c4b097fc7-catalog-content\") pod \"redhat-operators-p6pt2\" (UID: \"15505f79-e3ef-4aa2-8f0d-6d6c4b097fc7\") " pod="openshift-marketplace/redhat-operators-p6pt2" Jan 26 11:23:18 crc kubenswrapper[4867]: I0126 11:23:18.252188 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/15505f79-e3ef-4aa2-8f0d-6d6c4b097fc7-utilities\") pod \"redhat-operators-p6pt2\" (UID: \"15505f79-e3ef-4aa2-8f0d-6d6c4b097fc7\") " pod="openshift-marketplace/redhat-operators-p6pt2" Jan 26 11:23:18 crc kubenswrapper[4867]: I0126 11:23:18.252266 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mlhlx\" (UniqueName: \"kubernetes.io/projected/15505f79-e3ef-4aa2-8f0d-6d6c4b097fc7-kube-api-access-mlhlx\") pod \"redhat-operators-p6pt2\" (UID: \"15505f79-e3ef-4aa2-8f0d-6d6c4b097fc7\") " pod="openshift-marketplace/redhat-operators-p6pt2" Jan 26 11:23:18 crc kubenswrapper[4867]: I0126 11:23:18.253379 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/15505f79-e3ef-4aa2-8f0d-6d6c4b097fc7-catalog-content\") pod \"redhat-operators-p6pt2\" (UID: \"15505f79-e3ef-4aa2-8f0d-6d6c4b097fc7\") " pod="openshift-marketplace/redhat-operators-p6pt2" Jan 26 11:23:18 crc kubenswrapper[4867]: I0126 11:23:18.253486 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/15505f79-e3ef-4aa2-8f0d-6d6c4b097fc7-utilities\") pod \"redhat-operators-p6pt2\" (UID: \"15505f79-e3ef-4aa2-8f0d-6d6c4b097fc7\") " pod="openshift-marketplace/redhat-operators-p6pt2" Jan 26 11:23:18 crc kubenswrapper[4867]: I0126 11:23:18.278108 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mlhlx\" (UniqueName: \"kubernetes.io/projected/15505f79-e3ef-4aa2-8f0d-6d6c4b097fc7-kube-api-access-mlhlx\") pod \"redhat-operators-p6pt2\" (UID: \"15505f79-e3ef-4aa2-8f0d-6d6c4b097fc7\") " pod="openshift-marketplace/redhat-operators-p6pt2" Jan 26 11:23:18 crc kubenswrapper[4867]: I0126 11:23:18.309061 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-p6pt2" Jan 26 11:23:18 crc kubenswrapper[4867]: I0126 11:23:18.482622 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-m24tl"] Jan 26 11:23:18 crc kubenswrapper[4867]: W0126 11:23:18.489810 4867 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf59c5f80_bfa1_445a_a552_ef0908b15efd.slice/crio-0f579593d455e3060f37d6edb8edece4a0d9003d9c1238e6f5943675c9ee253d WatchSource:0}: Error finding container 0f579593d455e3060f37d6edb8edece4a0d9003d9c1238e6f5943675c9ee253d: Status 404 returned error can't find the container with id 0f579593d455e3060f37d6edb8edece4a0d9003d9c1238e6f5943675c9ee253d Jan 26 11:23:18 crc kubenswrapper[4867]: I0126 11:23:18.743063 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-p6pt2"] Jan 26 11:23:18 crc kubenswrapper[4867]: W0126 11:23:18.758422 4867 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod15505f79_e3ef_4aa2_8f0d_6d6c4b097fc7.slice/crio-187ab09487aff3145821227d5d17ff42a149c1f0acf0c3b3a7dc2c16efd58aa4 WatchSource:0}: 
Error finding container 187ab09487aff3145821227d5d17ff42a149c1f0acf0c3b3a7dc2c16efd58aa4: Status 404 returned error can't find the container with id 187ab09487aff3145821227d5d17ff42a149c1f0acf0c3b3a7dc2c16efd58aa4 Jan 26 11:23:18 crc kubenswrapper[4867]: I0126 11:23:18.872413 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-p6pt2" event={"ID":"15505f79-e3ef-4aa2-8f0d-6d6c4b097fc7","Type":"ContainerStarted","Data":"187ab09487aff3145821227d5d17ff42a149c1f0acf0c3b3a7dc2c16efd58aa4"} Jan 26 11:23:18 crc kubenswrapper[4867]: I0126 11:23:18.873432 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-m24tl" event={"ID":"f59c5f80-bfa1-445a-a552-ef0908b15efd","Type":"ContainerStarted","Data":"0f579593d455e3060f37d6edb8edece4a0d9003d9c1238e6f5943675c9ee253d"} Jan 26 11:23:18 crc kubenswrapper[4867]: I0126 11:23:18.875123 4867 generic.go:334] "Generic (PLEG): container finished" podID="547f161f-485f-4a09-909f-df4f3990046f" containerID="94696caec21bb9294704da14c25ad57450d717b195be28588d94645551a33816" exitCode=0 Jan 26 11:23:18 crc kubenswrapper[4867]: I0126 11:23:18.875170 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2hms5" event={"ID":"547f161f-485f-4a09-909f-df4f3990046f","Type":"ContainerDied","Data":"94696caec21bb9294704da14c25ad57450d717b195be28588d94645551a33816"} Jan 26 11:23:19 crc kubenswrapper[4867]: I0126 11:23:19.883188 4867 generic.go:334] "Generic (PLEG): container finished" podID="15505f79-e3ef-4aa2-8f0d-6d6c4b097fc7" containerID="2c9c39fe51333692d436aa30b79e8859f1541b89c1f3426f3d998d781452be43" exitCode=0 Jan 26 11:23:19 crc kubenswrapper[4867]: I0126 11:23:19.883271 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-p6pt2" 
event={"ID":"15505f79-e3ef-4aa2-8f0d-6d6c4b097fc7","Type":"ContainerDied","Data":"2c9c39fe51333692d436aa30b79e8859f1541b89c1f3426f3d998d781452be43"} Jan 26 11:23:19 crc kubenswrapper[4867]: I0126 11:23:19.885907 4867 generic.go:334] "Generic (PLEG): container finished" podID="f59c5f80-bfa1-445a-a552-ef0908b15efd" containerID="1e6eb31d53f1d1b1287e002a21dd67e42db48e3e34a4343bc33c741af5917805" exitCode=0 Jan 26 11:23:19 crc kubenswrapper[4867]: I0126 11:23:19.886008 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-m24tl" event={"ID":"f59c5f80-bfa1-445a-a552-ef0908b15efd","Type":"ContainerDied","Data":"1e6eb31d53f1d1b1287e002a21dd67e42db48e3e34a4343bc33c741af5917805"} Jan 26 11:23:19 crc kubenswrapper[4867]: I0126 11:23:19.888746 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2hms5" event={"ID":"547f161f-485f-4a09-909f-df4f3990046f","Type":"ContainerStarted","Data":"f0c3cabb4be516e77baad1b10fa678904451402100de9f63717016915daeb1c9"} Jan 26 11:23:19 crc kubenswrapper[4867]: I0126 11:23:19.890812 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jqxkw" event={"ID":"f0d09d9b-e570-45a9-9511-c95b88f2ffd7","Type":"ContainerStarted","Data":"eda7cde9e5d5f2321d5ff3a6dff81ab932da00fdb1ed14a118e0f36e16b4b157"} Jan 26 11:23:19 crc kubenswrapper[4867]: I0126 11:23:19.944373 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-2hms5" podStartSLOduration=2.361792931 podStartE2EDuration="4.94435299s" podCreationTimestamp="2026-01-26 11:23:15 +0000 UTC" firstStartedPulling="2026-01-26 11:23:16.855484065 +0000 UTC m=+346.554058975" lastFinishedPulling="2026-01-26 11:23:19.438044124 +0000 UTC m=+349.136619034" observedRunningTime="2026-01-26 11:23:19.939629318 +0000 UTC m=+349.638204248" watchObservedRunningTime="2026-01-26 11:23:19.94435299 +0000 UTC 
m=+349.642927900" Jan 26 11:23:19 crc kubenswrapper[4867]: I0126 11:23:19.960813 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-jqxkw" podStartSLOduration=2.5291095759999997 podStartE2EDuration="4.960790869s" podCreationTimestamp="2026-01-26 11:23:15 +0000 UTC" firstStartedPulling="2026-01-26 11:23:16.857934424 +0000 UTC m=+346.556509334" lastFinishedPulling="2026-01-26 11:23:19.289615707 +0000 UTC m=+348.988190627" observedRunningTime="2026-01-26 11:23:19.958491565 +0000 UTC m=+349.657066475" watchObservedRunningTime="2026-01-26 11:23:19.960790869 +0000 UTC m=+349.659365779" Jan 26 11:23:20 crc kubenswrapper[4867]: I0126 11:23:20.068596 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-authentication/oauth-openshift-558db77b4-n9jb5" podUID="a91b5a18-2743-473f-8116-5fb1e348d05c" containerName="oauth-openshift" containerID="cri-o://c7845838c24acaade17cb50361911deb057f972e499e1b98631dd4b1b197f346" gracePeriod=15 Jan 26 11:23:20 crc kubenswrapper[4867]: I0126 11:23:20.898166 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-p6pt2" event={"ID":"15505f79-e3ef-4aa2-8f0d-6d6c4b097fc7","Type":"ContainerStarted","Data":"9c998fc8fd1d06e2fa75f7c6a87fc83e23ab4fa0331710416fd563f279c782e3"} Jan 26 11:23:20 crc kubenswrapper[4867]: I0126 11:23:20.900653 4867 generic.go:334] "Generic (PLEG): container finished" podID="f59c5f80-bfa1-445a-a552-ef0908b15efd" containerID="cd73ce85d73b95e733bd449c04339fd783b0a7a926acb2418a641e4563075232" exitCode=0 Jan 26 11:23:20 crc kubenswrapper[4867]: I0126 11:23:20.900721 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-m24tl" event={"ID":"f59c5f80-bfa1-445a-a552-ef0908b15efd","Type":"ContainerDied","Data":"cd73ce85d73b95e733bd449c04339fd783b0a7a926acb2418a641e4563075232"} Jan 26 11:23:20 crc kubenswrapper[4867]: I0126 11:23:20.902321 
4867 generic.go:334] "Generic (PLEG): container finished" podID="a91b5a18-2743-473f-8116-5fb1e348d05c" containerID="c7845838c24acaade17cb50361911deb057f972e499e1b98631dd4b1b197f346" exitCode=0 Jan 26 11:23:20 crc kubenswrapper[4867]: I0126 11:23:20.902536 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-n9jb5" event={"ID":"a91b5a18-2743-473f-8116-5fb1e348d05c","Type":"ContainerDied","Data":"c7845838c24acaade17cb50361911deb057f972e499e1b98631dd4b1b197f346"} Jan 26 11:23:21 crc kubenswrapper[4867]: I0126 11:23:21.129173 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-n9jb5" Jan 26 11:23:21 crc kubenswrapper[4867]: I0126 11:23:21.180982 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-56c495df99-f46mn"] Jan 26 11:23:21 crc kubenswrapper[4867]: E0126 11:23:21.181393 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a91b5a18-2743-473f-8116-5fb1e348d05c" containerName="oauth-openshift" Jan 26 11:23:21 crc kubenswrapper[4867]: I0126 11:23:21.181424 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="a91b5a18-2743-473f-8116-5fb1e348d05c" containerName="oauth-openshift" Jan 26 11:23:21 crc kubenswrapper[4867]: I0126 11:23:21.181531 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="a91b5a18-2743-473f-8116-5fb1e348d05c" containerName="oauth-openshift" Jan 26 11:23:21 crc kubenswrapper[4867]: I0126 11:23:21.181953 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-56c495df99-f46mn" Jan 26 11:23:21 crc kubenswrapper[4867]: I0126 11:23:21.215196 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-56c495df99-f46mn"] Jan 26 11:23:21 crc kubenswrapper[4867]: I0126 11:23:21.294595 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/a91b5a18-2743-473f-8116-5fb1e348d05c-v4-0-config-user-template-provider-selection\") pod \"a91b5a18-2743-473f-8116-5fb1e348d05c\" (UID: \"a91b5a18-2743-473f-8116-5fb1e348d05c\") " Jan 26 11:23:21 crc kubenswrapper[4867]: I0126 11:23:21.294683 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/a91b5a18-2743-473f-8116-5fb1e348d05c-v4-0-config-system-ocp-branding-template\") pod \"a91b5a18-2743-473f-8116-5fb1e348d05c\" (UID: \"a91b5a18-2743-473f-8116-5fb1e348d05c\") " Jan 26 11:23:21 crc kubenswrapper[4867]: I0126 11:23:21.294739 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/a91b5a18-2743-473f-8116-5fb1e348d05c-audit-policies\") pod \"a91b5a18-2743-473f-8116-5fb1e348d05c\" (UID: \"a91b5a18-2743-473f-8116-5fb1e348d05c\") " Jan 26 11:23:21 crc kubenswrapper[4867]: I0126 11:23:21.294779 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/a91b5a18-2743-473f-8116-5fb1e348d05c-v4-0-config-user-idp-0-file-data\") pod \"a91b5a18-2743-473f-8116-5fb1e348d05c\" (UID: \"a91b5a18-2743-473f-8116-5fb1e348d05c\") " Jan 26 11:23:21 crc kubenswrapper[4867]: I0126 11:23:21.294840 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/a91b5a18-2743-473f-8116-5fb1e348d05c-v4-0-config-system-cliconfig\") pod \"a91b5a18-2743-473f-8116-5fb1e348d05c\" (UID: \"a91b5a18-2743-473f-8116-5fb1e348d05c\") " Jan 26 11:23:21 crc kubenswrapper[4867]: I0126 11:23:21.294898 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fjlgh\" (UniqueName: \"kubernetes.io/projected/a91b5a18-2743-473f-8116-5fb1e348d05c-kube-api-access-fjlgh\") pod \"a91b5a18-2743-473f-8116-5fb1e348d05c\" (UID: \"a91b5a18-2743-473f-8116-5fb1e348d05c\") " Jan 26 11:23:21 crc kubenswrapper[4867]: I0126 11:23:21.294922 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/a91b5a18-2743-473f-8116-5fb1e348d05c-v4-0-config-system-session\") pod \"a91b5a18-2743-473f-8116-5fb1e348d05c\" (UID: \"a91b5a18-2743-473f-8116-5fb1e348d05c\") " Jan 26 11:23:21 crc kubenswrapper[4867]: I0126 11:23:21.294940 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/a91b5a18-2743-473f-8116-5fb1e348d05c-v4-0-config-system-service-ca\") pod \"a91b5a18-2743-473f-8116-5fb1e348d05c\" (UID: \"a91b5a18-2743-473f-8116-5fb1e348d05c\") " Jan 26 11:23:21 crc kubenswrapper[4867]: I0126 11:23:21.294968 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/a91b5a18-2743-473f-8116-5fb1e348d05c-v4-0-config-system-router-certs\") pod \"a91b5a18-2743-473f-8116-5fb1e348d05c\" (UID: \"a91b5a18-2743-473f-8116-5fb1e348d05c\") " Jan 26 11:23:21 crc kubenswrapper[4867]: I0126 11:23:21.295026 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: 
\"kubernetes.io/secret/a91b5a18-2743-473f-8116-5fb1e348d05c-v4-0-config-system-serving-cert\") pod \"a91b5a18-2743-473f-8116-5fb1e348d05c\" (UID: \"a91b5a18-2743-473f-8116-5fb1e348d05c\") " Jan 26 11:23:21 crc kubenswrapper[4867]: I0126 11:23:21.295046 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/a91b5a18-2743-473f-8116-5fb1e348d05c-v4-0-config-user-template-error\") pod \"a91b5a18-2743-473f-8116-5fb1e348d05c\" (UID: \"a91b5a18-2743-473f-8116-5fb1e348d05c\") " Jan 26 11:23:21 crc kubenswrapper[4867]: I0126 11:23:21.295062 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/a91b5a18-2743-473f-8116-5fb1e348d05c-audit-dir\") pod \"a91b5a18-2743-473f-8116-5fb1e348d05c\" (UID: \"a91b5a18-2743-473f-8116-5fb1e348d05c\") " Jan 26 11:23:21 crc kubenswrapper[4867]: I0126 11:23:21.295086 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/a91b5a18-2743-473f-8116-5fb1e348d05c-v4-0-config-user-template-login\") pod \"a91b5a18-2743-473f-8116-5fb1e348d05c\" (UID: \"a91b5a18-2743-473f-8116-5fb1e348d05c\") " Jan 26 11:23:21 crc kubenswrapper[4867]: I0126 11:23:21.295117 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a91b5a18-2743-473f-8116-5fb1e348d05c-v4-0-config-system-trusted-ca-bundle\") pod \"a91b5a18-2743-473f-8116-5fb1e348d05c\" (UID: \"a91b5a18-2743-473f-8116-5fb1e348d05c\") " Jan 26 11:23:21 crc kubenswrapper[4867]: I0126 11:23:21.295269 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/0c16064f-eeb1-4c38-9ee7-ce204745d5f4-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-56c495df99-f46mn\" (UID: \"0c16064f-eeb1-4c38-9ee7-ce204745d5f4\") " pod="openshift-authentication/oauth-openshift-56c495df99-f46mn" Jan 26 11:23:21 crc kubenswrapper[4867]: I0126 11:23:21.295297 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/0c16064f-eeb1-4c38-9ee7-ce204745d5f4-v4-0-config-system-cliconfig\") pod \"oauth-openshift-56c495df99-f46mn\" (UID: \"0c16064f-eeb1-4c38-9ee7-ce204745d5f4\") " pod="openshift-authentication/oauth-openshift-56c495df99-f46mn" Jan 26 11:23:21 crc kubenswrapper[4867]: I0126 11:23:21.295312 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/0c16064f-eeb1-4c38-9ee7-ce204745d5f4-v4-0-config-system-service-ca\") pod \"oauth-openshift-56c495df99-f46mn\" (UID: \"0c16064f-eeb1-4c38-9ee7-ce204745d5f4\") " pod="openshift-authentication/oauth-openshift-56c495df99-f46mn" Jan 26 11:23:21 crc kubenswrapper[4867]: I0126 11:23:21.295333 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/0c16064f-eeb1-4c38-9ee7-ce204745d5f4-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-56c495df99-f46mn\" (UID: \"0c16064f-eeb1-4c38-9ee7-ce204745d5f4\") " pod="openshift-authentication/oauth-openshift-56c495df99-f46mn" Jan 26 11:23:21 crc kubenswrapper[4867]: I0126 11:23:21.295360 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/0c16064f-eeb1-4c38-9ee7-ce204745d5f4-v4-0-config-user-idp-0-file-data\") pod 
\"oauth-openshift-56c495df99-f46mn\" (UID: \"0c16064f-eeb1-4c38-9ee7-ce204745d5f4\") " pod="openshift-authentication/oauth-openshift-56c495df99-f46mn" Jan 26 11:23:21 crc kubenswrapper[4867]: I0126 11:23:21.295387 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/0c16064f-eeb1-4c38-9ee7-ce204745d5f4-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-56c495df99-f46mn\" (UID: \"0c16064f-eeb1-4c38-9ee7-ce204745d5f4\") " pod="openshift-authentication/oauth-openshift-56c495df99-f46mn" Jan 26 11:23:21 crc kubenswrapper[4867]: I0126 11:23:21.295411 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/0c16064f-eeb1-4c38-9ee7-ce204745d5f4-v4-0-config-system-router-certs\") pod \"oauth-openshift-56c495df99-f46mn\" (UID: \"0c16064f-eeb1-4c38-9ee7-ce204745d5f4\") " pod="openshift-authentication/oauth-openshift-56c495df99-f46mn" Jan 26 11:23:21 crc kubenswrapper[4867]: I0126 11:23:21.295435 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/0c16064f-eeb1-4c38-9ee7-ce204745d5f4-v4-0-config-system-session\") pod \"oauth-openshift-56c495df99-f46mn\" (UID: \"0c16064f-eeb1-4c38-9ee7-ce204745d5f4\") " pod="openshift-authentication/oauth-openshift-56c495df99-f46mn" Jan 26 11:23:21 crc kubenswrapper[4867]: I0126 11:23:21.295461 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/0c16064f-eeb1-4c38-9ee7-ce204745d5f4-v4-0-config-user-template-login\") pod \"oauth-openshift-56c495df99-f46mn\" (UID: \"0c16064f-eeb1-4c38-9ee7-ce204745d5f4\") " 
pod="openshift-authentication/oauth-openshift-56c495df99-f46mn" Jan 26 11:23:21 crc kubenswrapper[4867]: I0126 11:23:21.295479 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r2nnj\" (UniqueName: \"kubernetes.io/projected/0c16064f-eeb1-4c38-9ee7-ce204745d5f4-kube-api-access-r2nnj\") pod \"oauth-openshift-56c495df99-f46mn\" (UID: \"0c16064f-eeb1-4c38-9ee7-ce204745d5f4\") " pod="openshift-authentication/oauth-openshift-56c495df99-f46mn" Jan 26 11:23:21 crc kubenswrapper[4867]: I0126 11:23:21.295502 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/0c16064f-eeb1-4c38-9ee7-ce204745d5f4-v4-0-config-system-serving-cert\") pod \"oauth-openshift-56c495df99-f46mn\" (UID: \"0c16064f-eeb1-4c38-9ee7-ce204745d5f4\") " pod="openshift-authentication/oauth-openshift-56c495df99-f46mn" Jan 26 11:23:21 crc kubenswrapper[4867]: I0126 11:23:21.295528 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/0c16064f-eeb1-4c38-9ee7-ce204745d5f4-audit-policies\") pod \"oauth-openshift-56c495df99-f46mn\" (UID: \"0c16064f-eeb1-4c38-9ee7-ce204745d5f4\") " pod="openshift-authentication/oauth-openshift-56c495df99-f46mn" Jan 26 11:23:21 crc kubenswrapper[4867]: I0126 11:23:21.295550 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/0c16064f-eeb1-4c38-9ee7-ce204745d5f4-audit-dir\") pod \"oauth-openshift-56c495df99-f46mn\" (UID: \"0c16064f-eeb1-4c38-9ee7-ce204745d5f4\") " pod="openshift-authentication/oauth-openshift-56c495df99-f46mn" Jan 26 11:23:21 crc kubenswrapper[4867]: I0126 11:23:21.295585 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/0c16064f-eeb1-4c38-9ee7-ce204745d5f4-v4-0-config-user-template-error\") pod \"oauth-openshift-56c495df99-f46mn\" (UID: \"0c16064f-eeb1-4c38-9ee7-ce204745d5f4\") " pod="openshift-authentication/oauth-openshift-56c495df99-f46mn" Jan 26 11:23:21 crc kubenswrapper[4867]: I0126 11:23:21.295708 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a91b5a18-2743-473f-8116-5fb1e348d05c-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "a91b5a18-2743-473f-8116-5fb1e348d05c" (UID: "a91b5a18-2743-473f-8116-5fb1e348d05c"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 11:23:21 crc kubenswrapper[4867]: I0126 11:23:21.296041 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a91b5a18-2743-473f-8116-5fb1e348d05c-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "a91b5a18-2743-473f-8116-5fb1e348d05c" (UID: "a91b5a18-2743-473f-8116-5fb1e348d05c"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 11:23:21 crc kubenswrapper[4867]: I0126 11:23:21.296683 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a91b5a18-2743-473f-8116-5fb1e348d05c-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "a91b5a18-2743-473f-8116-5fb1e348d05c" (UID: "a91b5a18-2743-473f-8116-5fb1e348d05c"). InnerVolumeSpecName "audit-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 11:23:21 crc kubenswrapper[4867]: I0126 11:23:21.297322 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a91b5a18-2743-473f-8116-5fb1e348d05c-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "a91b5a18-2743-473f-8116-5fb1e348d05c" (UID: "a91b5a18-2743-473f-8116-5fb1e348d05c"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 11:23:21 crc kubenswrapper[4867]: I0126 11:23:21.297337 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a91b5a18-2743-473f-8116-5fb1e348d05c-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "a91b5a18-2743-473f-8116-5fb1e348d05c" (UID: "a91b5a18-2743-473f-8116-5fb1e348d05c"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 11:23:21 crc kubenswrapper[4867]: I0126 11:23:21.301854 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a91b5a18-2743-473f-8116-5fb1e348d05c-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "a91b5a18-2743-473f-8116-5fb1e348d05c" (UID: "a91b5a18-2743-473f-8116-5fb1e348d05c"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 11:23:21 crc kubenswrapper[4867]: I0126 11:23:21.302239 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a91b5a18-2743-473f-8116-5fb1e348d05c-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "a91b5a18-2743-473f-8116-5fb1e348d05c" (UID: "a91b5a18-2743-473f-8116-5fb1e348d05c"). InnerVolumeSpecName "v4-0-config-system-router-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 11:23:21 crc kubenswrapper[4867]: I0126 11:23:21.308014 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a91b5a18-2743-473f-8116-5fb1e348d05c-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "a91b5a18-2743-473f-8116-5fb1e348d05c" (UID: "a91b5a18-2743-473f-8116-5fb1e348d05c"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 11:23:21 crc kubenswrapper[4867]: I0126 11:23:21.311471 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a91b5a18-2743-473f-8116-5fb1e348d05c-kube-api-access-fjlgh" (OuterVolumeSpecName: "kube-api-access-fjlgh") pod "a91b5a18-2743-473f-8116-5fb1e348d05c" (UID: "a91b5a18-2743-473f-8116-5fb1e348d05c"). InnerVolumeSpecName "kube-api-access-fjlgh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 11:23:21 crc kubenswrapper[4867]: I0126 11:23:21.312396 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a91b5a18-2743-473f-8116-5fb1e348d05c-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "a91b5a18-2743-473f-8116-5fb1e348d05c" (UID: "a91b5a18-2743-473f-8116-5fb1e348d05c"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 11:23:21 crc kubenswrapper[4867]: I0126 11:23:21.312735 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a91b5a18-2743-473f-8116-5fb1e348d05c-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "a91b5a18-2743-473f-8116-5fb1e348d05c" (UID: "a91b5a18-2743-473f-8116-5fb1e348d05c"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 11:23:21 crc kubenswrapper[4867]: I0126 11:23:21.313019 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a91b5a18-2743-473f-8116-5fb1e348d05c-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "a91b5a18-2743-473f-8116-5fb1e348d05c" (UID: "a91b5a18-2743-473f-8116-5fb1e348d05c"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 11:23:21 crc kubenswrapper[4867]: I0126 11:23:21.313292 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a91b5a18-2743-473f-8116-5fb1e348d05c-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "a91b5a18-2743-473f-8116-5fb1e348d05c" (UID: "a91b5a18-2743-473f-8116-5fb1e348d05c"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 11:23:21 crc kubenswrapper[4867]: I0126 11:23:21.313448 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a91b5a18-2743-473f-8116-5fb1e348d05c-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "a91b5a18-2743-473f-8116-5fb1e348d05c" (UID: "a91b5a18-2743-473f-8116-5fb1e348d05c"). InnerVolumeSpecName "v4-0-config-user-template-error". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 11:23:21 crc kubenswrapper[4867]: I0126 11:23:21.396526 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/0c16064f-eeb1-4c38-9ee7-ce204745d5f4-v4-0-config-system-router-certs\") pod \"oauth-openshift-56c495df99-f46mn\" (UID: \"0c16064f-eeb1-4c38-9ee7-ce204745d5f4\") " pod="openshift-authentication/oauth-openshift-56c495df99-f46mn" Jan 26 11:23:21 crc kubenswrapper[4867]: I0126 11:23:21.396583 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/0c16064f-eeb1-4c38-9ee7-ce204745d5f4-v4-0-config-system-session\") pod \"oauth-openshift-56c495df99-f46mn\" (UID: \"0c16064f-eeb1-4c38-9ee7-ce204745d5f4\") " pod="openshift-authentication/oauth-openshift-56c495df99-f46mn" Jan 26 11:23:21 crc kubenswrapper[4867]: I0126 11:23:21.396609 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/0c16064f-eeb1-4c38-9ee7-ce204745d5f4-v4-0-config-user-template-login\") pod \"oauth-openshift-56c495df99-f46mn\" (UID: \"0c16064f-eeb1-4c38-9ee7-ce204745d5f4\") " pod="openshift-authentication/oauth-openshift-56c495df99-f46mn" Jan 26 11:23:21 crc kubenswrapper[4867]: I0126 11:23:21.396627 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r2nnj\" (UniqueName: \"kubernetes.io/projected/0c16064f-eeb1-4c38-9ee7-ce204745d5f4-kube-api-access-r2nnj\") pod \"oauth-openshift-56c495df99-f46mn\" (UID: \"0c16064f-eeb1-4c38-9ee7-ce204745d5f4\") " pod="openshift-authentication/oauth-openshift-56c495df99-f46mn" Jan 26 11:23:21 crc kubenswrapper[4867]: I0126 11:23:21.396652 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: 
\"kubernetes.io/secret/0c16064f-eeb1-4c38-9ee7-ce204745d5f4-v4-0-config-system-serving-cert\") pod \"oauth-openshift-56c495df99-f46mn\" (UID: \"0c16064f-eeb1-4c38-9ee7-ce204745d5f4\") " pod="openshift-authentication/oauth-openshift-56c495df99-f46mn" Jan 26 11:23:21 crc kubenswrapper[4867]: I0126 11:23:21.396679 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/0c16064f-eeb1-4c38-9ee7-ce204745d5f4-audit-policies\") pod \"oauth-openshift-56c495df99-f46mn\" (UID: \"0c16064f-eeb1-4c38-9ee7-ce204745d5f4\") " pod="openshift-authentication/oauth-openshift-56c495df99-f46mn" Jan 26 11:23:21 crc kubenswrapper[4867]: I0126 11:23:21.396702 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/0c16064f-eeb1-4c38-9ee7-ce204745d5f4-audit-dir\") pod \"oauth-openshift-56c495df99-f46mn\" (UID: \"0c16064f-eeb1-4c38-9ee7-ce204745d5f4\") " pod="openshift-authentication/oauth-openshift-56c495df99-f46mn" Jan 26 11:23:21 crc kubenswrapper[4867]: I0126 11:23:21.396737 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/0c16064f-eeb1-4c38-9ee7-ce204745d5f4-v4-0-config-user-template-error\") pod \"oauth-openshift-56c495df99-f46mn\" (UID: \"0c16064f-eeb1-4c38-9ee7-ce204745d5f4\") " pod="openshift-authentication/oauth-openshift-56c495df99-f46mn" Jan 26 11:23:21 crc kubenswrapper[4867]: I0126 11:23:21.396768 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0c16064f-eeb1-4c38-9ee7-ce204745d5f4-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-56c495df99-f46mn\" (UID: \"0c16064f-eeb1-4c38-9ee7-ce204745d5f4\") " pod="openshift-authentication/oauth-openshift-56c495df99-f46mn" Jan 26 11:23:21 crc 
kubenswrapper[4867]: I0126 11:23:21.396794 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/0c16064f-eeb1-4c38-9ee7-ce204745d5f4-v4-0-config-system-cliconfig\") pod \"oauth-openshift-56c495df99-f46mn\" (UID: \"0c16064f-eeb1-4c38-9ee7-ce204745d5f4\") " pod="openshift-authentication/oauth-openshift-56c495df99-f46mn" Jan 26 11:23:21 crc kubenswrapper[4867]: I0126 11:23:21.396809 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/0c16064f-eeb1-4c38-9ee7-ce204745d5f4-v4-0-config-system-service-ca\") pod \"oauth-openshift-56c495df99-f46mn\" (UID: \"0c16064f-eeb1-4c38-9ee7-ce204745d5f4\") " pod="openshift-authentication/oauth-openshift-56c495df99-f46mn" Jan 26 11:23:21 crc kubenswrapper[4867]: I0126 11:23:21.396827 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/0c16064f-eeb1-4c38-9ee7-ce204745d5f4-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-56c495df99-f46mn\" (UID: \"0c16064f-eeb1-4c38-9ee7-ce204745d5f4\") " pod="openshift-authentication/oauth-openshift-56c495df99-f46mn" Jan 26 11:23:21 crc kubenswrapper[4867]: I0126 11:23:21.396846 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/0c16064f-eeb1-4c38-9ee7-ce204745d5f4-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-56c495df99-f46mn\" (UID: \"0c16064f-eeb1-4c38-9ee7-ce204745d5f4\") " pod="openshift-authentication/oauth-openshift-56c495df99-f46mn" Jan 26 11:23:21 crc kubenswrapper[4867]: I0126 11:23:21.396867 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: 
\"kubernetes.io/secret/0c16064f-eeb1-4c38-9ee7-ce204745d5f4-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-56c495df99-f46mn\" (UID: \"0c16064f-eeb1-4c38-9ee7-ce204745d5f4\") " pod="openshift-authentication/oauth-openshift-56c495df99-f46mn" Jan 26 11:23:21 crc kubenswrapper[4867]: I0126 11:23:21.396909 4867 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/a91b5a18-2743-473f-8116-5fb1e348d05c-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Jan 26 11:23:21 crc kubenswrapper[4867]: I0126 11:23:21.396921 4867 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/a91b5a18-2743-473f-8116-5fb1e348d05c-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 26 11:23:21 crc kubenswrapper[4867]: I0126 11:23:21.396931 4867 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/a91b5a18-2743-473f-8116-5fb1e348d05c-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Jan 26 11:23:21 crc kubenswrapper[4867]: I0126 11:23:21.396943 4867 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/a91b5a18-2743-473f-8116-5fb1e348d05c-audit-dir\") on node \"crc\" DevicePath \"\"" Jan 26 11:23:21 crc kubenswrapper[4867]: I0126 11:23:21.396954 4867 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/a91b5a18-2743-473f-8116-5fb1e348d05c-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Jan 26 11:23:21 crc kubenswrapper[4867]: I0126 11:23:21.396963 4867 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a91b5a18-2743-473f-8116-5fb1e348d05c-v4-0-config-system-trusted-ca-bundle\") on node 
\"crc\" DevicePath \"\"" Jan 26 11:23:21 crc kubenswrapper[4867]: I0126 11:23:21.396974 4867 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/a91b5a18-2743-473f-8116-5fb1e348d05c-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Jan 26 11:23:21 crc kubenswrapper[4867]: I0126 11:23:21.396983 4867 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/a91b5a18-2743-473f-8116-5fb1e348d05c-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Jan 26 11:23:21 crc kubenswrapper[4867]: I0126 11:23:21.396995 4867 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/a91b5a18-2743-473f-8116-5fb1e348d05c-audit-policies\") on node \"crc\" DevicePath \"\"" Jan 26 11:23:21 crc kubenswrapper[4867]: I0126 11:23:21.397004 4867 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/a91b5a18-2743-473f-8116-5fb1e348d05c-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Jan 26 11:23:21 crc kubenswrapper[4867]: I0126 11:23:21.397014 4867 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/a91b5a18-2743-473f-8116-5fb1e348d05c-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Jan 26 11:23:21 crc kubenswrapper[4867]: I0126 11:23:21.397024 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fjlgh\" (UniqueName: \"kubernetes.io/projected/a91b5a18-2743-473f-8116-5fb1e348d05c-kube-api-access-fjlgh\") on node \"crc\" DevicePath \"\"" Jan 26 11:23:21 crc kubenswrapper[4867]: I0126 11:23:21.397035 4867 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: 
\"kubernetes.io/secret/a91b5a18-2743-473f-8116-5fb1e348d05c-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Jan 26 11:23:21 crc kubenswrapper[4867]: I0126 11:23:21.397045 4867 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/a91b5a18-2743-473f-8116-5fb1e348d05c-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Jan 26 11:23:21 crc kubenswrapper[4867]: I0126 11:23:21.397261 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/0c16064f-eeb1-4c38-9ee7-ce204745d5f4-audit-dir\") pod \"oauth-openshift-56c495df99-f46mn\" (UID: \"0c16064f-eeb1-4c38-9ee7-ce204745d5f4\") " pod="openshift-authentication/oauth-openshift-56c495df99-f46mn" Jan 26 11:23:21 crc kubenswrapper[4867]: I0126 11:23:21.398510 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/0c16064f-eeb1-4c38-9ee7-ce204745d5f4-v4-0-config-system-cliconfig\") pod \"oauth-openshift-56c495df99-f46mn\" (UID: \"0c16064f-eeb1-4c38-9ee7-ce204745d5f4\") " pod="openshift-authentication/oauth-openshift-56c495df99-f46mn" Jan 26 11:23:21 crc kubenswrapper[4867]: I0126 11:23:21.399870 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/0c16064f-eeb1-4c38-9ee7-ce204745d5f4-v4-0-config-system-service-ca\") pod \"oauth-openshift-56c495df99-f46mn\" (UID: \"0c16064f-eeb1-4c38-9ee7-ce204745d5f4\") " pod="openshift-authentication/oauth-openshift-56c495df99-f46mn" Jan 26 11:23:21 crc kubenswrapper[4867]: I0126 11:23:21.399979 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0c16064f-eeb1-4c38-9ee7-ce204745d5f4-v4-0-config-system-trusted-ca-bundle\") pod 
\"oauth-openshift-56c495df99-f46mn\" (UID: \"0c16064f-eeb1-4c38-9ee7-ce204745d5f4\") " pod="openshift-authentication/oauth-openshift-56c495df99-f46mn" Jan 26 11:23:21 crc kubenswrapper[4867]: I0126 11:23:21.402062 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/0c16064f-eeb1-4c38-9ee7-ce204745d5f4-audit-policies\") pod \"oauth-openshift-56c495df99-f46mn\" (UID: \"0c16064f-eeb1-4c38-9ee7-ce204745d5f4\") " pod="openshift-authentication/oauth-openshift-56c495df99-f46mn" Jan 26 11:23:21 crc kubenswrapper[4867]: I0126 11:23:21.402122 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/0c16064f-eeb1-4c38-9ee7-ce204745d5f4-v4-0-config-system-router-certs\") pod \"oauth-openshift-56c495df99-f46mn\" (UID: \"0c16064f-eeb1-4c38-9ee7-ce204745d5f4\") " pod="openshift-authentication/oauth-openshift-56c495df99-f46mn" Jan 26 11:23:21 crc kubenswrapper[4867]: I0126 11:23:21.402318 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/0c16064f-eeb1-4c38-9ee7-ce204745d5f4-v4-0-config-user-template-login\") pod \"oauth-openshift-56c495df99-f46mn\" (UID: \"0c16064f-eeb1-4c38-9ee7-ce204745d5f4\") " pod="openshift-authentication/oauth-openshift-56c495df99-f46mn" Jan 26 11:23:21 crc kubenswrapper[4867]: I0126 11:23:21.402388 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/0c16064f-eeb1-4c38-9ee7-ce204745d5f4-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-56c495df99-f46mn\" (UID: \"0c16064f-eeb1-4c38-9ee7-ce204745d5f4\") " pod="openshift-authentication/oauth-openshift-56c495df99-f46mn" Jan 26 11:23:21 crc kubenswrapper[4867]: I0126 11:23:21.402545 4867 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/0c16064f-eeb1-4c38-9ee7-ce204745d5f4-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-56c495df99-f46mn\" (UID: \"0c16064f-eeb1-4c38-9ee7-ce204745d5f4\") " pod="openshift-authentication/oauth-openshift-56c495df99-f46mn" Jan 26 11:23:21 crc kubenswrapper[4867]: I0126 11:23:21.402918 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/0c16064f-eeb1-4c38-9ee7-ce204745d5f4-v4-0-config-user-template-error\") pod \"oauth-openshift-56c495df99-f46mn\" (UID: \"0c16064f-eeb1-4c38-9ee7-ce204745d5f4\") " pod="openshift-authentication/oauth-openshift-56c495df99-f46mn" Jan 26 11:23:21 crc kubenswrapper[4867]: I0126 11:23:21.404142 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/0c16064f-eeb1-4c38-9ee7-ce204745d5f4-v4-0-config-system-serving-cert\") pod \"oauth-openshift-56c495df99-f46mn\" (UID: \"0c16064f-eeb1-4c38-9ee7-ce204745d5f4\") " pod="openshift-authentication/oauth-openshift-56c495df99-f46mn" Jan 26 11:23:21 crc kubenswrapper[4867]: I0126 11:23:21.407452 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/0c16064f-eeb1-4c38-9ee7-ce204745d5f4-v4-0-config-system-session\") pod \"oauth-openshift-56c495df99-f46mn\" (UID: \"0c16064f-eeb1-4c38-9ee7-ce204745d5f4\") " pod="openshift-authentication/oauth-openshift-56c495df99-f46mn" Jan 26 11:23:21 crc kubenswrapper[4867]: I0126 11:23:21.407565 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/0c16064f-eeb1-4c38-9ee7-ce204745d5f4-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-56c495df99-f46mn\" (UID: 
\"0c16064f-eeb1-4c38-9ee7-ce204745d5f4\") " pod="openshift-authentication/oauth-openshift-56c495df99-f46mn" Jan 26 11:23:21 crc kubenswrapper[4867]: I0126 11:23:21.416645 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r2nnj\" (UniqueName: \"kubernetes.io/projected/0c16064f-eeb1-4c38-9ee7-ce204745d5f4-kube-api-access-r2nnj\") pod \"oauth-openshift-56c495df99-f46mn\" (UID: \"0c16064f-eeb1-4c38-9ee7-ce204745d5f4\") " pod="openshift-authentication/oauth-openshift-56c495df99-f46mn" Jan 26 11:23:21 crc kubenswrapper[4867]: I0126 11:23:21.496799 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-56c495df99-f46mn" Jan 26 11:23:21 crc kubenswrapper[4867]: I0126 11:23:21.923159 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-n9jb5" event={"ID":"a91b5a18-2743-473f-8116-5fb1e348d05c","Type":"ContainerDied","Data":"c18ae9af67d0cc42c62ef574e6a2e36b13d8eb0f61cdd8bdda55e082663a33a4"} Jan 26 11:23:21 crc kubenswrapper[4867]: I0126 11:23:21.923254 4867 scope.go:117] "RemoveContainer" containerID="c7845838c24acaade17cb50361911deb057f972e499e1b98631dd4b1b197f346" Jan 26 11:23:21 crc kubenswrapper[4867]: I0126 11:23:21.923398 4867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-n9jb5" Jan 26 11:23:21 crc kubenswrapper[4867]: I0126 11:23:21.978494 4867 generic.go:334] "Generic (PLEG): container finished" podID="15505f79-e3ef-4aa2-8f0d-6d6c4b097fc7" containerID="9c998fc8fd1d06e2fa75f7c6a87fc83e23ab4fa0331710416fd563f279c782e3" exitCode=0 Jan 26 11:23:21 crc kubenswrapper[4867]: I0126 11:23:21.978580 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-p6pt2" event={"ID":"15505f79-e3ef-4aa2-8f0d-6d6c4b097fc7","Type":"ContainerDied","Data":"9c998fc8fd1d06e2fa75f7c6a87fc83e23ab4fa0331710416fd563f279c782e3"} Jan 26 11:23:22 crc kubenswrapper[4867]: I0126 11:23:22.049351 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-n9jb5"] Jan 26 11:23:22 crc kubenswrapper[4867]: I0126 11:23:22.056193 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-n9jb5"] Jan 26 11:23:22 crc kubenswrapper[4867]: I0126 11:23:22.223856 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-56c495df99-f46mn"] Jan 26 11:23:22 crc kubenswrapper[4867]: W0126 11:23:22.234386 4867 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0c16064f_eeb1_4c38_9ee7_ce204745d5f4.slice/crio-4b8af6a96079372d8a082ac7630b15e75816a6c5590c516e7f6f289cb87b9892 WatchSource:0}: Error finding container 4b8af6a96079372d8a082ac7630b15e75816a6c5590c516e7f6f289cb87b9892: Status 404 returned error can't find the container with id 4b8af6a96079372d8a082ac7630b15e75816a6c5590c516e7f6f289cb87b9892 Jan 26 11:23:22 crc kubenswrapper[4867]: I0126 11:23:22.572798 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a91b5a18-2743-473f-8116-5fb1e348d05c" path="/var/lib/kubelet/pods/a91b5a18-2743-473f-8116-5fb1e348d05c/volumes" 
Jan 26 11:23:22 crc kubenswrapper[4867]: I0126 11:23:22.986351 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-56c495df99-f46mn" event={"ID":"0c16064f-eeb1-4c38-9ee7-ce204745d5f4","Type":"ContainerStarted","Data":"4b8af6a96079372d8a082ac7630b15e75816a6c5590c516e7f6f289cb87b9892"} Jan 26 11:23:22 crc kubenswrapper[4867]: I0126 11:23:22.988587 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-m24tl" event={"ID":"f59c5f80-bfa1-445a-a552-ef0908b15efd","Type":"ContainerStarted","Data":"9066de3241e6dcf90e161dd7ea0430afade4c9fcdb3f569b563b0998909f2428"} Jan 26 11:23:23 crc kubenswrapper[4867]: I0126 11:23:23.010258 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-m24tl" podStartSLOduration=3.8934690180000002 podStartE2EDuration="6.010214112s" podCreationTimestamp="2026-01-26 11:23:17 +0000 UTC" firstStartedPulling="2026-01-26 11:23:19.887193863 +0000 UTC m=+349.585768773" lastFinishedPulling="2026-01-26 11:23:22.003938957 +0000 UTC m=+351.702513867" observedRunningTime="2026-01-26 11:23:23.009145512 +0000 UTC m=+352.707720432" watchObservedRunningTime="2026-01-26 11:23:23.010214112 +0000 UTC m=+352.708789022" Jan 26 11:23:24 crc kubenswrapper[4867]: I0126 11:23:24.013957 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-56c495df99-f46mn" event={"ID":"0c16064f-eeb1-4c38-9ee7-ce204745d5f4","Type":"ContainerStarted","Data":"df3f2cc70688e1d46a91eb965764c6ecc598b72990d0533ac98a74f6acd14940"} Jan 26 11:23:25 crc kubenswrapper[4867]: I0126 11:23:25.512181 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-jqxkw" Jan 26 11:23:25 crc kubenswrapper[4867]: I0126 11:23:25.512707 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="openshift-marketplace/certified-operators-jqxkw" Jan 26 11:23:25 crc kubenswrapper[4867]: I0126 11:23:25.603434 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-jqxkw" Jan 26 11:23:25 crc kubenswrapper[4867]: I0126 11:23:25.699655 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-2hms5" Jan 26 11:23:25 crc kubenswrapper[4867]: I0126 11:23:25.699710 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-2hms5" Jan 26 11:23:25 crc kubenswrapper[4867]: I0126 11:23:25.785458 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-2hms5" Jan 26 11:23:26 crc kubenswrapper[4867]: I0126 11:23:26.030249 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-56c495df99-f46mn" Jan 26 11:23:26 crc kubenswrapper[4867]: I0126 11:23:26.037466 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-56c495df99-f46mn" Jan 26 11:23:26 crc kubenswrapper[4867]: I0126 11:23:26.078816 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-jqxkw" Jan 26 11:23:26 crc kubenswrapper[4867]: I0126 11:23:26.082036 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-56c495df99-f46mn" podStartSLOduration=31.082011019 podStartE2EDuration="31.082011019s" podCreationTimestamp="2026-01-26 11:22:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 11:23:26.055500829 +0000 UTC m=+355.754075769" watchObservedRunningTime="2026-01-26 11:23:26.082011019 +0000 UTC m=+355.780585929" Jan 26 
11:23:26 crc kubenswrapper[4867]: I0126 11:23:26.103090 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-2hms5" Jan 26 11:23:28 crc kubenswrapper[4867]: I0126 11:23:28.046727 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-p6pt2" event={"ID":"15505f79-e3ef-4aa2-8f0d-6d6c4b097fc7","Type":"ContainerStarted","Data":"88c70dcfb6e0088d11b43d03d3fa417a867baf558acea4d3905c207387b2388a"} Jan 26 11:23:28 crc kubenswrapper[4867]: I0126 11:23:28.083425 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-m24tl" Jan 26 11:23:28 crc kubenswrapper[4867]: I0126 11:23:28.083490 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-m24tl" Jan 26 11:23:28 crc kubenswrapper[4867]: I0126 11:23:28.167987 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-m24tl" Jan 26 11:23:29 crc kubenswrapper[4867]: I0126 11:23:29.134129 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-m24tl" Jan 26 11:23:30 crc kubenswrapper[4867]: I0126 11:23:30.079899 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-p6pt2" podStartSLOduration=7.099830666 podStartE2EDuration="13.079879462s" podCreationTimestamp="2026-01-26 11:23:17 +0000 UTC" firstStartedPulling="2026-01-26 11:23:19.885076884 +0000 UTC m=+349.583651794" lastFinishedPulling="2026-01-26 11:23:25.86512568 +0000 UTC m=+355.563700590" observedRunningTime="2026-01-26 11:23:30.078697029 +0000 UTC m=+359.777271949" watchObservedRunningTime="2026-01-26 11:23:30.079879462 +0000 UTC m=+359.778454362" Jan 26 11:23:36 crc kubenswrapper[4867]: I0126 11:23:36.294495 4867 patch_prober.go:28] interesting 
pod/machine-config-daemon-g6cth container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 11:23:36 crc kubenswrapper[4867]: I0126 11:23:36.294929 4867 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-g6cth" podUID="115cad9f-057f-4e63-b408-8fa7a358a191" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 11:23:38 crc kubenswrapper[4867]: I0126 11:23:38.309979 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-p6pt2" Jan 26 11:23:38 crc kubenswrapper[4867]: I0126 11:23:38.310082 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-p6pt2" Jan 26 11:23:38 crc kubenswrapper[4867]: I0126 11:23:38.363609 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-p6pt2" Jan 26 11:23:39 crc kubenswrapper[4867]: I0126 11:23:39.152422 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-p6pt2" Jan 26 11:23:56 crc kubenswrapper[4867]: I0126 11:23:56.799371 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-796947dbf8-2rzj7"] Jan 26 11:23:56 crc kubenswrapper[4867]: I0126 11:23:56.800350 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-796947dbf8-2rzj7" podUID="5ac6b8b2-5238-4a01-8740-0208f94df4a7" containerName="controller-manager" containerID="cri-o://191f46dae64349c07145eff4626eee258cca3d672869d7c4ea90028f2edab549" gracePeriod=30 Jan 26 11:23:56 crc 
kubenswrapper[4867]: I0126 11:23:56.805891 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6675f987f6-s84cf"] Jan 26 11:23:56 crc kubenswrapper[4867]: I0126 11:23:56.806570 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-6675f987f6-s84cf" podUID="c8497c98-2d25-41e0-bf50-cb7a0ad8d8f3" containerName="route-controller-manager" containerID="cri-o://cd30f7af8fa6a90d04c4844d43711528aaf5bd760e520d02217a732db237854f" gracePeriod=30 Jan 26 11:23:57 crc kubenswrapper[4867]: I0126 11:23:57.623593 4867 generic.go:334] "Generic (PLEG): container finished" podID="5ac6b8b2-5238-4a01-8740-0208f94df4a7" containerID="191f46dae64349c07145eff4626eee258cca3d672869d7c4ea90028f2edab549" exitCode=0 Jan 26 11:23:57 crc kubenswrapper[4867]: I0126 11:23:57.623841 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-796947dbf8-2rzj7" event={"ID":"5ac6b8b2-5238-4a01-8740-0208f94df4a7","Type":"ContainerDied","Data":"191f46dae64349c07145eff4626eee258cca3d672869d7c4ea90028f2edab549"} Jan 26 11:23:57 crc kubenswrapper[4867]: I0126 11:23:57.627098 4867 generic.go:334] "Generic (PLEG): container finished" podID="c8497c98-2d25-41e0-bf50-cb7a0ad8d8f3" containerID="cd30f7af8fa6a90d04c4844d43711528aaf5bd760e520d02217a732db237854f" exitCode=0 Jan 26 11:23:57 crc kubenswrapper[4867]: I0126 11:23:57.627145 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6675f987f6-s84cf" event={"ID":"c8497c98-2d25-41e0-bf50-cb7a0ad8d8f3","Type":"ContainerDied","Data":"cd30f7af8fa6a90d04c4844d43711528aaf5bd760e520d02217a732db237854f"} Jan 26 11:23:57 crc kubenswrapper[4867]: I0126 11:23:57.837945 4867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6675f987f6-s84cf" Jan 26 11:23:57 crc kubenswrapper[4867]: I0126 11:23:57.845183 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-796947dbf8-2rzj7" Jan 26 11:23:57 crc kubenswrapper[4867]: I0126 11:23:57.907349 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5ac6b8b2-5238-4a01-8740-0208f94df4a7-client-ca\") pod \"5ac6b8b2-5238-4a01-8740-0208f94df4a7\" (UID: \"5ac6b8b2-5238-4a01-8740-0208f94df4a7\") " Jan 26 11:23:57 crc kubenswrapper[4867]: I0126 11:23:57.907415 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zfxdg\" (UniqueName: \"kubernetes.io/projected/c8497c98-2d25-41e0-bf50-cb7a0ad8d8f3-kube-api-access-zfxdg\") pod \"c8497c98-2d25-41e0-bf50-cb7a0ad8d8f3\" (UID: \"c8497c98-2d25-41e0-bf50-cb7a0ad8d8f3\") " Jan 26 11:23:57 crc kubenswrapper[4867]: I0126 11:23:57.907448 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/5ac6b8b2-5238-4a01-8740-0208f94df4a7-proxy-ca-bundles\") pod \"5ac6b8b2-5238-4a01-8740-0208f94df4a7\" (UID: \"5ac6b8b2-5238-4a01-8740-0208f94df4a7\") " Jan 26 11:23:57 crc kubenswrapper[4867]: I0126 11:23:57.907485 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c8497c98-2d25-41e0-bf50-cb7a0ad8d8f3-config\") pod \"c8497c98-2d25-41e0-bf50-cb7a0ad8d8f3\" (UID: \"c8497c98-2d25-41e0-bf50-cb7a0ad8d8f3\") " Jan 26 11:23:57 crc kubenswrapper[4867]: I0126 11:23:57.907519 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c8497c98-2d25-41e0-bf50-cb7a0ad8d8f3-serving-cert\") pod 
\"c8497c98-2d25-41e0-bf50-cb7a0ad8d8f3\" (UID: \"c8497c98-2d25-41e0-bf50-cb7a0ad8d8f3\") " Jan 26 11:23:57 crc kubenswrapper[4867]: I0126 11:23:57.907551 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fcbnm\" (UniqueName: \"kubernetes.io/projected/5ac6b8b2-5238-4a01-8740-0208f94df4a7-kube-api-access-fcbnm\") pod \"5ac6b8b2-5238-4a01-8740-0208f94df4a7\" (UID: \"5ac6b8b2-5238-4a01-8740-0208f94df4a7\") " Jan 26 11:23:57 crc kubenswrapper[4867]: I0126 11:23:57.907588 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c8497c98-2d25-41e0-bf50-cb7a0ad8d8f3-client-ca\") pod \"c8497c98-2d25-41e0-bf50-cb7a0ad8d8f3\" (UID: \"c8497c98-2d25-41e0-bf50-cb7a0ad8d8f3\") " Jan 26 11:23:57 crc kubenswrapper[4867]: I0126 11:23:57.907617 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5ac6b8b2-5238-4a01-8740-0208f94df4a7-config\") pod \"5ac6b8b2-5238-4a01-8740-0208f94df4a7\" (UID: \"5ac6b8b2-5238-4a01-8740-0208f94df4a7\") " Jan 26 11:23:57 crc kubenswrapper[4867]: I0126 11:23:57.907676 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5ac6b8b2-5238-4a01-8740-0208f94df4a7-serving-cert\") pod \"5ac6b8b2-5238-4a01-8740-0208f94df4a7\" (UID: \"5ac6b8b2-5238-4a01-8740-0208f94df4a7\") " Jan 26 11:23:57 crc kubenswrapper[4867]: I0126 11:23:57.908253 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5ac6b8b2-5238-4a01-8740-0208f94df4a7-client-ca" (OuterVolumeSpecName: "client-ca") pod "5ac6b8b2-5238-4a01-8740-0208f94df4a7" (UID: "5ac6b8b2-5238-4a01-8740-0208f94df4a7"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 11:23:57 crc kubenswrapper[4867]: I0126 11:23:57.908265 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5ac6b8b2-5238-4a01-8740-0208f94df4a7-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "5ac6b8b2-5238-4a01-8740-0208f94df4a7" (UID: "5ac6b8b2-5238-4a01-8740-0208f94df4a7"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 11:23:57 crc kubenswrapper[4867]: I0126 11:23:57.908880 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c8497c98-2d25-41e0-bf50-cb7a0ad8d8f3-client-ca" (OuterVolumeSpecName: "client-ca") pod "c8497c98-2d25-41e0-bf50-cb7a0ad8d8f3" (UID: "c8497c98-2d25-41e0-bf50-cb7a0ad8d8f3"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 11:23:57 crc kubenswrapper[4867]: I0126 11:23:57.908963 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c8497c98-2d25-41e0-bf50-cb7a0ad8d8f3-config" (OuterVolumeSpecName: "config") pod "c8497c98-2d25-41e0-bf50-cb7a0ad8d8f3" (UID: "c8497c98-2d25-41e0-bf50-cb7a0ad8d8f3"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 11:23:57 crc kubenswrapper[4867]: I0126 11:23:57.909413 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5ac6b8b2-5238-4a01-8740-0208f94df4a7-config" (OuterVolumeSpecName: "config") pod "5ac6b8b2-5238-4a01-8740-0208f94df4a7" (UID: "5ac6b8b2-5238-4a01-8740-0208f94df4a7"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 11:23:57 crc kubenswrapper[4867]: I0126 11:23:57.914329 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c8497c98-2d25-41e0-bf50-cb7a0ad8d8f3-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "c8497c98-2d25-41e0-bf50-cb7a0ad8d8f3" (UID: "c8497c98-2d25-41e0-bf50-cb7a0ad8d8f3"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 11:23:57 crc kubenswrapper[4867]: I0126 11:23:57.914342 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5ac6b8b2-5238-4a01-8740-0208f94df4a7-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "5ac6b8b2-5238-4a01-8740-0208f94df4a7" (UID: "5ac6b8b2-5238-4a01-8740-0208f94df4a7"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 11:23:57 crc kubenswrapper[4867]: I0126 11:23:57.915354 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c8497c98-2d25-41e0-bf50-cb7a0ad8d8f3-kube-api-access-zfxdg" (OuterVolumeSpecName: "kube-api-access-zfxdg") pod "c8497c98-2d25-41e0-bf50-cb7a0ad8d8f3" (UID: "c8497c98-2d25-41e0-bf50-cb7a0ad8d8f3"). InnerVolumeSpecName "kube-api-access-zfxdg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 11:23:57 crc kubenswrapper[4867]: I0126 11:23:57.916955 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5ac6b8b2-5238-4a01-8740-0208f94df4a7-kube-api-access-fcbnm" (OuterVolumeSpecName: "kube-api-access-fcbnm") pod "5ac6b8b2-5238-4a01-8740-0208f94df4a7" (UID: "5ac6b8b2-5238-4a01-8740-0208f94df4a7"). InnerVolumeSpecName "kube-api-access-fcbnm". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 11:23:58 crc kubenswrapper[4867]: I0126 11:23:58.009384 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fcbnm\" (UniqueName: \"kubernetes.io/projected/5ac6b8b2-5238-4a01-8740-0208f94df4a7-kube-api-access-fcbnm\") on node \"crc\" DevicePath \"\"" Jan 26 11:23:58 crc kubenswrapper[4867]: I0126 11:23:58.009442 4867 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c8497c98-2d25-41e0-bf50-cb7a0ad8d8f3-client-ca\") on node \"crc\" DevicePath \"\"" Jan 26 11:23:58 crc kubenswrapper[4867]: I0126 11:23:58.009455 4867 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5ac6b8b2-5238-4a01-8740-0208f94df4a7-config\") on node \"crc\" DevicePath \"\"" Jan 26 11:23:58 crc kubenswrapper[4867]: I0126 11:23:58.009467 4867 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5ac6b8b2-5238-4a01-8740-0208f94df4a7-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 26 11:23:58 crc kubenswrapper[4867]: I0126 11:23:58.009479 4867 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5ac6b8b2-5238-4a01-8740-0208f94df4a7-client-ca\") on node \"crc\" DevicePath \"\"" Jan 26 11:23:58 crc kubenswrapper[4867]: I0126 11:23:58.009492 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zfxdg\" (UniqueName: \"kubernetes.io/projected/c8497c98-2d25-41e0-bf50-cb7a0ad8d8f3-kube-api-access-zfxdg\") on node \"crc\" DevicePath \"\"" Jan 26 11:23:58 crc kubenswrapper[4867]: I0126 11:23:58.009502 4867 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/5ac6b8b2-5238-4a01-8740-0208f94df4a7-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 26 11:23:58 crc kubenswrapper[4867]: I0126 11:23:58.009512 4867 
reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c8497c98-2d25-41e0-bf50-cb7a0ad8d8f3-config\") on node \"crc\" DevicePath \"\"" Jan 26 11:23:58 crc kubenswrapper[4867]: I0126 11:23:58.009523 4867 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c8497c98-2d25-41e0-bf50-cb7a0ad8d8f3-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 26 11:23:58 crc kubenswrapper[4867]: I0126 11:23:58.636791 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-796947dbf8-2rzj7" event={"ID":"5ac6b8b2-5238-4a01-8740-0208f94df4a7","Type":"ContainerDied","Data":"bf5b2917ebb48a5ccddd67545b507a4f73618c5bcf14e7e67088ac53aaf5f06a"} Jan 26 11:23:58 crc kubenswrapper[4867]: I0126 11:23:58.636875 4867 scope.go:117] "RemoveContainer" containerID="191f46dae64349c07145eff4626eee258cca3d672869d7c4ea90028f2edab549" Jan 26 11:23:58 crc kubenswrapper[4867]: I0126 11:23:58.636830 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-796947dbf8-2rzj7" Jan 26 11:23:58 crc kubenswrapper[4867]: I0126 11:23:58.638902 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6675f987f6-s84cf" event={"ID":"c8497c98-2d25-41e0-bf50-cb7a0ad8d8f3","Type":"ContainerDied","Data":"bfc4b4bc35d580dc6d6468cfec57f94190f6a268059e8fe20650a2c063792d2e"} Jan 26 11:23:58 crc kubenswrapper[4867]: I0126 11:23:58.639025 4867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6675f987f6-s84cf" Jan 26 11:23:58 crc kubenswrapper[4867]: I0126 11:23:58.659428 4867 scope.go:117] "RemoveContainer" containerID="cd30f7af8fa6a90d04c4844d43711528aaf5bd760e520d02217a732db237854f" Jan 26 11:23:58 crc kubenswrapper[4867]: I0126 11:23:58.673630 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6675f987f6-s84cf"] Jan 26 11:23:58 crc kubenswrapper[4867]: I0126 11:23:58.677716 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6675f987f6-s84cf"] Jan 26 11:23:58 crc kubenswrapper[4867]: I0126 11:23:58.681122 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-796947dbf8-2rzj7"] Jan 26 11:23:58 crc kubenswrapper[4867]: I0126 11:23:58.683992 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-796947dbf8-2rzj7"] Jan 26 11:23:58 crc kubenswrapper[4867]: I0126 11:23:58.914187 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-7ff6c9d85-nhp5d"] Jan 26 11:23:58 crc kubenswrapper[4867]: E0126 11:23:58.916159 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c8497c98-2d25-41e0-bf50-cb7a0ad8d8f3" containerName="route-controller-manager" Jan 26 11:23:58 crc kubenswrapper[4867]: I0126 11:23:58.916385 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="c8497c98-2d25-41e0-bf50-cb7a0ad8d8f3" containerName="route-controller-manager" Jan 26 11:23:58 crc kubenswrapper[4867]: E0126 11:23:58.925140 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5ac6b8b2-5238-4a01-8740-0208f94df4a7" containerName="controller-manager" Jan 26 11:23:58 crc kubenswrapper[4867]: I0126 11:23:58.925400 4867 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="5ac6b8b2-5238-4a01-8740-0208f94df4a7" containerName="controller-manager" Jan 26 11:23:58 crc kubenswrapper[4867]: I0126 11:23:58.927793 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="c8497c98-2d25-41e0-bf50-cb7a0ad8d8f3" containerName="route-controller-manager" Jan 26 11:23:58 crc kubenswrapper[4867]: I0126 11:23:58.927997 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="5ac6b8b2-5238-4a01-8740-0208f94df4a7" containerName="controller-manager" Jan 26 11:23:58 crc kubenswrapper[4867]: I0126 11:23:58.929628 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-7ff6c9d85-nhp5d" Jan 26 11:23:58 crc kubenswrapper[4867]: I0126 11:23:58.935649 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-559c7fdb59-m4chk"] Jan 26 11:23:58 crc kubenswrapper[4867]: I0126 11:23:58.936523 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-559c7fdb59-m4chk" Jan 26 11:23:58 crc kubenswrapper[4867]: I0126 11:23:58.938406 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Jan 26 11:23:58 crc kubenswrapper[4867]: I0126 11:23:58.938514 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Jan 26 11:23:58 crc kubenswrapper[4867]: I0126 11:23:58.938613 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Jan 26 11:23:58 crc kubenswrapper[4867]: I0126 11:23:58.938695 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Jan 26 11:23:58 crc kubenswrapper[4867]: I0126 11:23:58.939644 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-7ff6c9d85-nhp5d"] Jan 26 11:23:58 crc kubenswrapper[4867]: I0126 11:23:58.942742 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Jan 26 11:23:58 crc kubenswrapper[4867]: I0126 11:23:58.945022 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Jan 26 11:23:58 crc kubenswrapper[4867]: I0126 11:23:58.945409 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Jan 26 11:23:58 crc kubenswrapper[4867]: I0126 11:23:58.945560 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Jan 26 11:23:58 crc kubenswrapper[4867]: I0126 11:23:58.945734 4867 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-route-controller-manager"/"kube-root-ca.crt" Jan 26 11:23:58 crc kubenswrapper[4867]: I0126 11:23:58.945987 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Jan 26 11:23:58 crc kubenswrapper[4867]: I0126 11:23:58.946260 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Jan 26 11:23:58 crc kubenswrapper[4867]: I0126 11:23:58.946423 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Jan 26 11:23:58 crc kubenswrapper[4867]: I0126 11:23:58.953592 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-559c7fdb59-m4chk"] Jan 26 11:23:58 crc kubenswrapper[4867]: I0126 11:23:58.954549 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Jan 26 11:23:59 crc kubenswrapper[4867]: I0126 11:23:59.026845 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/946430c8-7dc4-49ec-9ee5-ce6a426ea2d8-client-ca\") pod \"controller-manager-7ff6c9d85-nhp5d\" (UID: \"946430c8-7dc4-49ec-9ee5-ce6a426ea2d8\") " pod="openshift-controller-manager/controller-manager-7ff6c9d85-nhp5d" Jan 26 11:23:59 crc kubenswrapper[4867]: I0126 11:23:59.027171 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mk4g2\" (UniqueName: \"kubernetes.io/projected/452c834d-7870-4122-a01a-7dd35abd2f3a-kube-api-access-mk4g2\") pod \"route-controller-manager-559c7fdb59-m4chk\" (UID: \"452c834d-7870-4122-a01a-7dd35abd2f3a\") " pod="openshift-route-controller-manager/route-controller-manager-559c7fdb59-m4chk" Jan 26 11:23:59 crc kubenswrapper[4867]: I0126 11:23:59.027349 4867 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/946430c8-7dc4-49ec-9ee5-ce6a426ea2d8-config\") pod \"controller-manager-7ff6c9d85-nhp5d\" (UID: \"946430c8-7dc4-49ec-9ee5-ce6a426ea2d8\") " pod="openshift-controller-manager/controller-manager-7ff6c9d85-nhp5d" Jan 26 11:23:59 crc kubenswrapper[4867]: I0126 11:23:59.027438 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/452c834d-7870-4122-a01a-7dd35abd2f3a-client-ca\") pod \"route-controller-manager-559c7fdb59-m4chk\" (UID: \"452c834d-7870-4122-a01a-7dd35abd2f3a\") " pod="openshift-route-controller-manager/route-controller-manager-559c7fdb59-m4chk" Jan 26 11:23:59 crc kubenswrapper[4867]: I0126 11:23:59.027542 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/452c834d-7870-4122-a01a-7dd35abd2f3a-serving-cert\") pod \"route-controller-manager-559c7fdb59-m4chk\" (UID: \"452c834d-7870-4122-a01a-7dd35abd2f3a\") " pod="openshift-route-controller-manager/route-controller-manager-559c7fdb59-m4chk" Jan 26 11:23:59 crc kubenswrapper[4867]: I0126 11:23:59.027627 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ms6zr\" (UniqueName: \"kubernetes.io/projected/946430c8-7dc4-49ec-9ee5-ce6a426ea2d8-kube-api-access-ms6zr\") pod \"controller-manager-7ff6c9d85-nhp5d\" (UID: \"946430c8-7dc4-49ec-9ee5-ce6a426ea2d8\") " pod="openshift-controller-manager/controller-manager-7ff6c9d85-nhp5d" Jan 26 11:23:59 crc kubenswrapper[4867]: I0126 11:23:59.027728 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/946430c8-7dc4-49ec-9ee5-ce6a426ea2d8-serving-cert\") pod \"controller-manager-7ff6c9d85-nhp5d\" 
(UID: \"946430c8-7dc4-49ec-9ee5-ce6a426ea2d8\") " pod="openshift-controller-manager/controller-manager-7ff6c9d85-nhp5d" Jan 26 11:23:59 crc kubenswrapper[4867]: I0126 11:23:59.027816 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/946430c8-7dc4-49ec-9ee5-ce6a426ea2d8-proxy-ca-bundles\") pod \"controller-manager-7ff6c9d85-nhp5d\" (UID: \"946430c8-7dc4-49ec-9ee5-ce6a426ea2d8\") " pod="openshift-controller-manager/controller-manager-7ff6c9d85-nhp5d" Jan 26 11:23:59 crc kubenswrapper[4867]: I0126 11:23:59.027927 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/452c834d-7870-4122-a01a-7dd35abd2f3a-config\") pod \"route-controller-manager-559c7fdb59-m4chk\" (UID: \"452c834d-7870-4122-a01a-7dd35abd2f3a\") " pod="openshift-route-controller-manager/route-controller-manager-559c7fdb59-m4chk" Jan 26 11:23:59 crc kubenswrapper[4867]: I0126 11:23:59.129741 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/946430c8-7dc4-49ec-9ee5-ce6a426ea2d8-serving-cert\") pod \"controller-manager-7ff6c9d85-nhp5d\" (UID: \"946430c8-7dc4-49ec-9ee5-ce6a426ea2d8\") " pod="openshift-controller-manager/controller-manager-7ff6c9d85-nhp5d" Jan 26 11:23:59 crc kubenswrapper[4867]: I0126 11:23:59.129805 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/946430c8-7dc4-49ec-9ee5-ce6a426ea2d8-proxy-ca-bundles\") pod \"controller-manager-7ff6c9d85-nhp5d\" (UID: \"946430c8-7dc4-49ec-9ee5-ce6a426ea2d8\") " pod="openshift-controller-manager/controller-manager-7ff6c9d85-nhp5d" Jan 26 11:23:59 crc kubenswrapper[4867]: I0126 11:23:59.129850 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" 
(UniqueName: \"kubernetes.io/configmap/452c834d-7870-4122-a01a-7dd35abd2f3a-config\") pod \"route-controller-manager-559c7fdb59-m4chk\" (UID: \"452c834d-7870-4122-a01a-7dd35abd2f3a\") " pod="openshift-route-controller-manager/route-controller-manager-559c7fdb59-m4chk" Jan 26 11:23:59 crc kubenswrapper[4867]: I0126 11:23:59.129878 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/946430c8-7dc4-49ec-9ee5-ce6a426ea2d8-client-ca\") pod \"controller-manager-7ff6c9d85-nhp5d\" (UID: \"946430c8-7dc4-49ec-9ee5-ce6a426ea2d8\") " pod="openshift-controller-manager/controller-manager-7ff6c9d85-nhp5d" Jan 26 11:23:59 crc kubenswrapper[4867]: I0126 11:23:59.129902 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mk4g2\" (UniqueName: \"kubernetes.io/projected/452c834d-7870-4122-a01a-7dd35abd2f3a-kube-api-access-mk4g2\") pod \"route-controller-manager-559c7fdb59-m4chk\" (UID: \"452c834d-7870-4122-a01a-7dd35abd2f3a\") " pod="openshift-route-controller-manager/route-controller-manager-559c7fdb59-m4chk" Jan 26 11:23:59 crc kubenswrapper[4867]: I0126 11:23:59.129944 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/946430c8-7dc4-49ec-9ee5-ce6a426ea2d8-config\") pod \"controller-manager-7ff6c9d85-nhp5d\" (UID: \"946430c8-7dc4-49ec-9ee5-ce6a426ea2d8\") " pod="openshift-controller-manager/controller-manager-7ff6c9d85-nhp5d" Jan 26 11:23:59 crc kubenswrapper[4867]: I0126 11:23:59.129967 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/452c834d-7870-4122-a01a-7dd35abd2f3a-client-ca\") pod \"route-controller-manager-559c7fdb59-m4chk\" (UID: \"452c834d-7870-4122-a01a-7dd35abd2f3a\") " pod="openshift-route-controller-manager/route-controller-manager-559c7fdb59-m4chk" Jan 26 11:23:59 crc 
kubenswrapper[4867]: I0126 11:23:59.129992 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/452c834d-7870-4122-a01a-7dd35abd2f3a-serving-cert\") pod \"route-controller-manager-559c7fdb59-m4chk\" (UID: \"452c834d-7870-4122-a01a-7dd35abd2f3a\") " pod="openshift-route-controller-manager/route-controller-manager-559c7fdb59-m4chk"
Jan 26 11:23:59 crc kubenswrapper[4867]: I0126 11:23:59.130007 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ms6zr\" (UniqueName: \"kubernetes.io/projected/946430c8-7dc4-49ec-9ee5-ce6a426ea2d8-kube-api-access-ms6zr\") pod \"controller-manager-7ff6c9d85-nhp5d\" (UID: \"946430c8-7dc4-49ec-9ee5-ce6a426ea2d8\") " pod="openshift-controller-manager/controller-manager-7ff6c9d85-nhp5d"
Jan 26 11:23:59 crc kubenswrapper[4867]: I0126 11:23:59.131651 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/946430c8-7dc4-49ec-9ee5-ce6a426ea2d8-proxy-ca-bundles\") pod \"controller-manager-7ff6c9d85-nhp5d\" (UID: \"946430c8-7dc4-49ec-9ee5-ce6a426ea2d8\") " pod="openshift-controller-manager/controller-manager-7ff6c9d85-nhp5d"
Jan 26 11:23:59 crc kubenswrapper[4867]: I0126 11:23:59.132889 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/946430c8-7dc4-49ec-9ee5-ce6a426ea2d8-client-ca\") pod \"controller-manager-7ff6c9d85-nhp5d\" (UID: \"946430c8-7dc4-49ec-9ee5-ce6a426ea2d8\") " pod="openshift-controller-manager/controller-manager-7ff6c9d85-nhp5d"
Jan 26 11:23:59 crc kubenswrapper[4867]: I0126 11:23:59.132904 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/452c834d-7870-4122-a01a-7dd35abd2f3a-client-ca\") pod \"route-controller-manager-559c7fdb59-m4chk\" (UID: \"452c834d-7870-4122-a01a-7dd35abd2f3a\") " pod="openshift-route-controller-manager/route-controller-manager-559c7fdb59-m4chk"
Jan 26 11:23:59 crc kubenswrapper[4867]: I0126 11:23:59.133754 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/452c834d-7870-4122-a01a-7dd35abd2f3a-config\") pod \"route-controller-manager-559c7fdb59-m4chk\" (UID: \"452c834d-7870-4122-a01a-7dd35abd2f3a\") " pod="openshift-route-controller-manager/route-controller-manager-559c7fdb59-m4chk"
Jan 26 11:23:59 crc kubenswrapper[4867]: I0126 11:23:59.145133 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/452c834d-7870-4122-a01a-7dd35abd2f3a-serving-cert\") pod \"route-controller-manager-559c7fdb59-m4chk\" (UID: \"452c834d-7870-4122-a01a-7dd35abd2f3a\") " pod="openshift-route-controller-manager/route-controller-manager-559c7fdb59-m4chk"
Jan 26 11:23:59 crc kubenswrapper[4867]: I0126 11:23:59.146000 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/946430c8-7dc4-49ec-9ee5-ce6a426ea2d8-serving-cert\") pod \"controller-manager-7ff6c9d85-nhp5d\" (UID: \"946430c8-7dc4-49ec-9ee5-ce6a426ea2d8\") " pod="openshift-controller-manager/controller-manager-7ff6c9d85-nhp5d"
Jan 26 11:23:59 crc kubenswrapper[4867]: I0126 11:23:59.151265 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/946430c8-7dc4-49ec-9ee5-ce6a426ea2d8-config\") pod \"controller-manager-7ff6c9d85-nhp5d\" (UID: \"946430c8-7dc4-49ec-9ee5-ce6a426ea2d8\") " pod="openshift-controller-manager/controller-manager-7ff6c9d85-nhp5d"
Jan 26 11:23:59 crc kubenswrapper[4867]: I0126 11:23:59.170074 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mk4g2\" (UniqueName: \"kubernetes.io/projected/452c834d-7870-4122-a01a-7dd35abd2f3a-kube-api-access-mk4g2\") pod \"route-controller-manager-559c7fdb59-m4chk\" (UID: \"452c834d-7870-4122-a01a-7dd35abd2f3a\") " pod="openshift-route-controller-manager/route-controller-manager-559c7fdb59-m4chk"
Jan 26 11:23:59 crc kubenswrapper[4867]: I0126 11:23:59.172475 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ms6zr\" (UniqueName: \"kubernetes.io/projected/946430c8-7dc4-49ec-9ee5-ce6a426ea2d8-kube-api-access-ms6zr\") pod \"controller-manager-7ff6c9d85-nhp5d\" (UID: \"946430c8-7dc4-49ec-9ee5-ce6a426ea2d8\") " pod="openshift-controller-manager/controller-manager-7ff6c9d85-nhp5d"
Jan 26 11:23:59 crc kubenswrapper[4867]: I0126 11:23:59.267381 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-7ff6c9d85-nhp5d"
Jan 26 11:23:59 crc kubenswrapper[4867]: I0126 11:23:59.283550 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-559c7fdb59-m4chk"
Jan 26 11:23:59 crc kubenswrapper[4867]: I0126 11:23:59.503729 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-7ff6c9d85-nhp5d"]
Jan 26 11:23:59 crc kubenswrapper[4867]: I0126 11:23:59.562736 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-559c7fdb59-m4chk"]
Jan 26 11:23:59 crc kubenswrapper[4867]: W0126 11:23:59.568173 4867 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod452c834d_7870_4122_a01a_7dd35abd2f3a.slice/crio-55e27bf85d2b9d1e64714ceecb5a27adec8c060b8a4631d4a0276e690c9dea47 WatchSource:0}: Error finding container 55e27bf85d2b9d1e64714ceecb5a27adec8c060b8a4631d4a0276e690c9dea47: Status 404 returned error can't find the container with id 55e27bf85d2b9d1e64714ceecb5a27adec8c060b8a4631d4a0276e690c9dea47
Jan 26 11:23:59 crc kubenswrapper[4867]: I0126 11:23:59.644805 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7ff6c9d85-nhp5d" event={"ID":"946430c8-7dc4-49ec-9ee5-ce6a426ea2d8","Type":"ContainerStarted","Data":"d52ef991d43bc8ae307aafe9a0e6e7b91df27bf54c58e401fe893769b0b07d1a"}
Jan 26 11:23:59 crc kubenswrapper[4867]: I0126 11:23:59.647593 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-559c7fdb59-m4chk" event={"ID":"452c834d-7870-4122-a01a-7dd35abd2f3a","Type":"ContainerStarted","Data":"55e27bf85d2b9d1e64714ceecb5a27adec8c060b8a4631d4a0276e690c9dea47"}
Jan 26 11:24:00 crc kubenswrapper[4867]: I0126 11:24:00.373705 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-p6b5s"]
Jan 26 11:24:00 crc kubenswrapper[4867]: I0126 11:24:00.374707 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-p6b5s"
Jan 26 11:24:00 crc kubenswrapper[4867]: I0126 11:24:00.428335 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-p6b5s"]
Jan 26 11:24:00 crc kubenswrapper[4867]: I0126 11:24:00.465451 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/c4f18552-3949-4784-a939-dff3039b222f-ca-trust-extracted\") pod \"image-registry-66df7c8f76-p6b5s\" (UID: \"c4f18552-3949-4784-a939-dff3039b222f\") " pod="openshift-image-registry/image-registry-66df7c8f76-p6b5s"
Jan 26 11:24:00 crc kubenswrapper[4867]: I0126 11:24:00.465516 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/c4f18552-3949-4784-a939-dff3039b222f-registry-tls\") pod \"image-registry-66df7c8f76-p6b5s\" (UID: \"c4f18552-3949-4784-a939-dff3039b222f\") " pod="openshift-image-registry/image-registry-66df7c8f76-p6b5s"
Jan 26 11:24:00 crc kubenswrapper[4867]: I0126 11:24:00.465574 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/c4f18552-3949-4784-a939-dff3039b222f-installation-pull-secrets\") pod \"image-registry-66df7c8f76-p6b5s\" (UID: \"c4f18552-3949-4784-a939-dff3039b222f\") " pod="openshift-image-registry/image-registry-66df7c8f76-p6b5s"
Jan 26 11:24:00 crc kubenswrapper[4867]: I0126 11:24:00.465726 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cspdw\" (UniqueName: \"kubernetes.io/projected/c4f18552-3949-4784-a939-dff3039b222f-kube-api-access-cspdw\") pod \"image-registry-66df7c8f76-p6b5s\" (UID: \"c4f18552-3949-4784-a939-dff3039b222f\") " pod="openshift-image-registry/image-registry-66df7c8f76-p6b5s"
Jan 26 11:24:00 crc kubenswrapper[4867]: I0126 11:24:00.465821 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/c4f18552-3949-4784-a939-dff3039b222f-registry-certificates\") pod \"image-registry-66df7c8f76-p6b5s\" (UID: \"c4f18552-3949-4784-a939-dff3039b222f\") " pod="openshift-image-registry/image-registry-66df7c8f76-p6b5s"
Jan 26 11:24:00 crc kubenswrapper[4867]: I0126 11:24:00.465856 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-p6b5s\" (UID: \"c4f18552-3949-4784-a939-dff3039b222f\") " pod="openshift-image-registry/image-registry-66df7c8f76-p6b5s"
Jan 26 11:24:00 crc kubenswrapper[4867]: I0126 11:24:00.465940 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/c4f18552-3949-4784-a939-dff3039b222f-bound-sa-token\") pod \"image-registry-66df7c8f76-p6b5s\" (UID: \"c4f18552-3949-4784-a939-dff3039b222f\") " pod="openshift-image-registry/image-registry-66df7c8f76-p6b5s"
Jan 26 11:24:00 crc kubenswrapper[4867]: I0126 11:24:00.465983 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c4f18552-3949-4784-a939-dff3039b222f-trusted-ca\") pod \"image-registry-66df7c8f76-p6b5s\" (UID: \"c4f18552-3949-4784-a939-dff3039b222f\") " pod="openshift-image-registry/image-registry-66df7c8f76-p6b5s"
Jan 26 11:24:00 crc kubenswrapper[4867]: I0126 11:24:00.490917 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-p6b5s\" (UID: \"c4f18552-3949-4784-a939-dff3039b222f\") " pod="openshift-image-registry/image-registry-66df7c8f76-p6b5s"
Jan 26 11:24:00 crc kubenswrapper[4867]: I0126 11:24:00.568201 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/c4f18552-3949-4784-a939-dff3039b222f-ca-trust-extracted\") pod \"image-registry-66df7c8f76-p6b5s\" (UID: \"c4f18552-3949-4784-a939-dff3039b222f\") " pod="openshift-image-registry/image-registry-66df7c8f76-p6b5s"
Jan 26 11:24:00 crc kubenswrapper[4867]: I0126 11:24:00.568287 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/c4f18552-3949-4784-a939-dff3039b222f-registry-tls\") pod \"image-registry-66df7c8f76-p6b5s\" (UID: \"c4f18552-3949-4784-a939-dff3039b222f\") " pod="openshift-image-registry/image-registry-66df7c8f76-p6b5s"
Jan 26 11:24:00 crc kubenswrapper[4867]: I0126 11:24:00.568333 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/c4f18552-3949-4784-a939-dff3039b222f-installation-pull-secrets\") pod \"image-registry-66df7c8f76-p6b5s\" (UID: \"c4f18552-3949-4784-a939-dff3039b222f\") " pod="openshift-image-registry/image-registry-66df7c8f76-p6b5s"
Jan 26 11:24:00 crc kubenswrapper[4867]: I0126 11:24:00.568374 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cspdw\" (UniqueName: \"kubernetes.io/projected/c4f18552-3949-4784-a939-dff3039b222f-kube-api-access-cspdw\") pod \"image-registry-66df7c8f76-p6b5s\" (UID: \"c4f18552-3949-4784-a939-dff3039b222f\") " pod="openshift-image-registry/image-registry-66df7c8f76-p6b5s"
Jan 26 11:24:00 crc kubenswrapper[4867]: I0126 11:24:00.568414 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/c4f18552-3949-4784-a939-dff3039b222f-registry-certificates\") pod \"image-registry-66df7c8f76-p6b5s\" (UID: \"c4f18552-3949-4784-a939-dff3039b222f\") " pod="openshift-image-registry/image-registry-66df7c8f76-p6b5s"
Jan 26 11:24:00 crc kubenswrapper[4867]: I0126 11:24:00.568462 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/c4f18552-3949-4784-a939-dff3039b222f-bound-sa-token\") pod \"image-registry-66df7c8f76-p6b5s\" (UID: \"c4f18552-3949-4784-a939-dff3039b222f\") " pod="openshift-image-registry/image-registry-66df7c8f76-p6b5s"
Jan 26 11:24:00 crc kubenswrapper[4867]: I0126 11:24:00.568488 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c4f18552-3949-4784-a939-dff3039b222f-trusted-ca\") pod \"image-registry-66df7c8f76-p6b5s\" (UID: \"c4f18552-3949-4784-a939-dff3039b222f\") " pod="openshift-image-registry/image-registry-66df7c8f76-p6b5s"
Jan 26 11:24:00 crc kubenswrapper[4867]: I0126 11:24:00.569453 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/c4f18552-3949-4784-a939-dff3039b222f-ca-trust-extracted\") pod \"image-registry-66df7c8f76-p6b5s\" (UID: \"c4f18552-3949-4784-a939-dff3039b222f\") " pod="openshift-image-registry/image-registry-66df7c8f76-p6b5s"
Jan 26 11:24:00 crc kubenswrapper[4867]: I0126 11:24:00.570104 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/c4f18552-3949-4784-a939-dff3039b222f-registry-certificates\") pod \"image-registry-66df7c8f76-p6b5s\" (UID: \"c4f18552-3949-4784-a939-dff3039b222f\") " pod="openshift-image-registry/image-registry-66df7c8f76-p6b5s"
Jan 26 11:24:00 crc kubenswrapper[4867]: I0126 11:24:00.570394 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c4f18552-3949-4784-a939-dff3039b222f-trusted-ca\") pod \"image-registry-66df7c8f76-p6b5s\" (UID: \"c4f18552-3949-4784-a939-dff3039b222f\") " pod="openshift-image-registry/image-registry-66df7c8f76-p6b5s"
Jan 26 11:24:00 crc kubenswrapper[4867]: I0126 11:24:00.574622 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5ac6b8b2-5238-4a01-8740-0208f94df4a7" path="/var/lib/kubelet/pods/5ac6b8b2-5238-4a01-8740-0208f94df4a7/volumes"
Jan 26 11:24:00 crc kubenswrapper[4867]: I0126 11:24:00.575341 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c8497c98-2d25-41e0-bf50-cb7a0ad8d8f3" path="/var/lib/kubelet/pods/c8497c98-2d25-41e0-bf50-cb7a0ad8d8f3/volumes"
Jan 26 11:24:00 crc kubenswrapper[4867]: I0126 11:24:00.576506 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/c4f18552-3949-4784-a939-dff3039b222f-registry-tls\") pod \"image-registry-66df7c8f76-p6b5s\" (UID: \"c4f18552-3949-4784-a939-dff3039b222f\") " pod="openshift-image-registry/image-registry-66df7c8f76-p6b5s"
Jan 26 11:24:00 crc kubenswrapper[4867]: I0126 11:24:00.576760 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/c4f18552-3949-4784-a939-dff3039b222f-installation-pull-secrets\") pod \"image-registry-66df7c8f76-p6b5s\" (UID: \"c4f18552-3949-4784-a939-dff3039b222f\") " pod="openshift-image-registry/image-registry-66df7c8f76-p6b5s"
Jan 26 11:24:00 crc kubenswrapper[4867]: I0126 11:24:00.592594 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/c4f18552-3949-4784-a939-dff3039b222f-bound-sa-token\") pod \"image-registry-66df7c8f76-p6b5s\" (UID: \"c4f18552-3949-4784-a939-dff3039b222f\") " pod="openshift-image-registry/image-registry-66df7c8f76-p6b5s"
Jan 26 11:24:00 crc kubenswrapper[4867]: I0126 11:24:00.601031 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cspdw\" (UniqueName: \"kubernetes.io/projected/c4f18552-3949-4784-a939-dff3039b222f-kube-api-access-cspdw\") pod \"image-registry-66df7c8f76-p6b5s\" (UID: \"c4f18552-3949-4784-a939-dff3039b222f\") " pod="openshift-image-registry/image-registry-66df7c8f76-p6b5s"
Jan 26 11:24:00 crc kubenswrapper[4867]: I0126 11:24:00.655469 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-559c7fdb59-m4chk" event={"ID":"452c834d-7870-4122-a01a-7dd35abd2f3a","Type":"ContainerStarted","Data":"71efb17d4b30f40d0ac5d2e69d6805e14f5e161ea1e5d65951cc0679634b17e2"}
Jan 26 11:24:00 crc kubenswrapper[4867]: I0126 11:24:00.658716 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-559c7fdb59-m4chk"
Jan 26 11:24:00 crc kubenswrapper[4867]: I0126 11:24:00.661659 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7ff6c9d85-nhp5d" event={"ID":"946430c8-7dc4-49ec-9ee5-ce6a426ea2d8","Type":"ContainerStarted","Data":"35d6e1ed454c181ff2354628f315fd5c3c625d976a5175f57e86178b182d2381"}
Jan 26 11:24:00 crc kubenswrapper[4867]: I0126 11:24:00.662050 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-7ff6c9d85-nhp5d"
Jan 26 11:24:00 crc kubenswrapper[4867]: I0126 11:24:00.662313 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-559c7fdb59-m4chk"
Jan 26 11:24:00 crc kubenswrapper[4867]: I0126 11:24:00.666956 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-7ff6c9d85-nhp5d"
Jan 26 11:24:00 crc kubenswrapper[4867]: I0126 11:24:00.677185 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-559c7fdb59-m4chk" podStartSLOduration=4.677158754 podStartE2EDuration="4.677158754s" podCreationTimestamp="2026-01-26 11:23:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 11:24:00.675912198 +0000 UTC m=+390.374487128" watchObservedRunningTime="2026-01-26 11:24:00.677158754 +0000 UTC m=+390.375733684"
Jan 26 11:24:00 crc kubenswrapper[4867]: I0126 11:24:00.691826 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-p6b5s"
Jan 26 11:24:00 crc kubenswrapper[4867]: I0126 11:24:00.700809 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-7ff6c9d85-nhp5d" podStartSLOduration=4.700778303 podStartE2EDuration="4.700778303s" podCreationTimestamp="2026-01-26 11:23:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 11:24:00.692836321 +0000 UTC m=+390.391411241" watchObservedRunningTime="2026-01-26 11:24:00.700778303 +0000 UTC m=+390.399353223"
Jan 26 11:24:00 crc kubenswrapper[4867]: I0126 11:24:00.925299 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-p6b5s"]
Jan 26 11:24:00 crc kubenswrapper[4867]: W0126 11:24:00.932517 4867 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc4f18552_3949_4784_a939_dff3039b222f.slice/crio-c1654c897d4443aca26d214655fe03275dd6becdc8a81fd3a1810e36a04e2f79 WatchSource:0}: Error finding container c1654c897d4443aca26d214655fe03275dd6becdc8a81fd3a1810e36a04e2f79: Status 404 returned error can't find the container with id c1654c897d4443aca26d214655fe03275dd6becdc8a81fd3a1810e36a04e2f79
Jan 26 11:24:01 crc kubenswrapper[4867]: I0126 11:24:01.671766 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-p6b5s" event={"ID":"c4f18552-3949-4784-a939-dff3039b222f","Type":"ContainerStarted","Data":"c1654c897d4443aca26d214655fe03275dd6becdc8a81fd3a1810e36a04e2f79"}
Jan 26 11:24:02 crc kubenswrapper[4867]: I0126 11:24:02.680035 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-p6b5s" event={"ID":"c4f18552-3949-4784-a939-dff3039b222f","Type":"ContainerStarted","Data":"70e1e18625391b1f4e34efc47fb4a05794f35f3185fa995ebd33b14bac4b673b"}
Jan 26 11:24:02 crc kubenswrapper[4867]: I0126 11:24:02.680679 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-66df7c8f76-p6b5s"
Jan 26 11:24:02 crc kubenswrapper[4867]: I0126 11:24:02.710715 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-66df7c8f76-p6b5s" podStartSLOduration=2.71068243 podStartE2EDuration="2.71068243s" podCreationTimestamp="2026-01-26 11:24:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 11:24:02.708568601 +0000 UTC m=+392.407143561" watchObservedRunningTime="2026-01-26 11:24:02.71068243 +0000 UTC m=+392.409257380"
Jan 26 11:24:06 crc kubenswrapper[4867]: I0126 11:24:06.294096 4867 patch_prober.go:28] interesting pod/machine-config-daemon-g6cth container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 26 11:24:06 crc kubenswrapper[4867]: I0126 11:24:06.294815 4867 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-g6cth" podUID="115cad9f-057f-4e63-b408-8fa7a358a191" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 26 11:24:06 crc kubenswrapper[4867]: I0126 11:24:06.294890 4867 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-g6cth"
Jan 26 11:24:06 crc kubenswrapper[4867]: I0126 11:24:06.295751 4867 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"f6f4649a5a6ff9f987b90727334fbb91d637d6ff3f79120bcd4b01a76eef1fb9"} pod="openshift-machine-config-operator/machine-config-daemon-g6cth" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Jan 26 11:24:06 crc kubenswrapper[4867]: I0126 11:24:06.295820 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-g6cth" podUID="115cad9f-057f-4e63-b408-8fa7a358a191" containerName="machine-config-daemon" containerID="cri-o://f6f4649a5a6ff9f987b90727334fbb91d637d6ff3f79120bcd4b01a76eef1fb9" gracePeriod=600
Jan 26 11:24:06 crc kubenswrapper[4867]: I0126 11:24:06.709025 4867 generic.go:334] "Generic (PLEG): container finished" podID="115cad9f-057f-4e63-b408-8fa7a358a191" containerID="f6f4649a5a6ff9f987b90727334fbb91d637d6ff3f79120bcd4b01a76eef1fb9" exitCode=0
Jan 26 11:24:06 crc kubenswrapper[4867]: I0126 11:24:06.709084 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-g6cth" event={"ID":"115cad9f-057f-4e63-b408-8fa7a358a191","Type":"ContainerDied","Data":"f6f4649a5a6ff9f987b90727334fbb91d637d6ff3f79120bcd4b01a76eef1fb9"}
Jan 26 11:24:06 crc kubenswrapper[4867]: I0126 11:24:06.709127 4867 scope.go:117] "RemoveContainer" containerID="a97a794991491a7b7ac8c0d20d0fa2f4f36c8ba882962b5c203933431d324105"
Jan 26 11:24:07 crc kubenswrapper[4867]: I0126 11:24:07.724826 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-g6cth" event={"ID":"115cad9f-057f-4e63-b408-8fa7a358a191","Type":"ContainerStarted","Data":"0fe8ca3d314e4d17df3b97806d9aca627e634096754401de141f98cba0b737ca"}
Jan 26 11:24:20 crc kubenswrapper[4867]: I0126 11:24:20.698973 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-66df7c8f76-p6b5s"
Jan 26 11:24:20 crc kubenswrapper[4867]: I0126 11:24:20.800629 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-skdxp"]
Jan 26 11:24:45 crc kubenswrapper[4867]: I0126 11:24:45.857900 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-image-registry/image-registry-697d97f7c8-skdxp" podUID="b3348ed5-3007-4ff3-b77d-ecb758f238df" containerName="registry" containerID="cri-o://d65fed487f8872774ff9062bdfbd8def8c0c8b8df10dfd3e8160b8df411cdb9b" gracePeriod=30
Jan 26 11:24:46 crc kubenswrapper[4867]: I0126 11:24:46.062922 4867 generic.go:334] "Generic (PLEG): container finished" podID="b3348ed5-3007-4ff3-b77d-ecb758f238df" containerID="d65fed487f8872774ff9062bdfbd8def8c0c8b8df10dfd3e8160b8df411cdb9b" exitCode=0
Jan 26 11:24:46 crc kubenswrapper[4867]: I0126 11:24:46.063005 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-skdxp" event={"ID":"b3348ed5-3007-4ff3-b77d-ecb758f238df","Type":"ContainerDied","Data":"d65fed487f8872774ff9062bdfbd8def8c0c8b8df10dfd3e8160b8df411cdb9b"}
Jan 26 11:24:46 crc kubenswrapper[4867]: I0126 11:24:46.677626 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-skdxp"
Jan 26 11:24:46 crc kubenswrapper[4867]: I0126 11:24:46.816971 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/b3348ed5-3007-4ff3-b77d-ecb758f238df-registry-certificates\") pod \"b3348ed5-3007-4ff3-b77d-ecb758f238df\" (UID: \"b3348ed5-3007-4ff3-b77d-ecb758f238df\") "
Jan 26 11:24:46 crc kubenswrapper[4867]: I0126 11:24:46.817026 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-544gt\" (UniqueName: \"kubernetes.io/projected/b3348ed5-3007-4ff3-b77d-ecb758f238df-kube-api-access-544gt\") pod \"b3348ed5-3007-4ff3-b77d-ecb758f238df\" (UID: \"b3348ed5-3007-4ff3-b77d-ecb758f238df\") "
Jan 26 11:24:46 crc kubenswrapper[4867]: I0126 11:24:46.817062 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/b3348ed5-3007-4ff3-b77d-ecb758f238df-installation-pull-secrets\") pod \"b3348ed5-3007-4ff3-b77d-ecb758f238df\" (UID: \"b3348ed5-3007-4ff3-b77d-ecb758f238df\") "
Jan 26 11:24:46 crc kubenswrapper[4867]: I0126 11:24:46.817148 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/b3348ed5-3007-4ff3-b77d-ecb758f238df-bound-sa-token\") pod \"b3348ed5-3007-4ff3-b77d-ecb758f238df\" (UID: \"b3348ed5-3007-4ff3-b77d-ecb758f238df\") "
Jan 26 11:24:46 crc kubenswrapper[4867]: I0126 11:24:46.817172 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/b3348ed5-3007-4ff3-b77d-ecb758f238df-registry-tls\") pod \"b3348ed5-3007-4ff3-b77d-ecb758f238df\" (UID: \"b3348ed5-3007-4ff3-b77d-ecb758f238df\") "
Jan 26 11:24:46 crc kubenswrapper[4867]: I0126 11:24:46.817189 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b3348ed5-3007-4ff3-b77d-ecb758f238df-trusted-ca\") pod \"b3348ed5-3007-4ff3-b77d-ecb758f238df\" (UID: \"b3348ed5-3007-4ff3-b77d-ecb758f238df\") "
Jan 26 11:24:46 crc kubenswrapper[4867]: I0126 11:24:46.817258 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/b3348ed5-3007-4ff3-b77d-ecb758f238df-ca-trust-extracted\") pod \"b3348ed5-3007-4ff3-b77d-ecb758f238df\" (UID: \"b3348ed5-3007-4ff3-b77d-ecb758f238df\") "
Jan 26 11:24:46 crc kubenswrapper[4867]: I0126 11:24:46.817656 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-storage\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"b3348ed5-3007-4ff3-b77d-ecb758f238df\" (UID: \"b3348ed5-3007-4ff3-b77d-ecb758f238df\") "
Jan 26 11:24:46 crc kubenswrapper[4867]: I0126 11:24:46.818324 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b3348ed5-3007-4ff3-b77d-ecb758f238df-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "b3348ed5-3007-4ff3-b77d-ecb758f238df" (UID: "b3348ed5-3007-4ff3-b77d-ecb758f238df"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 26 11:24:46 crc kubenswrapper[4867]: I0126 11:24:46.818503 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b3348ed5-3007-4ff3-b77d-ecb758f238df-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "b3348ed5-3007-4ff3-b77d-ecb758f238df" (UID: "b3348ed5-3007-4ff3-b77d-ecb758f238df"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 26 11:24:46 crc kubenswrapper[4867]: I0126 11:24:46.826500 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b3348ed5-3007-4ff3-b77d-ecb758f238df-kube-api-access-544gt" (OuterVolumeSpecName: "kube-api-access-544gt") pod "b3348ed5-3007-4ff3-b77d-ecb758f238df" (UID: "b3348ed5-3007-4ff3-b77d-ecb758f238df"). InnerVolumeSpecName "kube-api-access-544gt". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 26 11:24:46 crc kubenswrapper[4867]: I0126 11:24:46.827492 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b3348ed5-3007-4ff3-b77d-ecb758f238df-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "b3348ed5-3007-4ff3-b77d-ecb758f238df" (UID: "b3348ed5-3007-4ff3-b77d-ecb758f238df"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 11:24:46 crc kubenswrapper[4867]: I0126 11:24:46.828302 4867 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/b3348ed5-3007-4ff3-b77d-ecb758f238df-registry-certificates\") on node \"crc\" DevicePath \"\""
Jan 26 11:24:46 crc kubenswrapper[4867]: I0126 11:24:46.828335 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-544gt\" (UniqueName: \"kubernetes.io/projected/b3348ed5-3007-4ff3-b77d-ecb758f238df-kube-api-access-544gt\") on node \"crc\" DevicePath \"\""
Jan 26 11:24:46 crc kubenswrapper[4867]: I0126 11:24:46.828347 4867 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/b3348ed5-3007-4ff3-b77d-ecb758f238df-installation-pull-secrets\") on node \"crc\" DevicePath \"\""
Jan 26 11:24:46 crc kubenswrapper[4867]: I0126 11:24:46.828362 4867 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b3348ed5-3007-4ff3-b77d-ecb758f238df-trusted-ca\") on node \"crc\" DevicePath \"\""
Jan 26 11:24:46 crc kubenswrapper[4867]: I0126 11:24:46.828716 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b3348ed5-3007-4ff3-b77d-ecb758f238df-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "b3348ed5-3007-4ff3-b77d-ecb758f238df" (UID: "b3348ed5-3007-4ff3-b77d-ecb758f238df"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 26 11:24:46 crc kubenswrapper[4867]: I0126 11:24:46.832857 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b3348ed5-3007-4ff3-b77d-ecb758f238df-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "b3348ed5-3007-4ff3-b77d-ecb758f238df" (UID: "b3348ed5-3007-4ff3-b77d-ecb758f238df"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 26 11:24:46 crc kubenswrapper[4867]: I0126 11:24:46.833645 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b3348ed5-3007-4ff3-b77d-ecb758f238df-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "b3348ed5-3007-4ff3-b77d-ecb758f238df" (UID: "b3348ed5-3007-4ff3-b77d-ecb758f238df"). InnerVolumeSpecName "ca-trust-extracted". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 26 11:24:46 crc kubenswrapper[4867]: I0126 11:24:46.837369 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "registry-storage") pod "b3348ed5-3007-4ff3-b77d-ecb758f238df" (UID: "b3348ed5-3007-4ff3-b77d-ecb758f238df"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue ""
Jan 26 11:24:46 crc kubenswrapper[4867]: I0126 11:24:46.929837 4867 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/b3348ed5-3007-4ff3-b77d-ecb758f238df-bound-sa-token\") on node \"crc\" DevicePath \"\""
Jan 26 11:24:46 crc kubenswrapper[4867]: I0126 11:24:46.930101 4867 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/b3348ed5-3007-4ff3-b77d-ecb758f238df-registry-tls\") on node \"crc\" DevicePath \"\""
Jan 26 11:24:46 crc kubenswrapper[4867]: I0126 11:24:46.930121 4867 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/b3348ed5-3007-4ff3-b77d-ecb758f238df-ca-trust-extracted\") on node \"crc\" DevicePath \"\""
Jan 26 11:24:47 crc kubenswrapper[4867]: I0126 11:24:47.070596 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-skdxp" event={"ID":"b3348ed5-3007-4ff3-b77d-ecb758f238df","Type":"ContainerDied","Data":"c055283b690ddf9009e8d64db314c99237e22f6f60bea0ba4c50fb7d893bffa2"}
Jan 26 11:24:47 crc kubenswrapper[4867]: I0126 11:24:47.070674 4867 scope.go:117] "RemoveContainer" containerID="d65fed487f8872774ff9062bdfbd8def8c0c8b8df10dfd3e8160b8df411cdb9b"
Jan 26 11:24:47 crc kubenswrapper[4867]: I0126 11:24:47.070698 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-skdxp"
Jan 26 11:24:47 crc kubenswrapper[4867]: I0126 11:24:47.117407 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-skdxp"]
Jan 26 11:24:47 crc kubenswrapper[4867]: I0126 11:24:47.121468 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-skdxp"]
Jan 26 11:24:48 crc kubenswrapper[4867]: I0126 11:24:48.572617 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b3348ed5-3007-4ff3-b77d-ecb758f238df" path="/var/lib/kubelet/pods/b3348ed5-3007-4ff3-b77d-ecb758f238df/volumes"
Jan 26 11:26:06 crc kubenswrapper[4867]: I0126 11:26:06.294139 4867 patch_prober.go:28] interesting pod/machine-config-daemon-g6cth container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 26 11:26:06 crc kubenswrapper[4867]: I0126 11:26:06.294985 4867 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-g6cth" podUID="115cad9f-057f-4e63-b408-8fa7a358a191" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 26 11:26:36 crc kubenswrapper[4867]: I0126 11:26:36.294106 4867 patch_prober.go:28] interesting pod/machine-config-daemon-g6cth container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 26 11:26:36 crc kubenswrapper[4867]: I0126 11:26:36.295054 4867 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-g6cth" podUID="115cad9f-057f-4e63-b408-8fa7a358a191" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 26 11:27:06 crc kubenswrapper[4867]: I0126 11:27:06.294174 4867 patch_prober.go:28] interesting pod/machine-config-daemon-g6cth container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 26 11:27:06 crc kubenswrapper[4867]: I0126 11:27:06.295028 4867 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-g6cth" podUID="115cad9f-057f-4e63-b408-8fa7a358a191" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 26 11:27:06 crc kubenswrapper[4867]: I0126 11:27:06.295087 4867 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-g6cth"
Jan 26 11:27:06 crc kubenswrapper[4867]: I0126 11:27:06.295786 4867 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"0fe8ca3d314e4d17df3b97806d9aca627e634096754401de141f98cba0b737ca"} pod="openshift-machine-config-operator/machine-config-daemon-g6cth" containerMessage="Container machine-config-daemon failed
liveness probe, will be restarted" Jan 26 11:27:06 crc kubenswrapper[4867]: I0126 11:27:06.295840 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-g6cth" podUID="115cad9f-057f-4e63-b408-8fa7a358a191" containerName="machine-config-daemon" containerID="cri-o://0fe8ca3d314e4d17df3b97806d9aca627e634096754401de141f98cba0b737ca" gracePeriod=600 Jan 26 11:27:06 crc kubenswrapper[4867]: I0126 11:27:06.940053 4867 generic.go:334] "Generic (PLEG): container finished" podID="115cad9f-057f-4e63-b408-8fa7a358a191" containerID="0fe8ca3d314e4d17df3b97806d9aca627e634096754401de141f98cba0b737ca" exitCode=0 Jan 26 11:27:06 crc kubenswrapper[4867]: I0126 11:27:06.940117 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-g6cth" event={"ID":"115cad9f-057f-4e63-b408-8fa7a358a191","Type":"ContainerDied","Data":"0fe8ca3d314e4d17df3b97806d9aca627e634096754401de141f98cba0b737ca"} Jan 26 11:27:06 crc kubenswrapper[4867]: I0126 11:27:06.940721 4867 scope.go:117] "RemoveContainer" containerID="f6f4649a5a6ff9f987b90727334fbb91d637d6ff3f79120bcd4b01a76eef1fb9" Jan 26 11:27:07 crc kubenswrapper[4867]: I0126 11:27:07.953574 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-g6cth" event={"ID":"115cad9f-057f-4e63-b408-8fa7a358a191","Type":"ContainerStarted","Data":"3d80268128b8588b5243ae8da874837feaca71a462cb1a50fe2432786b4b83de"} Jan 26 11:28:36 crc kubenswrapper[4867]: I0126 11:28:36.651602 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-tv8pv"] Jan 26 11:28:36 crc kubenswrapper[4867]: E0126 11:28:36.652797 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b3348ed5-3007-4ff3-b77d-ecb758f238df" containerName="registry" Jan 26 11:28:36 crc kubenswrapper[4867]: I0126 11:28:36.652820 4867 state_mem.go:107] 
"Deleted CPUSet assignment" podUID="b3348ed5-3007-4ff3-b77d-ecb758f238df" containerName="registry" Jan 26 11:28:36 crc kubenswrapper[4867]: I0126 11:28:36.652938 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="b3348ed5-3007-4ff3-b77d-ecb758f238df" containerName="registry" Jan 26 11:28:36 crc kubenswrapper[4867]: I0126 11:28:36.653542 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-cf98fcc89-tv8pv" Jan 26 11:28:36 crc kubenswrapper[4867]: I0126 11:28:36.659495 4867 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-cainjector-dockercfg-725rt" Jan 26 11:28:36 crc kubenswrapper[4867]: I0126 11:28:36.659777 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"openshift-service-ca.crt" Jan 26 11:28:36 crc kubenswrapper[4867]: I0126 11:28:36.659859 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"kube-root-ca.crt" Jan 26 11:28:36 crc kubenswrapper[4867]: I0126 11:28:36.664117 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-858654f9db-2k86r"] Jan 26 11:28:36 crc kubenswrapper[4867]: I0126 11:28:36.665154 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-858654f9db-2k86r" Jan 26 11:28:36 crc kubenswrapper[4867]: I0126 11:28:36.672172 4867 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-dockercfg-5cg8s" Jan 26 11:28:36 crc kubenswrapper[4867]: I0126 11:28:36.676687 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-tv8pv"] Jan 26 11:28:36 crc kubenswrapper[4867]: I0126 11:28:36.681899 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-858654f9db-2k86r"] Jan 26 11:28:36 crc kubenswrapper[4867]: I0126 11:28:36.697025 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-rptrs"] Jan 26 11:28:36 crc kubenswrapper[4867]: I0126 11:28:36.697748 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-webhook-687f57d79b-rptrs" Jan 26 11:28:36 crc kubenswrapper[4867]: I0126 11:28:36.699641 4867 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-webhook-dockercfg-xftsn" Jan 26 11:28:36 crc kubenswrapper[4867]: I0126 11:28:36.719003 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-rptrs"] Jan 26 11:28:36 crc kubenswrapper[4867]: I0126 11:28:36.743345 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jsgzw\" (UniqueName: \"kubernetes.io/projected/a1f3bf88-009a-4dc4-9e17-c3ab0ae08c6a-kube-api-access-jsgzw\") pod \"cert-manager-858654f9db-2k86r\" (UID: \"a1f3bf88-009a-4dc4-9e17-c3ab0ae08c6a\") " pod="cert-manager/cert-manager-858654f9db-2k86r" Jan 26 11:28:36 crc kubenswrapper[4867]: I0126 11:28:36.743506 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8jv5j\" (UniqueName: 
\"kubernetes.io/projected/5f8ac213-4e48-43fd-9cd3-47c1cf8102f2-kube-api-access-8jv5j\") pod \"cert-manager-cainjector-cf98fcc89-tv8pv\" (UID: \"5f8ac213-4e48-43fd-9cd3-47c1cf8102f2\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-tv8pv" Jan 26 11:28:36 crc kubenswrapper[4867]: I0126 11:28:36.743545 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-szcp7\" (UniqueName: \"kubernetes.io/projected/71abfae8-23ae-4ab8-9840-8c34abcbac6a-kube-api-access-szcp7\") pod \"cert-manager-webhook-687f57d79b-rptrs\" (UID: \"71abfae8-23ae-4ab8-9840-8c34abcbac6a\") " pod="cert-manager/cert-manager-webhook-687f57d79b-rptrs" Jan 26 11:28:36 crc kubenswrapper[4867]: I0126 11:28:36.844907 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8jv5j\" (UniqueName: \"kubernetes.io/projected/5f8ac213-4e48-43fd-9cd3-47c1cf8102f2-kube-api-access-8jv5j\") pod \"cert-manager-cainjector-cf98fcc89-tv8pv\" (UID: \"5f8ac213-4e48-43fd-9cd3-47c1cf8102f2\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-tv8pv" Jan 26 11:28:36 crc kubenswrapper[4867]: I0126 11:28:36.844966 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-szcp7\" (UniqueName: \"kubernetes.io/projected/71abfae8-23ae-4ab8-9840-8c34abcbac6a-kube-api-access-szcp7\") pod \"cert-manager-webhook-687f57d79b-rptrs\" (UID: \"71abfae8-23ae-4ab8-9840-8c34abcbac6a\") " pod="cert-manager/cert-manager-webhook-687f57d79b-rptrs" Jan 26 11:28:36 crc kubenswrapper[4867]: I0126 11:28:36.845006 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jsgzw\" (UniqueName: \"kubernetes.io/projected/a1f3bf88-009a-4dc4-9e17-c3ab0ae08c6a-kube-api-access-jsgzw\") pod \"cert-manager-858654f9db-2k86r\" (UID: \"a1f3bf88-009a-4dc4-9e17-c3ab0ae08c6a\") " pod="cert-manager/cert-manager-858654f9db-2k86r" Jan 26 11:28:36 crc kubenswrapper[4867]: I0126 
11:28:36.866027 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jsgzw\" (UniqueName: \"kubernetes.io/projected/a1f3bf88-009a-4dc4-9e17-c3ab0ae08c6a-kube-api-access-jsgzw\") pod \"cert-manager-858654f9db-2k86r\" (UID: \"a1f3bf88-009a-4dc4-9e17-c3ab0ae08c6a\") " pod="cert-manager/cert-manager-858654f9db-2k86r" Jan 26 11:28:36 crc kubenswrapper[4867]: I0126 11:28:36.866698 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8jv5j\" (UniqueName: \"kubernetes.io/projected/5f8ac213-4e48-43fd-9cd3-47c1cf8102f2-kube-api-access-8jv5j\") pod \"cert-manager-cainjector-cf98fcc89-tv8pv\" (UID: \"5f8ac213-4e48-43fd-9cd3-47c1cf8102f2\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-tv8pv" Jan 26 11:28:36 crc kubenswrapper[4867]: I0126 11:28:36.871095 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-szcp7\" (UniqueName: \"kubernetes.io/projected/71abfae8-23ae-4ab8-9840-8c34abcbac6a-kube-api-access-szcp7\") pod \"cert-manager-webhook-687f57d79b-rptrs\" (UID: \"71abfae8-23ae-4ab8-9840-8c34abcbac6a\") " pod="cert-manager/cert-manager-webhook-687f57d79b-rptrs" Jan 26 11:28:36 crc kubenswrapper[4867]: I0126 11:28:36.980265 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-cf98fcc89-tv8pv" Jan 26 11:28:36 crc kubenswrapper[4867]: I0126 11:28:36.987516 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-858654f9db-2k86r" Jan 26 11:28:37 crc kubenswrapper[4867]: I0126 11:28:37.015729 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-webhook-687f57d79b-rptrs" Jan 26 11:28:37 crc kubenswrapper[4867]: I0126 11:28:37.242872 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-tv8pv"] Jan 26 11:28:37 crc kubenswrapper[4867]: I0126 11:28:37.248967 4867 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 26 11:28:37 crc kubenswrapper[4867]: I0126 11:28:37.278751 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-858654f9db-2k86r"] Jan 26 11:28:37 crc kubenswrapper[4867]: W0126 11:28:37.286435 4867 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda1f3bf88_009a_4dc4_9e17_c3ab0ae08c6a.slice/crio-fea0fddb355928117fdfbbc4521b7ffee8e9f8b4258d1ea167e2596919e82097 WatchSource:0}: Error finding container fea0fddb355928117fdfbbc4521b7ffee8e9f8b4258d1ea167e2596919e82097: Status 404 returned error can't find the container with id fea0fddb355928117fdfbbc4521b7ffee8e9f8b4258d1ea167e2596919e82097 Jan 26 11:28:37 crc kubenswrapper[4867]: I0126 11:28:37.308214 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-rptrs"] Jan 26 11:28:37 crc kubenswrapper[4867]: W0126 11:28:37.311478 4867 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod71abfae8_23ae_4ab8_9840_8c34abcbac6a.slice/crio-93b05d9382642fa8ed58907e7fc29e3db74914ec1e72a050b598e119ddcb02ad WatchSource:0}: Error finding container 93b05d9382642fa8ed58907e7fc29e3db74914ec1e72a050b598e119ddcb02ad: Status 404 returned error can't find the container with id 93b05d9382642fa8ed58907e7fc29e3db74914ec1e72a050b598e119ddcb02ad Jan 26 11:28:37 crc kubenswrapper[4867]: I0126 11:28:37.608273 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="cert-manager/cert-manager-webhook-687f57d79b-rptrs" event={"ID":"71abfae8-23ae-4ab8-9840-8c34abcbac6a","Type":"ContainerStarted","Data":"93b05d9382642fa8ed58907e7fc29e3db74914ec1e72a050b598e119ddcb02ad"} Jan 26 11:28:37 crc kubenswrapper[4867]: I0126 11:28:37.609104 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-cf98fcc89-tv8pv" event={"ID":"5f8ac213-4e48-43fd-9cd3-47c1cf8102f2","Type":"ContainerStarted","Data":"f8e5dd78127e5b521f7e0aa8e769a88bcf4dea065fcd00cc9f110cb7504d7dc6"} Jan 26 11:28:37 crc kubenswrapper[4867]: I0126 11:28:37.610804 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-858654f9db-2k86r" event={"ID":"a1f3bf88-009a-4dc4-9e17-c3ab0ae08c6a","Type":"ContainerStarted","Data":"fea0fddb355928117fdfbbc4521b7ffee8e9f8b4258d1ea167e2596919e82097"} Jan 26 11:28:40 crc kubenswrapper[4867]: I0126 11:28:40.629439 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-cf98fcc89-tv8pv" event={"ID":"5f8ac213-4e48-43fd-9cd3-47c1cf8102f2","Type":"ContainerStarted","Data":"64660f4a2280fc1abbb1e7be6e3919d49cdb735025490fa8ff5d46401eda8eb2"} Jan 26 11:28:40 crc kubenswrapper[4867]: I0126 11:28:40.651412 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-cainjector-cf98fcc89-tv8pv" podStartSLOduration=2.32223055 podStartE2EDuration="4.651386681s" podCreationTimestamp="2026-01-26 11:28:36 +0000 UTC" firstStartedPulling="2026-01-26 11:28:37.248713802 +0000 UTC m=+666.947288702" lastFinishedPulling="2026-01-26 11:28:39.577869903 +0000 UTC m=+669.276444833" observedRunningTime="2026-01-26 11:28:40.646921899 +0000 UTC m=+670.345496809" watchObservedRunningTime="2026-01-26 11:28:40.651386681 +0000 UTC m=+670.349961591" Jan 26 11:28:42 crc kubenswrapper[4867]: I0126 11:28:42.644371 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-687f57d79b-rptrs" 
event={"ID":"71abfae8-23ae-4ab8-9840-8c34abcbac6a","Type":"ContainerStarted","Data":"7fe580aab3a02dc552bff728a109ac4f48fcf5941f7b70260f789aabec5bde76"} Jan 26 11:28:42 crc kubenswrapper[4867]: I0126 11:28:42.644934 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="cert-manager/cert-manager-webhook-687f57d79b-rptrs" Jan 26 11:28:42 crc kubenswrapper[4867]: I0126 11:28:42.648389 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-858654f9db-2k86r" event={"ID":"a1f3bf88-009a-4dc4-9e17-c3ab0ae08c6a","Type":"ContainerStarted","Data":"1c0d7824fd112eb843be7ff70b730802cc019683fa420bcb5cc1f2c54d8bb852"} Jan 26 11:28:42 crc kubenswrapper[4867]: I0126 11:28:42.678535 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-webhook-687f57d79b-rptrs" podStartSLOduration=2.40216326 podStartE2EDuration="6.678501208s" podCreationTimestamp="2026-01-26 11:28:36 +0000 UTC" firstStartedPulling="2026-01-26 11:28:37.313491528 +0000 UTC m=+667.012066438" lastFinishedPulling="2026-01-26 11:28:41.589829476 +0000 UTC m=+671.288404386" observedRunningTime="2026-01-26 11:28:42.674100807 +0000 UTC m=+672.372675747" watchObservedRunningTime="2026-01-26 11:28:42.678501208 +0000 UTC m=+672.377076128" Jan 26 11:28:42 crc kubenswrapper[4867]: I0126 11:28:42.708040 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-858654f9db-2k86r" podStartSLOduration=2.34421301 podStartE2EDuration="6.708016242s" podCreationTimestamp="2026-01-26 11:28:36 +0000 UTC" firstStartedPulling="2026-01-26 11:28:37.288776555 +0000 UTC m=+666.987351465" lastFinishedPulling="2026-01-26 11:28:41.652579777 +0000 UTC m=+671.351154697" observedRunningTime="2026-01-26 11:28:42.704580038 +0000 UTC m=+672.403154958" watchObservedRunningTime="2026-01-26 11:28:42.708016242 +0000 UTC m=+672.406591162" Jan 26 11:28:46 crc kubenswrapper[4867]: I0126 11:28:46.037608 4867 
kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-p8ngn"] Jan 26 11:28:46 crc kubenswrapper[4867]: I0126 11:28:46.038317 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-p8ngn" podUID="4a3be637-cf04-4c55-bf72-67fdad83cc44" containerName="ovn-controller" containerID="cri-o://192f11212d63cdabfda1649589946ca65e5c4a38805c85b94e4277f2e8c7bf57" gracePeriod=30 Jan 26 11:28:46 crc kubenswrapper[4867]: I0126 11:28:46.038465 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-p8ngn" podUID="4a3be637-cf04-4c55-bf72-67fdad83cc44" containerName="northd" containerID="cri-o://30fe31f1d314dace6e6ee0bc31b2127ce6e6db198975c78505b0f6f07aa6f30c" gracePeriod=30 Jan 26 11:28:46 crc kubenswrapper[4867]: I0126 11:28:46.038411 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-p8ngn" podUID="4a3be637-cf04-4c55-bf72-67fdad83cc44" containerName="nbdb" containerID="cri-o://adb68b135bd42f78556598af17aa21dc7daff1246e821ca51cb683e46974a85a" gracePeriod=30 Jan 26 11:28:46 crc kubenswrapper[4867]: I0126 11:28:46.038537 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-p8ngn" podUID="4a3be637-cf04-4c55-bf72-67fdad83cc44" containerName="sbdb" containerID="cri-o://b58f62893fb0fc0bd355ef42fb5f15830912cfd3a03b5266cb3c3171dbb8bcb0" gracePeriod=30 Jan 26 11:28:46 crc kubenswrapper[4867]: I0126 11:28:46.038516 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-p8ngn" podUID="4a3be637-cf04-4c55-bf72-67fdad83cc44" containerName="kube-rbac-proxy-ovn-metrics" containerID="cri-o://d9025121ad416b7b6ff2f2eb7ecb3961eccbbae7b5ce4b394161ad99c5901199" gracePeriod=30 Jan 26 11:28:46 crc kubenswrapper[4867]: I0126 11:28:46.038530 4867 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-p8ngn" podUID="4a3be637-cf04-4c55-bf72-67fdad83cc44" containerName="kube-rbac-proxy-node" containerID="cri-o://c0a7bea27cb1d32235548b7f7e457dbecaf0ac88bdd9866302c758680990d4af" gracePeriod=30 Jan 26 11:28:46 crc kubenswrapper[4867]: I0126 11:28:46.038541 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-p8ngn" podUID="4a3be637-cf04-4c55-bf72-67fdad83cc44" containerName="ovn-acl-logging" containerID="cri-o://99df8b0b5fa7178489716f40f806caff969c5045ab573ae343b222ed80bf0799" gracePeriod=30 Jan 26 11:28:46 crc kubenswrapper[4867]: I0126 11:28:46.073777 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-p8ngn" podUID="4a3be637-cf04-4c55-bf72-67fdad83cc44" containerName="ovnkube-controller" containerID="cri-o://1c6e65025a884869b644ff9fad0c22c39ffb69a73f335c11f33433b010a8e2dc" gracePeriod=30 Jan 26 11:28:46 crc kubenswrapper[4867]: I0126 11:28:46.381862 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-p8ngn_4a3be637-cf04-4c55-bf72-67fdad83cc44/ovnkube-controller/3.log" Jan 26 11:28:46 crc kubenswrapper[4867]: I0126 11:28:46.384485 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-p8ngn_4a3be637-cf04-4c55-bf72-67fdad83cc44/ovn-acl-logging/0.log" Jan 26 11:28:46 crc kubenswrapper[4867]: I0126 11:28:46.385065 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-p8ngn_4a3be637-cf04-4c55-bf72-67fdad83cc44/ovn-controller/0.log" Jan 26 11:28:46 crc kubenswrapper[4867]: I0126 11:28:46.385553 4867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-p8ngn" Jan 26 11:28:46 crc kubenswrapper[4867]: I0126 11:28:46.440870 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-55w5m"] Jan 26 11:28:46 crc kubenswrapper[4867]: E0126 11:28:46.441112 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4a3be637-cf04-4c55-bf72-67fdad83cc44" containerName="ovn-acl-logging" Jan 26 11:28:46 crc kubenswrapper[4867]: I0126 11:28:46.441126 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="4a3be637-cf04-4c55-bf72-67fdad83cc44" containerName="ovn-acl-logging" Jan 26 11:28:46 crc kubenswrapper[4867]: E0126 11:28:46.441134 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4a3be637-cf04-4c55-bf72-67fdad83cc44" containerName="sbdb" Jan 26 11:28:46 crc kubenswrapper[4867]: I0126 11:28:46.441140 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="4a3be637-cf04-4c55-bf72-67fdad83cc44" containerName="sbdb" Jan 26 11:28:46 crc kubenswrapper[4867]: E0126 11:28:46.441150 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4a3be637-cf04-4c55-bf72-67fdad83cc44" containerName="ovnkube-controller" Jan 26 11:28:46 crc kubenswrapper[4867]: I0126 11:28:46.441156 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="4a3be637-cf04-4c55-bf72-67fdad83cc44" containerName="ovnkube-controller" Jan 26 11:28:46 crc kubenswrapper[4867]: E0126 11:28:46.441163 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4a3be637-cf04-4c55-bf72-67fdad83cc44" containerName="kube-rbac-proxy-ovn-metrics" Jan 26 11:28:46 crc kubenswrapper[4867]: I0126 11:28:46.441169 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="4a3be637-cf04-4c55-bf72-67fdad83cc44" containerName="kube-rbac-proxy-ovn-metrics" Jan 26 11:28:46 crc kubenswrapper[4867]: E0126 11:28:46.441176 4867 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="4a3be637-cf04-4c55-bf72-67fdad83cc44" containerName="ovnkube-controller" Jan 26 11:28:46 crc kubenswrapper[4867]: I0126 11:28:46.441181 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="4a3be637-cf04-4c55-bf72-67fdad83cc44" containerName="ovnkube-controller" Jan 26 11:28:46 crc kubenswrapper[4867]: E0126 11:28:46.441193 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4a3be637-cf04-4c55-bf72-67fdad83cc44" containerName="kube-rbac-proxy-node" Jan 26 11:28:46 crc kubenswrapper[4867]: I0126 11:28:46.441200 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="4a3be637-cf04-4c55-bf72-67fdad83cc44" containerName="kube-rbac-proxy-node" Jan 26 11:28:46 crc kubenswrapper[4867]: E0126 11:28:46.441207 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4a3be637-cf04-4c55-bf72-67fdad83cc44" containerName="ovnkube-controller" Jan 26 11:28:46 crc kubenswrapper[4867]: I0126 11:28:46.441213 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="4a3be637-cf04-4c55-bf72-67fdad83cc44" containerName="ovnkube-controller" Jan 26 11:28:46 crc kubenswrapper[4867]: E0126 11:28:46.441241 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4a3be637-cf04-4c55-bf72-67fdad83cc44" containerName="nbdb" Jan 26 11:28:46 crc kubenswrapper[4867]: I0126 11:28:46.441247 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="4a3be637-cf04-4c55-bf72-67fdad83cc44" containerName="nbdb" Jan 26 11:28:46 crc kubenswrapper[4867]: E0126 11:28:46.441257 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4a3be637-cf04-4c55-bf72-67fdad83cc44" containerName="kubecfg-setup" Jan 26 11:28:46 crc kubenswrapper[4867]: I0126 11:28:46.441264 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="4a3be637-cf04-4c55-bf72-67fdad83cc44" containerName="kubecfg-setup" Jan 26 11:28:46 crc kubenswrapper[4867]: E0126 11:28:46.441278 4867 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="4a3be637-cf04-4c55-bf72-67fdad83cc44" containerName="ovn-controller" Jan 26 11:28:46 crc kubenswrapper[4867]: I0126 11:28:46.441284 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="4a3be637-cf04-4c55-bf72-67fdad83cc44" containerName="ovn-controller" Jan 26 11:28:46 crc kubenswrapper[4867]: E0126 11:28:46.441294 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4a3be637-cf04-4c55-bf72-67fdad83cc44" containerName="northd" Jan 26 11:28:46 crc kubenswrapper[4867]: I0126 11:28:46.441300 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="4a3be637-cf04-4c55-bf72-67fdad83cc44" containerName="northd" Jan 26 11:28:46 crc kubenswrapper[4867]: I0126 11:28:46.441396 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="4a3be637-cf04-4c55-bf72-67fdad83cc44" containerName="ovnkube-controller" Jan 26 11:28:46 crc kubenswrapper[4867]: I0126 11:28:46.441406 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="4a3be637-cf04-4c55-bf72-67fdad83cc44" containerName="ovnkube-controller" Jan 26 11:28:46 crc kubenswrapper[4867]: I0126 11:28:46.441414 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="4a3be637-cf04-4c55-bf72-67fdad83cc44" containerName="ovn-acl-logging" Jan 26 11:28:46 crc kubenswrapper[4867]: I0126 11:28:46.441421 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="4a3be637-cf04-4c55-bf72-67fdad83cc44" containerName="sbdb" Jan 26 11:28:46 crc kubenswrapper[4867]: I0126 11:28:46.441429 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="4a3be637-cf04-4c55-bf72-67fdad83cc44" containerName="ovnkube-controller" Jan 26 11:28:46 crc kubenswrapper[4867]: I0126 11:28:46.441435 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="4a3be637-cf04-4c55-bf72-67fdad83cc44" containerName="ovn-controller" Jan 26 11:28:46 crc kubenswrapper[4867]: I0126 11:28:46.441442 4867 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="4a3be637-cf04-4c55-bf72-67fdad83cc44" containerName="nbdb" Jan 26 11:28:46 crc kubenswrapper[4867]: I0126 11:28:46.441447 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="4a3be637-cf04-4c55-bf72-67fdad83cc44" containerName="ovnkube-controller" Jan 26 11:28:46 crc kubenswrapper[4867]: I0126 11:28:46.441456 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="4a3be637-cf04-4c55-bf72-67fdad83cc44" containerName="kube-rbac-proxy-node" Jan 26 11:28:46 crc kubenswrapper[4867]: I0126 11:28:46.441463 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="4a3be637-cf04-4c55-bf72-67fdad83cc44" containerName="kube-rbac-proxy-ovn-metrics" Jan 26 11:28:46 crc kubenswrapper[4867]: I0126 11:28:46.441473 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="4a3be637-cf04-4c55-bf72-67fdad83cc44" containerName="northd" Jan 26 11:28:46 crc kubenswrapper[4867]: E0126 11:28:46.441560 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4a3be637-cf04-4c55-bf72-67fdad83cc44" containerName="ovnkube-controller" Jan 26 11:28:46 crc kubenswrapper[4867]: I0126 11:28:46.441567 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="4a3be637-cf04-4c55-bf72-67fdad83cc44" containerName="ovnkube-controller" Jan 26 11:28:46 crc kubenswrapper[4867]: I0126 11:28:46.441677 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="4a3be637-cf04-4c55-bf72-67fdad83cc44" containerName="ovnkube-controller" Jan 26 11:28:46 crc kubenswrapper[4867]: E0126 11:28:46.441797 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4a3be637-cf04-4c55-bf72-67fdad83cc44" containerName="ovnkube-controller" Jan 26 11:28:46 crc kubenswrapper[4867]: I0126 11:28:46.441807 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="4a3be637-cf04-4c55-bf72-67fdad83cc44" containerName="ovnkube-controller" Jan 26 11:28:46 crc kubenswrapper[4867]: I0126 11:28:46.443448 4867 util.go:30] "No sandbox for 
pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-55w5m" Jan 26 11:28:46 crc kubenswrapper[4867]: I0126 11:28:46.492383 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/4a3be637-cf04-4c55-bf72-67fdad83cc44-host-run-ovn-kubernetes\") pod \"4a3be637-cf04-4c55-bf72-67fdad83cc44\" (UID: \"4a3be637-cf04-4c55-bf72-67fdad83cc44\") " Jan 26 11:28:46 crc kubenswrapper[4867]: I0126 11:28:46.492904 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/4a3be637-cf04-4c55-bf72-67fdad83cc44-run-systemd\") pod \"4a3be637-cf04-4c55-bf72-67fdad83cc44\" (UID: \"4a3be637-cf04-4c55-bf72-67fdad83cc44\") " Jan 26 11:28:46 crc kubenswrapper[4867]: I0126 11:28:46.492536 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4a3be637-cf04-4c55-bf72-67fdad83cc44-host-run-ovn-kubernetes" (OuterVolumeSpecName: "host-run-ovn-kubernetes") pod "4a3be637-cf04-4c55-bf72-67fdad83cc44" (UID: "4a3be637-cf04-4c55-bf72-67fdad83cc44"). InnerVolumeSpecName "host-run-ovn-kubernetes". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 11:28:46 crc kubenswrapper[4867]: I0126 11:28:46.492940 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/4a3be637-cf04-4c55-bf72-67fdad83cc44-var-lib-openvswitch\") pod \"4a3be637-cf04-4c55-bf72-67fdad83cc44\" (UID: \"4a3be637-cf04-4c55-bf72-67fdad83cc44\") " Jan 26 11:28:46 crc kubenswrapper[4867]: I0126 11:28:46.492967 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/4a3be637-cf04-4c55-bf72-67fdad83cc44-node-log\") pod \"4a3be637-cf04-4c55-bf72-67fdad83cc44\" (UID: \"4a3be637-cf04-4c55-bf72-67fdad83cc44\") " Jan 26 11:28:46 crc kubenswrapper[4867]: I0126 11:28:46.492985 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/4a3be637-cf04-4c55-bf72-67fdad83cc44-host-run-netns\") pod \"4a3be637-cf04-4c55-bf72-67fdad83cc44\" (UID: \"4a3be637-cf04-4c55-bf72-67fdad83cc44\") " Jan 26 11:28:46 crc kubenswrapper[4867]: I0126 11:28:46.493016 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/4a3be637-cf04-4c55-bf72-67fdad83cc44-env-overrides\") pod \"4a3be637-cf04-4c55-bf72-67fdad83cc44\" (UID: \"4a3be637-cf04-4c55-bf72-67fdad83cc44\") " Jan 26 11:28:46 crc kubenswrapper[4867]: I0126 11:28:46.493042 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/4a3be637-cf04-4c55-bf72-67fdad83cc44-run-openvswitch\") pod \"4a3be637-cf04-4c55-bf72-67fdad83cc44\" (UID: \"4a3be637-cf04-4c55-bf72-67fdad83cc44\") " Jan 26 11:28:46 crc kubenswrapper[4867]: I0126 11:28:46.493046 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/host-path/4a3be637-cf04-4c55-bf72-67fdad83cc44-var-lib-openvswitch" (OuterVolumeSpecName: "var-lib-openvswitch") pod "4a3be637-cf04-4c55-bf72-67fdad83cc44" (UID: "4a3be637-cf04-4c55-bf72-67fdad83cc44"). InnerVolumeSpecName "var-lib-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 11:28:46 crc kubenswrapper[4867]: I0126 11:28:46.493078 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/4a3be637-cf04-4c55-bf72-67fdad83cc44-host-kubelet\") pod \"4a3be637-cf04-4c55-bf72-67fdad83cc44\" (UID: \"4a3be637-cf04-4c55-bf72-67fdad83cc44\") " Jan 26 11:28:46 crc kubenswrapper[4867]: I0126 11:28:46.493101 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/4a3be637-cf04-4c55-bf72-67fdad83cc44-host-slash\") pod \"4a3be637-cf04-4c55-bf72-67fdad83cc44\" (UID: \"4a3be637-cf04-4c55-bf72-67fdad83cc44\") " Jan 26 11:28:46 crc kubenswrapper[4867]: I0126 11:28:46.493137 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/4a3be637-cf04-4c55-bf72-67fdad83cc44-log-socket\") pod \"4a3be637-cf04-4c55-bf72-67fdad83cc44\" (UID: \"4a3be637-cf04-4c55-bf72-67fdad83cc44\") " Jan 26 11:28:46 crc kubenswrapper[4867]: I0126 11:28:46.493128 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4a3be637-cf04-4c55-bf72-67fdad83cc44-host-run-netns" (OuterVolumeSpecName: "host-run-netns") pod "4a3be637-cf04-4c55-bf72-67fdad83cc44" (UID: "4a3be637-cf04-4c55-bf72-67fdad83cc44"). InnerVolumeSpecName "host-run-netns". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 11:28:46 crc kubenswrapper[4867]: I0126 11:28:46.493161 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/4a3be637-cf04-4c55-bf72-67fdad83cc44-ovnkube-script-lib\") pod \"4a3be637-cf04-4c55-bf72-67fdad83cc44\" (UID: \"4a3be637-cf04-4c55-bf72-67fdad83cc44\") " Jan 26 11:28:46 crc kubenswrapper[4867]: I0126 11:28:46.493186 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/4a3be637-cf04-4c55-bf72-67fdad83cc44-ovnkube-config\") pod \"4a3be637-cf04-4c55-bf72-67fdad83cc44\" (UID: \"4a3be637-cf04-4c55-bf72-67fdad83cc44\") " Jan 26 11:28:46 crc kubenswrapper[4867]: I0126 11:28:46.493198 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4a3be637-cf04-4c55-bf72-67fdad83cc44-node-log" (OuterVolumeSpecName: "node-log") pod "4a3be637-cf04-4c55-bf72-67fdad83cc44" (UID: "4a3be637-cf04-4c55-bf72-67fdad83cc44"). InnerVolumeSpecName "node-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 11:28:46 crc kubenswrapper[4867]: I0126 11:28:46.493213 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cn6f7\" (UniqueName: \"kubernetes.io/projected/4a3be637-cf04-4c55-bf72-67fdad83cc44-kube-api-access-cn6f7\") pod \"4a3be637-cf04-4c55-bf72-67fdad83cc44\" (UID: \"4a3be637-cf04-4c55-bf72-67fdad83cc44\") " Jan 26 11:28:46 crc kubenswrapper[4867]: I0126 11:28:46.493243 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4a3be637-cf04-4c55-bf72-67fdad83cc44-log-socket" (OuterVolumeSpecName: "log-socket") pod "4a3be637-cf04-4c55-bf72-67fdad83cc44" (UID: "4a3be637-cf04-4c55-bf72-67fdad83cc44"). InnerVolumeSpecName "log-socket". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 11:28:46 crc kubenswrapper[4867]: I0126 11:28:46.493250 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/4a3be637-cf04-4c55-bf72-67fdad83cc44-systemd-units\") pod \"4a3be637-cf04-4c55-bf72-67fdad83cc44\" (UID: \"4a3be637-cf04-4c55-bf72-67fdad83cc44\") " Jan 26 11:28:46 crc kubenswrapper[4867]: I0126 11:28:46.493269 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4a3be637-cf04-4c55-bf72-67fdad83cc44-host-kubelet" (OuterVolumeSpecName: "host-kubelet") pod "4a3be637-cf04-4c55-bf72-67fdad83cc44" (UID: "4a3be637-cf04-4c55-bf72-67fdad83cc44"). InnerVolumeSpecName "host-kubelet". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 11:28:46 crc kubenswrapper[4867]: I0126 11:28:46.493273 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/4a3be637-cf04-4c55-bf72-67fdad83cc44-etc-openvswitch\") pod \"4a3be637-cf04-4c55-bf72-67fdad83cc44\" (UID: \"4a3be637-cf04-4c55-bf72-67fdad83cc44\") " Jan 26 11:28:46 crc kubenswrapper[4867]: I0126 11:28:46.493298 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4a3be637-cf04-4c55-bf72-67fdad83cc44-host-slash" (OuterVolumeSpecName: "host-slash") pod "4a3be637-cf04-4c55-bf72-67fdad83cc44" (UID: "4a3be637-cf04-4c55-bf72-67fdad83cc44"). InnerVolumeSpecName "host-slash". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 11:28:46 crc kubenswrapper[4867]: I0126 11:28:46.493301 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/4a3be637-cf04-4c55-bf72-67fdad83cc44-ovn-node-metrics-cert\") pod \"4a3be637-cf04-4c55-bf72-67fdad83cc44\" (UID: \"4a3be637-cf04-4c55-bf72-67fdad83cc44\") " Jan 26 11:28:46 crc kubenswrapper[4867]: I0126 11:28:46.493282 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4a3be637-cf04-4c55-bf72-67fdad83cc44-run-openvswitch" (OuterVolumeSpecName: "run-openvswitch") pod "4a3be637-cf04-4c55-bf72-67fdad83cc44" (UID: "4a3be637-cf04-4c55-bf72-67fdad83cc44"). InnerVolumeSpecName "run-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 11:28:46 crc kubenswrapper[4867]: I0126 11:28:46.493341 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/4a3be637-cf04-4c55-bf72-67fdad83cc44-host-cni-netd\") pod \"4a3be637-cf04-4c55-bf72-67fdad83cc44\" (UID: \"4a3be637-cf04-4c55-bf72-67fdad83cc44\") " Jan 26 11:28:46 crc kubenswrapper[4867]: I0126 11:28:46.493367 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/4a3be637-cf04-4c55-bf72-67fdad83cc44-run-ovn\") pod \"4a3be637-cf04-4c55-bf72-67fdad83cc44\" (UID: \"4a3be637-cf04-4c55-bf72-67fdad83cc44\") " Jan 26 11:28:46 crc kubenswrapper[4867]: I0126 11:28:46.493383 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/4a3be637-cf04-4c55-bf72-67fdad83cc44-host-var-lib-cni-networks-ovn-kubernetes\") pod \"4a3be637-cf04-4c55-bf72-67fdad83cc44\" (UID: \"4a3be637-cf04-4c55-bf72-67fdad83cc44\") " Jan 26 11:28:46 crc 
kubenswrapper[4867]: I0126 11:28:46.493408 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/4a3be637-cf04-4c55-bf72-67fdad83cc44-host-cni-bin\") pod \"4a3be637-cf04-4c55-bf72-67fdad83cc44\" (UID: \"4a3be637-cf04-4c55-bf72-67fdad83cc44\") " Jan 26 11:28:46 crc kubenswrapper[4867]: I0126 11:28:46.493531 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4a3be637-cf04-4c55-bf72-67fdad83cc44-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "4a3be637-cf04-4c55-bf72-67fdad83cc44" (UID: "4a3be637-cf04-4c55-bf72-67fdad83cc44"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 11:28:46 crc kubenswrapper[4867]: I0126 11:28:46.493559 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/5afcc9eb-1b45-472f-8df2-6b8ed84db736-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-55w5m\" (UID: \"5afcc9eb-1b45-472f-8df2-6b8ed84db736\") " pod="openshift-ovn-kubernetes/ovnkube-node-55w5m" Jan 26 11:28:46 crc kubenswrapper[4867]: I0126 11:28:46.493593 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/5afcc9eb-1b45-472f-8df2-6b8ed84db736-run-ovn\") pod \"ovnkube-node-55w5m\" (UID: \"5afcc9eb-1b45-472f-8df2-6b8ed84db736\") " pod="openshift-ovn-kubernetes/ovnkube-node-55w5m" Jan 26 11:28:46 crc kubenswrapper[4867]: I0126 11:28:46.493615 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/5afcc9eb-1b45-472f-8df2-6b8ed84db736-host-slash\") pod \"ovnkube-node-55w5m\" (UID: \"5afcc9eb-1b45-472f-8df2-6b8ed84db736\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-55w5m" Jan 26 11:28:46 crc kubenswrapper[4867]: I0126 11:28:46.493632 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/5afcc9eb-1b45-472f-8df2-6b8ed84db736-node-log\") pod \"ovnkube-node-55w5m\" (UID: \"5afcc9eb-1b45-472f-8df2-6b8ed84db736\") " pod="openshift-ovn-kubernetes/ovnkube-node-55w5m" Jan 26 11:28:46 crc kubenswrapper[4867]: I0126 11:28:46.493647 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/5afcc9eb-1b45-472f-8df2-6b8ed84db736-host-run-ovn-kubernetes\") pod \"ovnkube-node-55w5m\" (UID: \"5afcc9eb-1b45-472f-8df2-6b8ed84db736\") " pod="openshift-ovn-kubernetes/ovnkube-node-55w5m" Jan 26 11:28:46 crc kubenswrapper[4867]: I0126 11:28:46.493663 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/5afcc9eb-1b45-472f-8df2-6b8ed84db736-host-cni-netd\") pod \"ovnkube-node-55w5m\" (UID: \"5afcc9eb-1b45-472f-8df2-6b8ed84db736\") " pod="openshift-ovn-kubernetes/ovnkube-node-55w5m" Jan 26 11:28:46 crc kubenswrapper[4867]: I0126 11:28:46.493687 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/5afcc9eb-1b45-472f-8df2-6b8ed84db736-ovn-node-metrics-cert\") pod \"ovnkube-node-55w5m\" (UID: \"5afcc9eb-1b45-472f-8df2-6b8ed84db736\") " pod="openshift-ovn-kubernetes/ovnkube-node-55w5m" Jan 26 11:28:46 crc kubenswrapper[4867]: I0126 11:28:46.493710 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/5afcc9eb-1b45-472f-8df2-6b8ed84db736-var-lib-openvswitch\") pod \"ovnkube-node-55w5m\" (UID: 
\"5afcc9eb-1b45-472f-8df2-6b8ed84db736\") " pod="openshift-ovn-kubernetes/ovnkube-node-55w5m" Jan 26 11:28:46 crc kubenswrapper[4867]: I0126 11:28:46.493726 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/5afcc9eb-1b45-472f-8df2-6b8ed84db736-log-socket\") pod \"ovnkube-node-55w5m\" (UID: \"5afcc9eb-1b45-472f-8df2-6b8ed84db736\") " pod="openshift-ovn-kubernetes/ovnkube-node-55w5m" Jan 26 11:28:46 crc kubenswrapper[4867]: I0126 11:28:46.493742 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/5afcc9eb-1b45-472f-8df2-6b8ed84db736-etc-openvswitch\") pod \"ovnkube-node-55w5m\" (UID: \"5afcc9eb-1b45-472f-8df2-6b8ed84db736\") " pod="openshift-ovn-kubernetes/ovnkube-node-55w5m" Jan 26 11:28:46 crc kubenswrapper[4867]: I0126 11:28:46.493772 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/5afcc9eb-1b45-472f-8df2-6b8ed84db736-systemd-units\") pod \"ovnkube-node-55w5m\" (UID: \"5afcc9eb-1b45-472f-8df2-6b8ed84db736\") " pod="openshift-ovn-kubernetes/ovnkube-node-55w5m" Jan 26 11:28:46 crc kubenswrapper[4867]: I0126 11:28:46.493790 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/5afcc9eb-1b45-472f-8df2-6b8ed84db736-run-systemd\") pod \"ovnkube-node-55w5m\" (UID: \"5afcc9eb-1b45-472f-8df2-6b8ed84db736\") " pod="openshift-ovn-kubernetes/ovnkube-node-55w5m" Jan 26 11:28:46 crc kubenswrapper[4867]: I0126 11:28:46.493812 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/5afcc9eb-1b45-472f-8df2-6b8ed84db736-host-run-netns\") pod \"ovnkube-node-55w5m\" 
(UID: \"5afcc9eb-1b45-472f-8df2-6b8ed84db736\") " pod="openshift-ovn-kubernetes/ovnkube-node-55w5m" Jan 26 11:28:46 crc kubenswrapper[4867]: I0126 11:28:46.493828 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/5afcc9eb-1b45-472f-8df2-6b8ed84db736-ovnkube-config\") pod \"ovnkube-node-55w5m\" (UID: \"5afcc9eb-1b45-472f-8df2-6b8ed84db736\") " pod="openshift-ovn-kubernetes/ovnkube-node-55w5m" Jan 26 11:28:46 crc kubenswrapper[4867]: I0126 11:28:46.493854 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6zvsv\" (UniqueName: \"kubernetes.io/projected/5afcc9eb-1b45-472f-8df2-6b8ed84db736-kube-api-access-6zvsv\") pod \"ovnkube-node-55w5m\" (UID: \"5afcc9eb-1b45-472f-8df2-6b8ed84db736\") " pod="openshift-ovn-kubernetes/ovnkube-node-55w5m" Jan 26 11:28:46 crc kubenswrapper[4867]: I0126 11:28:46.493874 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/5afcc9eb-1b45-472f-8df2-6b8ed84db736-run-openvswitch\") pod \"ovnkube-node-55w5m\" (UID: \"5afcc9eb-1b45-472f-8df2-6b8ed84db736\") " pod="openshift-ovn-kubernetes/ovnkube-node-55w5m" Jan 26 11:28:46 crc kubenswrapper[4867]: I0126 11:28:46.493893 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/5afcc9eb-1b45-472f-8df2-6b8ed84db736-ovnkube-script-lib\") pod \"ovnkube-node-55w5m\" (UID: \"5afcc9eb-1b45-472f-8df2-6b8ed84db736\") " pod="openshift-ovn-kubernetes/ovnkube-node-55w5m" Jan 26 11:28:46 crc kubenswrapper[4867]: I0126 11:28:46.493914 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: 
\"kubernetes.io/host-path/5afcc9eb-1b45-472f-8df2-6b8ed84db736-host-cni-bin\") pod \"ovnkube-node-55w5m\" (UID: \"5afcc9eb-1b45-472f-8df2-6b8ed84db736\") " pod="openshift-ovn-kubernetes/ovnkube-node-55w5m" Jan 26 11:28:46 crc kubenswrapper[4867]: I0126 11:28:46.493939 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/5afcc9eb-1b45-472f-8df2-6b8ed84db736-host-kubelet\") pod \"ovnkube-node-55w5m\" (UID: \"5afcc9eb-1b45-472f-8df2-6b8ed84db736\") " pod="openshift-ovn-kubernetes/ovnkube-node-55w5m" Jan 26 11:28:46 crc kubenswrapper[4867]: I0126 11:28:46.493963 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/5afcc9eb-1b45-472f-8df2-6b8ed84db736-env-overrides\") pod \"ovnkube-node-55w5m\" (UID: \"5afcc9eb-1b45-472f-8df2-6b8ed84db736\") " pod="openshift-ovn-kubernetes/ovnkube-node-55w5m" Jan 26 11:28:46 crc kubenswrapper[4867]: I0126 11:28:46.493999 4867 reconciler_common.go:293] "Volume detached for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/4a3be637-cf04-4c55-bf72-67fdad83cc44-host-run-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Jan 26 11:28:46 crc kubenswrapper[4867]: I0126 11:28:46.494010 4867 reconciler_common.go:293] "Volume detached for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/4a3be637-cf04-4c55-bf72-67fdad83cc44-var-lib-openvswitch\") on node \"crc\" DevicePath \"\"" Jan 26 11:28:46 crc kubenswrapper[4867]: I0126 11:28:46.494020 4867 reconciler_common.go:293] "Volume detached for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/4a3be637-cf04-4c55-bf72-67fdad83cc44-node-log\") on node \"crc\" DevicePath \"\"" Jan 26 11:28:46 crc kubenswrapper[4867]: I0126 11:28:46.494029 4867 reconciler_common.go:293] "Volume detached for volume \"host-run-netns\" (UniqueName: 
\"kubernetes.io/host-path/4a3be637-cf04-4c55-bf72-67fdad83cc44-host-run-netns\") on node \"crc\" DevicePath \"\"" Jan 26 11:28:46 crc kubenswrapper[4867]: I0126 11:28:46.494037 4867 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/4a3be637-cf04-4c55-bf72-67fdad83cc44-env-overrides\") on node \"crc\" DevicePath \"\"" Jan 26 11:28:46 crc kubenswrapper[4867]: I0126 11:28:46.494046 4867 reconciler_common.go:293] "Volume detached for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/4a3be637-cf04-4c55-bf72-67fdad83cc44-run-openvswitch\") on node \"crc\" DevicePath \"\"" Jan 26 11:28:46 crc kubenswrapper[4867]: I0126 11:28:46.494074 4867 reconciler_common.go:293] "Volume detached for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/4a3be637-cf04-4c55-bf72-67fdad83cc44-host-kubelet\") on node \"crc\" DevicePath \"\"" Jan 26 11:28:46 crc kubenswrapper[4867]: I0126 11:28:46.494082 4867 reconciler_common.go:293] "Volume detached for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/4a3be637-cf04-4c55-bf72-67fdad83cc44-host-slash\") on node \"crc\" DevicePath \"\"" Jan 26 11:28:46 crc kubenswrapper[4867]: I0126 11:28:46.494092 4867 reconciler_common.go:293] "Volume detached for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/4a3be637-cf04-4c55-bf72-67fdad83cc44-log-socket\") on node \"crc\" DevicePath \"\"" Jan 26 11:28:46 crc kubenswrapper[4867]: I0126 11:28:46.493961 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4a3be637-cf04-4c55-bf72-67fdad83cc44-systemd-units" (OuterVolumeSpecName: "systemd-units") pod "4a3be637-cf04-4c55-bf72-67fdad83cc44" (UID: "4a3be637-cf04-4c55-bf72-67fdad83cc44"). InnerVolumeSpecName "systemd-units". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 11:28:46 crc kubenswrapper[4867]: I0126 11:28:46.494104 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4a3be637-cf04-4c55-bf72-67fdad83cc44-host-var-lib-cni-networks-ovn-kubernetes" (OuterVolumeSpecName: "host-var-lib-cni-networks-ovn-kubernetes") pod "4a3be637-cf04-4c55-bf72-67fdad83cc44" (UID: "4a3be637-cf04-4c55-bf72-67fdad83cc44"). InnerVolumeSpecName "host-var-lib-cni-networks-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 11:28:46 crc kubenswrapper[4867]: I0126 11:28:46.494012 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4a3be637-cf04-4c55-bf72-67fdad83cc44-etc-openvswitch" (OuterVolumeSpecName: "etc-openvswitch") pod "4a3be637-cf04-4c55-bf72-67fdad83cc44" (UID: "4a3be637-cf04-4c55-bf72-67fdad83cc44"). InnerVolumeSpecName "etc-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 11:28:46 crc kubenswrapper[4867]: I0126 11:28:46.494061 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4a3be637-cf04-4c55-bf72-67fdad83cc44-host-cni-netd" (OuterVolumeSpecName: "host-cni-netd") pod "4a3be637-cf04-4c55-bf72-67fdad83cc44" (UID: "4a3be637-cf04-4c55-bf72-67fdad83cc44"). InnerVolumeSpecName "host-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 11:28:46 crc kubenswrapper[4867]: I0126 11:28:46.494139 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4a3be637-cf04-4c55-bf72-67fdad83cc44-host-cni-bin" (OuterVolumeSpecName: "host-cni-bin") pod "4a3be637-cf04-4c55-bf72-67fdad83cc44" (UID: "4a3be637-cf04-4c55-bf72-67fdad83cc44"). InnerVolumeSpecName "host-cni-bin". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 11:28:46 crc kubenswrapper[4867]: I0126 11:28:46.494081 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4a3be637-cf04-4c55-bf72-67fdad83cc44-run-ovn" (OuterVolumeSpecName: "run-ovn") pod "4a3be637-cf04-4c55-bf72-67fdad83cc44" (UID: "4a3be637-cf04-4c55-bf72-67fdad83cc44"). InnerVolumeSpecName "run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 11:28:46 crc kubenswrapper[4867]: I0126 11:28:46.494190 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4a3be637-cf04-4c55-bf72-67fdad83cc44-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "4a3be637-cf04-4c55-bf72-67fdad83cc44" (UID: "4a3be637-cf04-4c55-bf72-67fdad83cc44"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 11:28:46 crc kubenswrapper[4867]: I0126 11:28:46.494302 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4a3be637-cf04-4c55-bf72-67fdad83cc44-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "4a3be637-cf04-4c55-bf72-67fdad83cc44" (UID: "4a3be637-cf04-4c55-bf72-67fdad83cc44"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 11:28:46 crc kubenswrapper[4867]: I0126 11:28:46.500118 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4a3be637-cf04-4c55-bf72-67fdad83cc44-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "4a3be637-cf04-4c55-bf72-67fdad83cc44" (UID: "4a3be637-cf04-4c55-bf72-67fdad83cc44"). InnerVolumeSpecName "ovn-node-metrics-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 11:28:46 crc kubenswrapper[4867]: I0126 11:28:46.500249 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4a3be637-cf04-4c55-bf72-67fdad83cc44-kube-api-access-cn6f7" (OuterVolumeSpecName: "kube-api-access-cn6f7") pod "4a3be637-cf04-4c55-bf72-67fdad83cc44" (UID: "4a3be637-cf04-4c55-bf72-67fdad83cc44"). InnerVolumeSpecName "kube-api-access-cn6f7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 11:28:46 crc kubenswrapper[4867]: I0126 11:28:46.508696 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4a3be637-cf04-4c55-bf72-67fdad83cc44-run-systemd" (OuterVolumeSpecName: "run-systemd") pod "4a3be637-cf04-4c55-bf72-67fdad83cc44" (UID: "4a3be637-cf04-4c55-bf72-67fdad83cc44"). InnerVolumeSpecName "run-systemd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 11:28:46 crc kubenswrapper[4867]: I0126 11:28:46.595410 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/5afcc9eb-1b45-472f-8df2-6b8ed84db736-ovnkube-script-lib\") pod \"ovnkube-node-55w5m\" (UID: \"5afcc9eb-1b45-472f-8df2-6b8ed84db736\") " pod="openshift-ovn-kubernetes/ovnkube-node-55w5m" Jan 26 11:28:46 crc kubenswrapper[4867]: I0126 11:28:46.595470 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/5afcc9eb-1b45-472f-8df2-6b8ed84db736-host-cni-bin\") pod \"ovnkube-node-55w5m\" (UID: \"5afcc9eb-1b45-472f-8df2-6b8ed84db736\") " pod="openshift-ovn-kubernetes/ovnkube-node-55w5m" Jan 26 11:28:46 crc kubenswrapper[4867]: I0126 11:28:46.595498 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/5afcc9eb-1b45-472f-8df2-6b8ed84db736-host-kubelet\") pod 
\"ovnkube-node-55w5m\" (UID: \"5afcc9eb-1b45-472f-8df2-6b8ed84db736\") " pod="openshift-ovn-kubernetes/ovnkube-node-55w5m" Jan 26 11:28:46 crc kubenswrapper[4867]: I0126 11:28:46.595518 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/5afcc9eb-1b45-472f-8df2-6b8ed84db736-env-overrides\") pod \"ovnkube-node-55w5m\" (UID: \"5afcc9eb-1b45-472f-8df2-6b8ed84db736\") " pod="openshift-ovn-kubernetes/ovnkube-node-55w5m" Jan 26 11:28:46 crc kubenswrapper[4867]: I0126 11:28:46.595537 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/5afcc9eb-1b45-472f-8df2-6b8ed84db736-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-55w5m\" (UID: \"5afcc9eb-1b45-472f-8df2-6b8ed84db736\") " pod="openshift-ovn-kubernetes/ovnkube-node-55w5m" Jan 26 11:28:46 crc kubenswrapper[4867]: I0126 11:28:46.595565 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/5afcc9eb-1b45-472f-8df2-6b8ed84db736-run-ovn\") pod \"ovnkube-node-55w5m\" (UID: \"5afcc9eb-1b45-472f-8df2-6b8ed84db736\") " pod="openshift-ovn-kubernetes/ovnkube-node-55w5m" Jan 26 11:28:46 crc kubenswrapper[4867]: I0126 11:28:46.595591 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/5afcc9eb-1b45-472f-8df2-6b8ed84db736-host-slash\") pod \"ovnkube-node-55w5m\" (UID: \"5afcc9eb-1b45-472f-8df2-6b8ed84db736\") " pod="openshift-ovn-kubernetes/ovnkube-node-55w5m" Jan 26 11:28:46 crc kubenswrapper[4867]: I0126 11:28:46.595615 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/5afcc9eb-1b45-472f-8df2-6b8ed84db736-node-log\") pod \"ovnkube-node-55w5m\" (UID: 
\"5afcc9eb-1b45-472f-8df2-6b8ed84db736\") " pod="openshift-ovn-kubernetes/ovnkube-node-55w5m" Jan 26 11:28:46 crc kubenswrapper[4867]: I0126 11:28:46.595636 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/5afcc9eb-1b45-472f-8df2-6b8ed84db736-host-run-ovn-kubernetes\") pod \"ovnkube-node-55w5m\" (UID: \"5afcc9eb-1b45-472f-8df2-6b8ed84db736\") " pod="openshift-ovn-kubernetes/ovnkube-node-55w5m" Jan 26 11:28:46 crc kubenswrapper[4867]: I0126 11:28:46.595654 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/5afcc9eb-1b45-472f-8df2-6b8ed84db736-host-cni-netd\") pod \"ovnkube-node-55w5m\" (UID: \"5afcc9eb-1b45-472f-8df2-6b8ed84db736\") " pod="openshift-ovn-kubernetes/ovnkube-node-55w5m" Jan 26 11:28:46 crc kubenswrapper[4867]: I0126 11:28:46.595677 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/5afcc9eb-1b45-472f-8df2-6b8ed84db736-ovn-node-metrics-cert\") pod \"ovnkube-node-55w5m\" (UID: \"5afcc9eb-1b45-472f-8df2-6b8ed84db736\") " pod="openshift-ovn-kubernetes/ovnkube-node-55w5m" Jan 26 11:28:46 crc kubenswrapper[4867]: I0126 11:28:46.595700 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/5afcc9eb-1b45-472f-8df2-6b8ed84db736-var-lib-openvswitch\") pod \"ovnkube-node-55w5m\" (UID: \"5afcc9eb-1b45-472f-8df2-6b8ed84db736\") " pod="openshift-ovn-kubernetes/ovnkube-node-55w5m" Jan 26 11:28:46 crc kubenswrapper[4867]: I0126 11:28:46.595720 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/5afcc9eb-1b45-472f-8df2-6b8ed84db736-log-socket\") pod \"ovnkube-node-55w5m\" (UID: \"5afcc9eb-1b45-472f-8df2-6b8ed84db736\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-55w5m" Jan 26 11:28:46 crc kubenswrapper[4867]: I0126 11:28:46.595742 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/5afcc9eb-1b45-472f-8df2-6b8ed84db736-etc-openvswitch\") pod \"ovnkube-node-55w5m\" (UID: \"5afcc9eb-1b45-472f-8df2-6b8ed84db736\") " pod="openshift-ovn-kubernetes/ovnkube-node-55w5m" Jan 26 11:28:46 crc kubenswrapper[4867]: I0126 11:28:46.595766 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/5afcc9eb-1b45-472f-8df2-6b8ed84db736-systemd-units\") pod \"ovnkube-node-55w5m\" (UID: \"5afcc9eb-1b45-472f-8df2-6b8ed84db736\") " pod="openshift-ovn-kubernetes/ovnkube-node-55w5m" Jan 26 11:28:46 crc kubenswrapper[4867]: I0126 11:28:46.595782 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/5afcc9eb-1b45-472f-8df2-6b8ed84db736-run-systemd\") pod \"ovnkube-node-55w5m\" (UID: \"5afcc9eb-1b45-472f-8df2-6b8ed84db736\") " pod="openshift-ovn-kubernetes/ovnkube-node-55w5m" Jan 26 11:28:46 crc kubenswrapper[4867]: I0126 11:28:46.595801 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/5afcc9eb-1b45-472f-8df2-6b8ed84db736-host-run-netns\") pod \"ovnkube-node-55w5m\" (UID: \"5afcc9eb-1b45-472f-8df2-6b8ed84db736\") " pod="openshift-ovn-kubernetes/ovnkube-node-55w5m" Jan 26 11:28:46 crc kubenswrapper[4867]: I0126 11:28:46.595820 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/5afcc9eb-1b45-472f-8df2-6b8ed84db736-ovnkube-config\") pod \"ovnkube-node-55w5m\" (UID: \"5afcc9eb-1b45-472f-8df2-6b8ed84db736\") " pod="openshift-ovn-kubernetes/ovnkube-node-55w5m" Jan 26 11:28:46 crc 
kubenswrapper[4867]: I0126 11:28:46.595889 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/5afcc9eb-1b45-472f-8df2-6b8ed84db736-run-openvswitch\") pod \"ovnkube-node-55w5m\" (UID: \"5afcc9eb-1b45-472f-8df2-6b8ed84db736\") " pod="openshift-ovn-kubernetes/ovnkube-node-55w5m" Jan 26 11:28:46 crc kubenswrapper[4867]: I0126 11:28:46.595907 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6zvsv\" (UniqueName: \"kubernetes.io/projected/5afcc9eb-1b45-472f-8df2-6b8ed84db736-kube-api-access-6zvsv\") pod \"ovnkube-node-55w5m\" (UID: \"5afcc9eb-1b45-472f-8df2-6b8ed84db736\") " pod="openshift-ovn-kubernetes/ovnkube-node-55w5m" Jan 26 11:28:46 crc kubenswrapper[4867]: I0126 11:28:46.595959 4867 reconciler_common.go:293] "Volume detached for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/4a3be637-cf04-4c55-bf72-67fdad83cc44-run-ovn\") on node \"crc\" DevicePath \"\"" Jan 26 11:28:46 crc kubenswrapper[4867]: I0126 11:28:46.595975 4867 reconciler_common.go:293] "Volume detached for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/4a3be637-cf04-4c55-bf72-67fdad83cc44-host-var-lib-cni-networks-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Jan 26 11:28:46 crc kubenswrapper[4867]: I0126 11:28:46.595990 4867 reconciler_common.go:293] "Volume detached for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/4a3be637-cf04-4c55-bf72-67fdad83cc44-host-cni-bin\") on node \"crc\" DevicePath \"\"" Jan 26 11:28:46 crc kubenswrapper[4867]: I0126 11:28:46.596003 4867 reconciler_common.go:293] "Volume detached for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/4a3be637-cf04-4c55-bf72-67fdad83cc44-run-systemd\") on node \"crc\" DevicePath \"\"" Jan 26 11:28:46 crc kubenswrapper[4867]: I0126 11:28:46.596014 4867 reconciler_common.go:293] "Volume detached for volume 
\"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/4a3be637-cf04-4c55-bf72-67fdad83cc44-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Jan 26 11:28:46 crc kubenswrapper[4867]: I0126 11:28:46.596023 4867 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/4a3be637-cf04-4c55-bf72-67fdad83cc44-ovnkube-config\") on node \"crc\" DevicePath \"\"" Jan 26 11:28:46 crc kubenswrapper[4867]: I0126 11:28:46.596032 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cn6f7\" (UniqueName: \"kubernetes.io/projected/4a3be637-cf04-4c55-bf72-67fdad83cc44-kube-api-access-cn6f7\") on node \"crc\" DevicePath \"\"" Jan 26 11:28:46 crc kubenswrapper[4867]: I0126 11:28:46.596043 4867 reconciler_common.go:293] "Volume detached for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/4a3be637-cf04-4c55-bf72-67fdad83cc44-systemd-units\") on node \"crc\" DevicePath \"\"" Jan 26 11:28:46 crc kubenswrapper[4867]: I0126 11:28:46.596052 4867 reconciler_common.go:293] "Volume detached for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/4a3be637-cf04-4c55-bf72-67fdad83cc44-etc-openvswitch\") on node \"crc\" DevicePath \"\"" Jan 26 11:28:46 crc kubenswrapper[4867]: I0126 11:28:46.596060 4867 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/4a3be637-cf04-4c55-bf72-67fdad83cc44-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Jan 26 11:28:46 crc kubenswrapper[4867]: I0126 11:28:46.596071 4867 reconciler_common.go:293] "Volume detached for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/4a3be637-cf04-4c55-bf72-67fdad83cc44-host-cni-netd\") on node \"crc\" DevicePath \"\"" Jan 26 11:28:46 crc kubenswrapper[4867]: I0126 11:28:46.597081 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: 
\"kubernetes.io/configmap/5afcc9eb-1b45-472f-8df2-6b8ed84db736-ovnkube-script-lib\") pod \"ovnkube-node-55w5m\" (UID: \"5afcc9eb-1b45-472f-8df2-6b8ed84db736\") " pod="openshift-ovn-kubernetes/ovnkube-node-55w5m" Jan 26 11:28:46 crc kubenswrapper[4867]: I0126 11:28:46.597135 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/5afcc9eb-1b45-472f-8df2-6b8ed84db736-host-cni-bin\") pod \"ovnkube-node-55w5m\" (UID: \"5afcc9eb-1b45-472f-8df2-6b8ed84db736\") " pod="openshift-ovn-kubernetes/ovnkube-node-55w5m" Jan 26 11:28:46 crc kubenswrapper[4867]: I0126 11:28:46.597162 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/5afcc9eb-1b45-472f-8df2-6b8ed84db736-host-kubelet\") pod \"ovnkube-node-55w5m\" (UID: \"5afcc9eb-1b45-472f-8df2-6b8ed84db736\") " pod="openshift-ovn-kubernetes/ovnkube-node-55w5m" Jan 26 11:28:46 crc kubenswrapper[4867]: I0126 11:28:46.597410 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/5afcc9eb-1b45-472f-8df2-6b8ed84db736-node-log\") pod \"ovnkube-node-55w5m\" (UID: \"5afcc9eb-1b45-472f-8df2-6b8ed84db736\") " pod="openshift-ovn-kubernetes/ovnkube-node-55w5m" Jan 26 11:28:46 crc kubenswrapper[4867]: I0126 11:28:46.597457 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/5afcc9eb-1b45-472f-8df2-6b8ed84db736-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-55w5m\" (UID: \"5afcc9eb-1b45-472f-8df2-6b8ed84db736\") " pod="openshift-ovn-kubernetes/ovnkube-node-55w5m" Jan 26 11:28:46 crc kubenswrapper[4867]: I0126 11:28:46.597485 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/5afcc9eb-1b45-472f-8df2-6b8ed84db736-run-ovn\") pod 
\"ovnkube-node-55w5m\" (UID: \"5afcc9eb-1b45-472f-8df2-6b8ed84db736\") " pod="openshift-ovn-kubernetes/ovnkube-node-55w5m" Jan 26 11:28:46 crc kubenswrapper[4867]: I0126 11:28:46.597510 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/5afcc9eb-1b45-472f-8df2-6b8ed84db736-host-slash\") pod \"ovnkube-node-55w5m\" (UID: \"5afcc9eb-1b45-472f-8df2-6b8ed84db736\") " pod="openshift-ovn-kubernetes/ovnkube-node-55w5m" Jan 26 11:28:46 crc kubenswrapper[4867]: I0126 11:28:46.597533 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/5afcc9eb-1b45-472f-8df2-6b8ed84db736-host-run-ovn-kubernetes\") pod \"ovnkube-node-55w5m\" (UID: \"5afcc9eb-1b45-472f-8df2-6b8ed84db736\") " pod="openshift-ovn-kubernetes/ovnkube-node-55w5m" Jan 26 11:28:46 crc kubenswrapper[4867]: I0126 11:28:46.597568 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/5afcc9eb-1b45-472f-8df2-6b8ed84db736-log-socket\") pod \"ovnkube-node-55w5m\" (UID: \"5afcc9eb-1b45-472f-8df2-6b8ed84db736\") " pod="openshift-ovn-kubernetes/ovnkube-node-55w5m" Jan 26 11:28:46 crc kubenswrapper[4867]: I0126 11:28:46.597591 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/5afcc9eb-1b45-472f-8df2-6b8ed84db736-etc-openvswitch\") pod \"ovnkube-node-55w5m\" (UID: \"5afcc9eb-1b45-472f-8df2-6b8ed84db736\") " pod="openshift-ovn-kubernetes/ovnkube-node-55w5m" Jan 26 11:28:46 crc kubenswrapper[4867]: I0126 11:28:46.597620 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/5afcc9eb-1b45-472f-8df2-6b8ed84db736-host-run-netns\") pod \"ovnkube-node-55w5m\" (UID: \"5afcc9eb-1b45-472f-8df2-6b8ed84db736\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-55w5m" Jan 26 11:28:46 crc kubenswrapper[4867]: I0126 11:28:46.597652 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/5afcc9eb-1b45-472f-8df2-6b8ed84db736-run-openvswitch\") pod \"ovnkube-node-55w5m\" (UID: \"5afcc9eb-1b45-472f-8df2-6b8ed84db736\") " pod="openshift-ovn-kubernetes/ovnkube-node-55w5m" Jan 26 11:28:46 crc kubenswrapper[4867]: I0126 11:28:46.597724 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/5afcc9eb-1b45-472f-8df2-6b8ed84db736-systemd-units\") pod \"ovnkube-node-55w5m\" (UID: \"5afcc9eb-1b45-472f-8df2-6b8ed84db736\") " pod="openshift-ovn-kubernetes/ovnkube-node-55w5m" Jan 26 11:28:46 crc kubenswrapper[4867]: I0126 11:28:46.597763 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/5afcc9eb-1b45-472f-8df2-6b8ed84db736-run-systemd\") pod \"ovnkube-node-55w5m\" (UID: \"5afcc9eb-1b45-472f-8df2-6b8ed84db736\") " pod="openshift-ovn-kubernetes/ovnkube-node-55w5m" Jan 26 11:28:46 crc kubenswrapper[4867]: I0126 11:28:46.597776 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/5afcc9eb-1b45-472f-8df2-6b8ed84db736-var-lib-openvswitch\") pod \"ovnkube-node-55w5m\" (UID: \"5afcc9eb-1b45-472f-8df2-6b8ed84db736\") " pod="openshift-ovn-kubernetes/ovnkube-node-55w5m" Jan 26 11:28:46 crc kubenswrapper[4867]: I0126 11:28:46.597633 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/5afcc9eb-1b45-472f-8df2-6b8ed84db736-host-cni-netd\") pod \"ovnkube-node-55w5m\" (UID: \"5afcc9eb-1b45-472f-8df2-6b8ed84db736\") " pod="openshift-ovn-kubernetes/ovnkube-node-55w5m" Jan 26 11:28:46 crc kubenswrapper[4867]: I0126 11:28:46.598121 4867 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/5afcc9eb-1b45-472f-8df2-6b8ed84db736-env-overrides\") pod \"ovnkube-node-55w5m\" (UID: \"5afcc9eb-1b45-472f-8df2-6b8ed84db736\") " pod="openshift-ovn-kubernetes/ovnkube-node-55w5m" Jan 26 11:28:46 crc kubenswrapper[4867]: I0126 11:28:46.599466 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/5afcc9eb-1b45-472f-8df2-6b8ed84db736-ovnkube-config\") pod \"ovnkube-node-55w5m\" (UID: \"5afcc9eb-1b45-472f-8df2-6b8ed84db736\") " pod="openshift-ovn-kubernetes/ovnkube-node-55w5m" Jan 26 11:28:46 crc kubenswrapper[4867]: I0126 11:28:46.602241 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/5afcc9eb-1b45-472f-8df2-6b8ed84db736-ovn-node-metrics-cert\") pod \"ovnkube-node-55w5m\" (UID: \"5afcc9eb-1b45-472f-8df2-6b8ed84db736\") " pod="openshift-ovn-kubernetes/ovnkube-node-55w5m" Jan 26 11:28:46 crc kubenswrapper[4867]: I0126 11:28:46.614092 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6zvsv\" (UniqueName: \"kubernetes.io/projected/5afcc9eb-1b45-472f-8df2-6b8ed84db736-kube-api-access-6zvsv\") pod \"ovnkube-node-55w5m\" (UID: \"5afcc9eb-1b45-472f-8df2-6b8ed84db736\") " pod="openshift-ovn-kubernetes/ovnkube-node-55w5m" Jan 26 11:28:46 crc kubenswrapper[4867]: I0126 11:28:46.681255 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-p8ngn_4a3be637-cf04-4c55-bf72-67fdad83cc44/ovnkube-controller/3.log" Jan 26 11:28:46 crc kubenswrapper[4867]: I0126 11:28:46.685462 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-p8ngn_4a3be637-cf04-4c55-bf72-67fdad83cc44/ovn-acl-logging/0.log" Jan 26 11:28:46 crc kubenswrapper[4867]: I0126 11:28:46.686072 4867 
log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-p8ngn_4a3be637-cf04-4c55-bf72-67fdad83cc44/ovn-controller/0.log" Jan 26 11:28:46 crc kubenswrapper[4867]: I0126 11:28:46.686611 4867 generic.go:334] "Generic (PLEG): container finished" podID="4a3be637-cf04-4c55-bf72-67fdad83cc44" containerID="1c6e65025a884869b644ff9fad0c22c39ffb69a73f335c11f33433b010a8e2dc" exitCode=0 Jan 26 11:28:46 crc kubenswrapper[4867]: I0126 11:28:46.686655 4867 generic.go:334] "Generic (PLEG): container finished" podID="4a3be637-cf04-4c55-bf72-67fdad83cc44" containerID="b58f62893fb0fc0bd355ef42fb5f15830912cfd3a03b5266cb3c3171dbb8bcb0" exitCode=0 Jan 26 11:28:46 crc kubenswrapper[4867]: I0126 11:28:46.686695 4867 generic.go:334] "Generic (PLEG): container finished" podID="4a3be637-cf04-4c55-bf72-67fdad83cc44" containerID="adb68b135bd42f78556598af17aa21dc7daff1246e821ca51cb683e46974a85a" exitCode=0 Jan 26 11:28:46 crc kubenswrapper[4867]: I0126 11:28:46.686710 4867 generic.go:334] "Generic (PLEG): container finished" podID="4a3be637-cf04-4c55-bf72-67fdad83cc44" containerID="30fe31f1d314dace6e6ee0bc31b2127ce6e6db198975c78505b0f6f07aa6f30c" exitCode=0 Jan 26 11:28:46 crc kubenswrapper[4867]: I0126 11:28:46.686722 4867 generic.go:334] "Generic (PLEG): container finished" podID="4a3be637-cf04-4c55-bf72-67fdad83cc44" containerID="d9025121ad416b7b6ff2f2eb7ecb3961eccbbae7b5ce4b394161ad99c5901199" exitCode=0 Jan 26 11:28:46 crc kubenswrapper[4867]: I0126 11:28:46.686731 4867 generic.go:334] "Generic (PLEG): container finished" podID="4a3be637-cf04-4c55-bf72-67fdad83cc44" containerID="c0a7bea27cb1d32235548b7f7e457dbecaf0ac88bdd9866302c758680990d4af" exitCode=0 Jan 26 11:28:46 crc kubenswrapper[4867]: I0126 11:28:46.686742 4867 generic.go:334] "Generic (PLEG): container finished" podID="4a3be637-cf04-4c55-bf72-67fdad83cc44" containerID="99df8b0b5fa7178489716f40f806caff969c5045ab573ae343b222ed80bf0799" exitCode=143 Jan 26 11:28:46 crc kubenswrapper[4867]: 
I0126 11:28:46.686753 4867 generic.go:334] "Generic (PLEG): container finished" podID="4a3be637-cf04-4c55-bf72-67fdad83cc44" containerID="192f11212d63cdabfda1649589946ca65e5c4a38805c85b94e4277f2e8c7bf57" exitCode=143 Jan 26 11:28:46 crc kubenswrapper[4867]: I0126 11:28:46.686742 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-p8ngn" event={"ID":"4a3be637-cf04-4c55-bf72-67fdad83cc44","Type":"ContainerDied","Data":"1c6e65025a884869b644ff9fad0c22c39ffb69a73f335c11f33433b010a8e2dc"} Jan 26 11:28:46 crc kubenswrapper[4867]: I0126 11:28:46.686838 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-p8ngn" event={"ID":"4a3be637-cf04-4c55-bf72-67fdad83cc44","Type":"ContainerDied","Data":"b58f62893fb0fc0bd355ef42fb5f15830912cfd3a03b5266cb3c3171dbb8bcb0"} Jan 26 11:28:46 crc kubenswrapper[4867]: I0126 11:28:46.686872 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-p8ngn" event={"ID":"4a3be637-cf04-4c55-bf72-67fdad83cc44","Type":"ContainerDied","Data":"adb68b135bd42f78556598af17aa21dc7daff1246e821ca51cb683e46974a85a"} Jan 26 11:28:46 crc kubenswrapper[4867]: I0126 11:28:46.686904 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-p8ngn" event={"ID":"4a3be637-cf04-4c55-bf72-67fdad83cc44","Type":"ContainerDied","Data":"30fe31f1d314dace6e6ee0bc31b2127ce6e6db198975c78505b0f6f07aa6f30c"} Jan 26 11:28:46 crc kubenswrapper[4867]: I0126 11:28:46.686911 4867 scope.go:117] "RemoveContainer" containerID="1c6e65025a884869b644ff9fad0c22c39ffb69a73f335c11f33433b010a8e2dc" Jan 26 11:28:46 crc kubenswrapper[4867]: I0126 11:28:46.686934 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-p8ngn" event={"ID":"4a3be637-cf04-4c55-bf72-67fdad83cc44","Type":"ContainerDied","Data":"d9025121ad416b7b6ff2f2eb7ecb3961eccbbae7b5ce4b394161ad99c5901199"} Jan 26 11:28:46 
crc kubenswrapper[4867]: I0126 11:28:46.687207 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-p8ngn" event={"ID":"4a3be637-cf04-4c55-bf72-67fdad83cc44","Type":"ContainerDied","Data":"c0a7bea27cb1d32235548b7f7e457dbecaf0ac88bdd9866302c758680990d4af"} Jan 26 11:28:46 crc kubenswrapper[4867]: I0126 11:28:46.687259 4867 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"5d232dca466c33576da8850254342ae7f79c841e0f08fb49fedf655f3efe3d3d"} Jan 26 11:28:46 crc kubenswrapper[4867]: I0126 11:28:46.687274 4867 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"b58f62893fb0fc0bd355ef42fb5f15830912cfd3a03b5266cb3c3171dbb8bcb0"} Jan 26 11:28:46 crc kubenswrapper[4867]: I0126 11:28:46.687281 4867 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"adb68b135bd42f78556598af17aa21dc7daff1246e821ca51cb683e46974a85a"} Jan 26 11:28:46 crc kubenswrapper[4867]: I0126 11:28:46.687287 4867 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"30fe31f1d314dace6e6ee0bc31b2127ce6e6db198975c78505b0f6f07aa6f30c"} Jan 26 11:28:46 crc kubenswrapper[4867]: I0126 11:28:46.687293 4867 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"d9025121ad416b7b6ff2f2eb7ecb3961eccbbae7b5ce4b394161ad99c5901199"} Jan 26 11:28:46 crc kubenswrapper[4867]: I0126 11:28:46.687299 4867 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"c0a7bea27cb1d32235548b7f7e457dbecaf0ac88bdd9866302c758680990d4af"} Jan 26 11:28:46 crc kubenswrapper[4867]: I0126 11:28:46.687307 4867 pod_container_deletor.go:114] "Failed to issue the request to remove container" 
containerID={"Type":"cri-o","ID":"99df8b0b5fa7178489716f40f806caff969c5045ab573ae343b222ed80bf0799"} Jan 26 11:28:46 crc kubenswrapper[4867]: I0126 11:28:46.687314 4867 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"192f11212d63cdabfda1649589946ca65e5c4a38805c85b94e4277f2e8c7bf57"} Jan 26 11:28:46 crc kubenswrapper[4867]: I0126 11:28:46.687321 4867 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"388ea9d9185ed0fed9cbb0da08ab9aa6fcf1d81192fe2646cd51b54245c4e34b"} Jan 26 11:28:46 crc kubenswrapper[4867]: I0126 11:28:46.687331 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-p8ngn" event={"ID":"4a3be637-cf04-4c55-bf72-67fdad83cc44","Type":"ContainerDied","Data":"99df8b0b5fa7178489716f40f806caff969c5045ab573ae343b222ed80bf0799"} Jan 26 11:28:46 crc kubenswrapper[4867]: I0126 11:28:46.687345 4867 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"1c6e65025a884869b644ff9fad0c22c39ffb69a73f335c11f33433b010a8e2dc"} Jan 26 11:28:46 crc kubenswrapper[4867]: I0126 11:28:46.687354 4867 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"5d232dca466c33576da8850254342ae7f79c841e0f08fb49fedf655f3efe3d3d"} Jan 26 11:28:46 crc kubenswrapper[4867]: I0126 11:28:46.687361 4867 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"b58f62893fb0fc0bd355ef42fb5f15830912cfd3a03b5266cb3c3171dbb8bcb0"} Jan 26 11:28:46 crc kubenswrapper[4867]: I0126 11:28:46.687370 4867 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"adb68b135bd42f78556598af17aa21dc7daff1246e821ca51cb683e46974a85a"} Jan 26 11:28:46 crc kubenswrapper[4867]: I0126 11:28:46.687377 4867 
pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"30fe31f1d314dace6e6ee0bc31b2127ce6e6db198975c78505b0f6f07aa6f30c"} Jan 26 11:28:46 crc kubenswrapper[4867]: I0126 11:28:46.687383 4867 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"d9025121ad416b7b6ff2f2eb7ecb3961eccbbae7b5ce4b394161ad99c5901199"} Jan 26 11:28:46 crc kubenswrapper[4867]: I0126 11:28:46.687389 4867 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"c0a7bea27cb1d32235548b7f7e457dbecaf0ac88bdd9866302c758680990d4af"} Jan 26 11:28:46 crc kubenswrapper[4867]: I0126 11:28:46.687395 4867 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"99df8b0b5fa7178489716f40f806caff969c5045ab573ae343b222ed80bf0799"} Jan 26 11:28:46 crc kubenswrapper[4867]: I0126 11:28:46.687400 4867 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"192f11212d63cdabfda1649589946ca65e5c4a38805c85b94e4277f2e8c7bf57"} Jan 26 11:28:46 crc kubenswrapper[4867]: I0126 11:28:46.687407 4867 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"388ea9d9185ed0fed9cbb0da08ab9aa6fcf1d81192fe2646cd51b54245c4e34b"} Jan 26 11:28:46 crc kubenswrapper[4867]: I0126 11:28:46.687419 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-p8ngn" event={"ID":"4a3be637-cf04-4c55-bf72-67fdad83cc44","Type":"ContainerDied","Data":"192f11212d63cdabfda1649589946ca65e5c4a38805c85b94e4277f2e8c7bf57"} Jan 26 11:28:46 crc kubenswrapper[4867]: I0126 11:28:46.687434 4867 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"1c6e65025a884869b644ff9fad0c22c39ffb69a73f335c11f33433b010a8e2dc"} Jan 26 
11:28:46 crc kubenswrapper[4867]: I0126 11:28:46.687444 4867 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"5d232dca466c33576da8850254342ae7f79c841e0f08fb49fedf655f3efe3d3d"} Jan 26 11:28:46 crc kubenswrapper[4867]: I0126 11:28:46.687451 4867 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"b58f62893fb0fc0bd355ef42fb5f15830912cfd3a03b5266cb3c3171dbb8bcb0"} Jan 26 11:28:46 crc kubenswrapper[4867]: I0126 11:28:46.687459 4867 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"adb68b135bd42f78556598af17aa21dc7daff1246e821ca51cb683e46974a85a"} Jan 26 11:28:46 crc kubenswrapper[4867]: I0126 11:28:46.687466 4867 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"30fe31f1d314dace6e6ee0bc31b2127ce6e6db198975c78505b0f6f07aa6f30c"} Jan 26 11:28:46 crc kubenswrapper[4867]: I0126 11:28:46.687472 4867 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"d9025121ad416b7b6ff2f2eb7ecb3961eccbbae7b5ce4b394161ad99c5901199"} Jan 26 11:28:46 crc kubenswrapper[4867]: I0126 11:28:46.687481 4867 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"c0a7bea27cb1d32235548b7f7e457dbecaf0ac88bdd9866302c758680990d4af"} Jan 26 11:28:46 crc kubenswrapper[4867]: I0126 11:28:46.687488 4867 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"99df8b0b5fa7178489716f40f806caff969c5045ab573ae343b222ed80bf0799"} Jan 26 11:28:46 crc kubenswrapper[4867]: I0126 11:28:46.687496 4867 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"192f11212d63cdabfda1649589946ca65e5c4a38805c85b94e4277f2e8c7bf57"} Jan 26 
11:28:46 crc kubenswrapper[4867]: I0126 11:28:46.687504 4867 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"388ea9d9185ed0fed9cbb0da08ab9aa6fcf1d81192fe2646cd51b54245c4e34b"} Jan 26 11:28:46 crc kubenswrapper[4867]: I0126 11:28:46.687515 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-p8ngn" event={"ID":"4a3be637-cf04-4c55-bf72-67fdad83cc44","Type":"ContainerDied","Data":"affdb9c7b34311edbd1f42a36193128c48cd80d81f59cdb5272ed04455ca22e4"} Jan 26 11:28:46 crc kubenswrapper[4867]: I0126 11:28:46.687527 4867 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"1c6e65025a884869b644ff9fad0c22c39ffb69a73f335c11f33433b010a8e2dc"} Jan 26 11:28:46 crc kubenswrapper[4867]: I0126 11:28:46.687536 4867 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"5d232dca466c33576da8850254342ae7f79c841e0f08fb49fedf655f3efe3d3d"} Jan 26 11:28:46 crc kubenswrapper[4867]: I0126 11:28:46.687542 4867 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"b58f62893fb0fc0bd355ef42fb5f15830912cfd3a03b5266cb3c3171dbb8bcb0"} Jan 26 11:28:46 crc kubenswrapper[4867]: I0126 11:28:46.687550 4867 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"adb68b135bd42f78556598af17aa21dc7daff1246e821ca51cb683e46974a85a"} Jan 26 11:28:46 crc kubenswrapper[4867]: I0126 11:28:46.687557 4867 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"30fe31f1d314dace6e6ee0bc31b2127ce6e6db198975c78505b0f6f07aa6f30c"} Jan 26 11:28:46 crc kubenswrapper[4867]: I0126 11:28:46.687564 4867 pod_container_deletor.go:114] "Failed to issue the request to remove container" 
containerID={"Type":"cri-o","ID":"d9025121ad416b7b6ff2f2eb7ecb3961eccbbae7b5ce4b394161ad99c5901199"} Jan 26 11:28:46 crc kubenswrapper[4867]: I0126 11:28:46.687571 4867 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"c0a7bea27cb1d32235548b7f7e457dbecaf0ac88bdd9866302c758680990d4af"} Jan 26 11:28:46 crc kubenswrapper[4867]: I0126 11:28:46.687580 4867 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"99df8b0b5fa7178489716f40f806caff969c5045ab573ae343b222ed80bf0799"} Jan 26 11:28:46 crc kubenswrapper[4867]: I0126 11:28:46.687589 4867 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"192f11212d63cdabfda1649589946ca65e5c4a38805c85b94e4277f2e8c7bf57"} Jan 26 11:28:46 crc kubenswrapper[4867]: I0126 11:28:46.687596 4867 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"388ea9d9185ed0fed9cbb0da08ab9aa6fcf1d81192fe2646cd51b54245c4e34b"} Jan 26 11:28:46 crc kubenswrapper[4867]: I0126 11:28:46.688492 4867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-p8ngn" Jan 26 11:28:46 crc kubenswrapper[4867]: I0126 11:28:46.689492 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-hn8xr_dc37e5d1-ba44-4a54-ac36-ab7cdef17212/kube-multus/2.log" Jan 26 11:28:46 crc kubenswrapper[4867]: I0126 11:28:46.692058 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-hn8xr_dc37e5d1-ba44-4a54-ac36-ab7cdef17212/kube-multus/1.log" Jan 26 11:28:46 crc kubenswrapper[4867]: I0126 11:28:46.692167 4867 generic.go:334] "Generic (PLEG): container finished" podID="dc37e5d1-ba44-4a54-ac36-ab7cdef17212" containerID="7e93ce40d6288a12790286c1b7deac52b0558ebca01037040f2f116daaed2f03" exitCode=2 Jan 26 11:28:46 crc kubenswrapper[4867]: I0126 11:28:46.692247 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-hn8xr" event={"ID":"dc37e5d1-ba44-4a54-ac36-ab7cdef17212","Type":"ContainerDied","Data":"7e93ce40d6288a12790286c1b7deac52b0558ebca01037040f2f116daaed2f03"} Jan 26 11:28:46 crc kubenswrapper[4867]: I0126 11:28:46.692291 4867 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"0560f844140132daa11aa58fcba689fc945d9e6bc67a4fc5598dfaf566749866"} Jan 26 11:28:46 crc kubenswrapper[4867]: I0126 11:28:46.693026 4867 scope.go:117] "RemoveContainer" containerID="7e93ce40d6288a12790286c1b7deac52b0558ebca01037040f2f116daaed2f03" Jan 26 11:28:46 crc kubenswrapper[4867]: E0126 11:28:46.693317 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-multus pod=multus-hn8xr_openshift-multus(dc37e5d1-ba44-4a54-ac36-ab7cdef17212)\"" pod="openshift-multus/multus-hn8xr" podUID="dc37e5d1-ba44-4a54-ac36-ab7cdef17212" Jan 26 11:28:46 crc kubenswrapper[4867]: I0126 11:28:46.719351 4867 kubelet.go:2437] "SyncLoop DELETE" 
source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-p8ngn"] Jan 26 11:28:46 crc kubenswrapper[4867]: I0126 11:28:46.725285 4867 scope.go:117] "RemoveContainer" containerID="5d232dca466c33576da8850254342ae7f79c841e0f08fb49fedf655f3efe3d3d" Jan 26 11:28:46 crc kubenswrapper[4867]: I0126 11:28:46.729941 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-p8ngn"] Jan 26 11:28:46 crc kubenswrapper[4867]: I0126 11:28:46.751024 4867 scope.go:117] "RemoveContainer" containerID="b58f62893fb0fc0bd355ef42fb5f15830912cfd3a03b5266cb3c3171dbb8bcb0" Jan 26 11:28:46 crc kubenswrapper[4867]: I0126 11:28:46.760866 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-55w5m" Jan 26 11:28:46 crc kubenswrapper[4867]: I0126 11:28:46.777970 4867 scope.go:117] "RemoveContainer" containerID="adb68b135bd42f78556598af17aa21dc7daff1246e821ca51cb683e46974a85a" Jan 26 11:28:46 crc kubenswrapper[4867]: I0126 11:28:46.797785 4867 scope.go:117] "RemoveContainer" containerID="30fe31f1d314dace6e6ee0bc31b2127ce6e6db198975c78505b0f6f07aa6f30c" Jan 26 11:28:46 crc kubenswrapper[4867]: I0126 11:28:46.812805 4867 scope.go:117] "RemoveContainer" containerID="d9025121ad416b7b6ff2f2eb7ecb3961eccbbae7b5ce4b394161ad99c5901199" Jan 26 11:28:46 crc kubenswrapper[4867]: I0126 11:28:46.838075 4867 scope.go:117] "RemoveContainer" containerID="c0a7bea27cb1d32235548b7f7e457dbecaf0ac88bdd9866302c758680990d4af" Jan 26 11:28:46 crc kubenswrapper[4867]: I0126 11:28:46.857802 4867 scope.go:117] "RemoveContainer" containerID="99df8b0b5fa7178489716f40f806caff969c5045ab573ae343b222ed80bf0799" Jan 26 11:28:46 crc kubenswrapper[4867]: I0126 11:28:46.876893 4867 scope.go:117] "RemoveContainer" containerID="192f11212d63cdabfda1649589946ca65e5c4a38805c85b94e4277f2e8c7bf57" Jan 26 11:28:46 crc kubenswrapper[4867]: I0126 11:28:46.900527 4867 scope.go:117] "RemoveContainer" 
containerID="388ea9d9185ed0fed9cbb0da08ab9aa6fcf1d81192fe2646cd51b54245c4e34b" Jan 26 11:28:46 crc kubenswrapper[4867]: I0126 11:28:46.917716 4867 scope.go:117] "RemoveContainer" containerID="1c6e65025a884869b644ff9fad0c22c39ffb69a73f335c11f33433b010a8e2dc" Jan 26 11:28:46 crc kubenswrapper[4867]: E0126 11:28:46.918544 4867 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1c6e65025a884869b644ff9fad0c22c39ffb69a73f335c11f33433b010a8e2dc\": container with ID starting with 1c6e65025a884869b644ff9fad0c22c39ffb69a73f335c11f33433b010a8e2dc not found: ID does not exist" containerID="1c6e65025a884869b644ff9fad0c22c39ffb69a73f335c11f33433b010a8e2dc" Jan 26 11:28:46 crc kubenswrapper[4867]: I0126 11:28:46.918582 4867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1c6e65025a884869b644ff9fad0c22c39ffb69a73f335c11f33433b010a8e2dc"} err="failed to get container status \"1c6e65025a884869b644ff9fad0c22c39ffb69a73f335c11f33433b010a8e2dc\": rpc error: code = NotFound desc = could not find container \"1c6e65025a884869b644ff9fad0c22c39ffb69a73f335c11f33433b010a8e2dc\": container with ID starting with 1c6e65025a884869b644ff9fad0c22c39ffb69a73f335c11f33433b010a8e2dc not found: ID does not exist" Jan 26 11:28:46 crc kubenswrapper[4867]: I0126 11:28:46.918611 4867 scope.go:117] "RemoveContainer" containerID="5d232dca466c33576da8850254342ae7f79c841e0f08fb49fedf655f3efe3d3d" Jan 26 11:28:46 crc kubenswrapper[4867]: E0126 11:28:46.919048 4867 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5d232dca466c33576da8850254342ae7f79c841e0f08fb49fedf655f3efe3d3d\": container with ID starting with 5d232dca466c33576da8850254342ae7f79c841e0f08fb49fedf655f3efe3d3d not found: ID does not exist" containerID="5d232dca466c33576da8850254342ae7f79c841e0f08fb49fedf655f3efe3d3d" Jan 26 11:28:46 crc 
kubenswrapper[4867]: I0126 11:28:46.919114 4867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5d232dca466c33576da8850254342ae7f79c841e0f08fb49fedf655f3efe3d3d"} err="failed to get container status \"5d232dca466c33576da8850254342ae7f79c841e0f08fb49fedf655f3efe3d3d\": rpc error: code = NotFound desc = could not find container \"5d232dca466c33576da8850254342ae7f79c841e0f08fb49fedf655f3efe3d3d\": container with ID starting with 5d232dca466c33576da8850254342ae7f79c841e0f08fb49fedf655f3efe3d3d not found: ID does not exist" Jan 26 11:28:46 crc kubenswrapper[4867]: I0126 11:28:46.919163 4867 scope.go:117] "RemoveContainer" containerID="b58f62893fb0fc0bd355ef42fb5f15830912cfd3a03b5266cb3c3171dbb8bcb0" Jan 26 11:28:46 crc kubenswrapper[4867]: E0126 11:28:46.919705 4867 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b58f62893fb0fc0bd355ef42fb5f15830912cfd3a03b5266cb3c3171dbb8bcb0\": container with ID starting with b58f62893fb0fc0bd355ef42fb5f15830912cfd3a03b5266cb3c3171dbb8bcb0 not found: ID does not exist" containerID="b58f62893fb0fc0bd355ef42fb5f15830912cfd3a03b5266cb3c3171dbb8bcb0" Jan 26 11:28:46 crc kubenswrapper[4867]: I0126 11:28:46.919732 4867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b58f62893fb0fc0bd355ef42fb5f15830912cfd3a03b5266cb3c3171dbb8bcb0"} err="failed to get container status \"b58f62893fb0fc0bd355ef42fb5f15830912cfd3a03b5266cb3c3171dbb8bcb0\": rpc error: code = NotFound desc = could not find container \"b58f62893fb0fc0bd355ef42fb5f15830912cfd3a03b5266cb3c3171dbb8bcb0\": container with ID starting with b58f62893fb0fc0bd355ef42fb5f15830912cfd3a03b5266cb3c3171dbb8bcb0 not found: ID does not exist" Jan 26 11:28:46 crc kubenswrapper[4867]: I0126 11:28:46.919751 4867 scope.go:117] "RemoveContainer" containerID="adb68b135bd42f78556598af17aa21dc7daff1246e821ca51cb683e46974a85a" Jan 26 
11:28:46 crc kubenswrapper[4867]: E0126 11:28:46.920092 4867 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"adb68b135bd42f78556598af17aa21dc7daff1246e821ca51cb683e46974a85a\": container with ID starting with adb68b135bd42f78556598af17aa21dc7daff1246e821ca51cb683e46974a85a not found: ID does not exist" containerID="adb68b135bd42f78556598af17aa21dc7daff1246e821ca51cb683e46974a85a" Jan 26 11:28:46 crc kubenswrapper[4867]: I0126 11:28:46.920149 4867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"adb68b135bd42f78556598af17aa21dc7daff1246e821ca51cb683e46974a85a"} err="failed to get container status \"adb68b135bd42f78556598af17aa21dc7daff1246e821ca51cb683e46974a85a\": rpc error: code = NotFound desc = could not find container \"adb68b135bd42f78556598af17aa21dc7daff1246e821ca51cb683e46974a85a\": container with ID starting with adb68b135bd42f78556598af17aa21dc7daff1246e821ca51cb683e46974a85a not found: ID does not exist" Jan 26 11:28:46 crc kubenswrapper[4867]: I0126 11:28:46.920193 4867 scope.go:117] "RemoveContainer" containerID="30fe31f1d314dace6e6ee0bc31b2127ce6e6db198975c78505b0f6f07aa6f30c" Jan 26 11:28:46 crc kubenswrapper[4867]: E0126 11:28:46.920587 4867 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"30fe31f1d314dace6e6ee0bc31b2127ce6e6db198975c78505b0f6f07aa6f30c\": container with ID starting with 30fe31f1d314dace6e6ee0bc31b2127ce6e6db198975c78505b0f6f07aa6f30c not found: ID does not exist" containerID="30fe31f1d314dace6e6ee0bc31b2127ce6e6db198975c78505b0f6f07aa6f30c" Jan 26 11:28:46 crc kubenswrapper[4867]: I0126 11:28:46.920617 4867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"30fe31f1d314dace6e6ee0bc31b2127ce6e6db198975c78505b0f6f07aa6f30c"} err="failed to get container status 
\"30fe31f1d314dace6e6ee0bc31b2127ce6e6db198975c78505b0f6f07aa6f30c\": rpc error: code = NotFound desc = could not find container \"30fe31f1d314dace6e6ee0bc31b2127ce6e6db198975c78505b0f6f07aa6f30c\": container with ID starting with 30fe31f1d314dace6e6ee0bc31b2127ce6e6db198975c78505b0f6f07aa6f30c not found: ID does not exist" Jan 26 11:28:46 crc kubenswrapper[4867]: I0126 11:28:46.920633 4867 scope.go:117] "RemoveContainer" containerID="d9025121ad416b7b6ff2f2eb7ecb3961eccbbae7b5ce4b394161ad99c5901199" Jan 26 11:28:46 crc kubenswrapper[4867]: E0126 11:28:46.920965 4867 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d9025121ad416b7b6ff2f2eb7ecb3961eccbbae7b5ce4b394161ad99c5901199\": container with ID starting with d9025121ad416b7b6ff2f2eb7ecb3961eccbbae7b5ce4b394161ad99c5901199 not found: ID does not exist" containerID="d9025121ad416b7b6ff2f2eb7ecb3961eccbbae7b5ce4b394161ad99c5901199" Jan 26 11:28:46 crc kubenswrapper[4867]: I0126 11:28:46.920990 4867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d9025121ad416b7b6ff2f2eb7ecb3961eccbbae7b5ce4b394161ad99c5901199"} err="failed to get container status \"d9025121ad416b7b6ff2f2eb7ecb3961eccbbae7b5ce4b394161ad99c5901199\": rpc error: code = NotFound desc = could not find container \"d9025121ad416b7b6ff2f2eb7ecb3961eccbbae7b5ce4b394161ad99c5901199\": container with ID starting with d9025121ad416b7b6ff2f2eb7ecb3961eccbbae7b5ce4b394161ad99c5901199 not found: ID does not exist" Jan 26 11:28:46 crc kubenswrapper[4867]: I0126 11:28:46.921004 4867 scope.go:117] "RemoveContainer" containerID="c0a7bea27cb1d32235548b7f7e457dbecaf0ac88bdd9866302c758680990d4af" Jan 26 11:28:46 crc kubenswrapper[4867]: E0126 11:28:46.921269 4867 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"c0a7bea27cb1d32235548b7f7e457dbecaf0ac88bdd9866302c758680990d4af\": container with ID starting with c0a7bea27cb1d32235548b7f7e457dbecaf0ac88bdd9866302c758680990d4af not found: ID does not exist" containerID="c0a7bea27cb1d32235548b7f7e457dbecaf0ac88bdd9866302c758680990d4af" Jan 26 11:28:46 crc kubenswrapper[4867]: I0126 11:28:46.921291 4867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c0a7bea27cb1d32235548b7f7e457dbecaf0ac88bdd9866302c758680990d4af"} err="failed to get container status \"c0a7bea27cb1d32235548b7f7e457dbecaf0ac88bdd9866302c758680990d4af\": rpc error: code = NotFound desc = could not find container \"c0a7bea27cb1d32235548b7f7e457dbecaf0ac88bdd9866302c758680990d4af\": container with ID starting with c0a7bea27cb1d32235548b7f7e457dbecaf0ac88bdd9866302c758680990d4af not found: ID does not exist" Jan 26 11:28:46 crc kubenswrapper[4867]: I0126 11:28:46.921305 4867 scope.go:117] "RemoveContainer" containerID="99df8b0b5fa7178489716f40f806caff969c5045ab573ae343b222ed80bf0799" Jan 26 11:28:46 crc kubenswrapper[4867]: E0126 11:28:46.921543 4867 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"99df8b0b5fa7178489716f40f806caff969c5045ab573ae343b222ed80bf0799\": container with ID starting with 99df8b0b5fa7178489716f40f806caff969c5045ab573ae343b222ed80bf0799 not found: ID does not exist" containerID="99df8b0b5fa7178489716f40f806caff969c5045ab573ae343b222ed80bf0799" Jan 26 11:28:46 crc kubenswrapper[4867]: I0126 11:28:46.921578 4867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"99df8b0b5fa7178489716f40f806caff969c5045ab573ae343b222ed80bf0799"} err="failed to get container status \"99df8b0b5fa7178489716f40f806caff969c5045ab573ae343b222ed80bf0799\": rpc error: code = NotFound desc = could not find container \"99df8b0b5fa7178489716f40f806caff969c5045ab573ae343b222ed80bf0799\": container with ID 
starting with 99df8b0b5fa7178489716f40f806caff969c5045ab573ae343b222ed80bf0799 not found: ID does not exist" Jan 26 11:28:46 crc kubenswrapper[4867]: I0126 11:28:46.921607 4867 scope.go:117] "RemoveContainer" containerID="192f11212d63cdabfda1649589946ca65e5c4a38805c85b94e4277f2e8c7bf57" Jan 26 11:28:46 crc kubenswrapper[4867]: E0126 11:28:46.921941 4867 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"192f11212d63cdabfda1649589946ca65e5c4a38805c85b94e4277f2e8c7bf57\": container with ID starting with 192f11212d63cdabfda1649589946ca65e5c4a38805c85b94e4277f2e8c7bf57 not found: ID does not exist" containerID="192f11212d63cdabfda1649589946ca65e5c4a38805c85b94e4277f2e8c7bf57" Jan 26 11:28:46 crc kubenswrapper[4867]: I0126 11:28:46.921965 4867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"192f11212d63cdabfda1649589946ca65e5c4a38805c85b94e4277f2e8c7bf57"} err="failed to get container status \"192f11212d63cdabfda1649589946ca65e5c4a38805c85b94e4277f2e8c7bf57\": rpc error: code = NotFound desc = could not find container \"192f11212d63cdabfda1649589946ca65e5c4a38805c85b94e4277f2e8c7bf57\": container with ID starting with 192f11212d63cdabfda1649589946ca65e5c4a38805c85b94e4277f2e8c7bf57 not found: ID does not exist" Jan 26 11:28:46 crc kubenswrapper[4867]: I0126 11:28:46.921985 4867 scope.go:117] "RemoveContainer" containerID="388ea9d9185ed0fed9cbb0da08ab9aa6fcf1d81192fe2646cd51b54245c4e34b" Jan 26 11:28:46 crc kubenswrapper[4867]: E0126 11:28:46.922294 4867 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"388ea9d9185ed0fed9cbb0da08ab9aa6fcf1d81192fe2646cd51b54245c4e34b\": container with ID starting with 388ea9d9185ed0fed9cbb0da08ab9aa6fcf1d81192fe2646cd51b54245c4e34b not found: ID does not exist" containerID="388ea9d9185ed0fed9cbb0da08ab9aa6fcf1d81192fe2646cd51b54245c4e34b" Jan 26 
11:28:46 crc kubenswrapper[4867]: I0126 11:28:46.922366 4867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"388ea9d9185ed0fed9cbb0da08ab9aa6fcf1d81192fe2646cd51b54245c4e34b"} err="failed to get container status \"388ea9d9185ed0fed9cbb0da08ab9aa6fcf1d81192fe2646cd51b54245c4e34b\": rpc error: code = NotFound desc = could not find container \"388ea9d9185ed0fed9cbb0da08ab9aa6fcf1d81192fe2646cd51b54245c4e34b\": container with ID starting with 388ea9d9185ed0fed9cbb0da08ab9aa6fcf1d81192fe2646cd51b54245c4e34b not found: ID does not exist" Jan 26 11:28:46 crc kubenswrapper[4867]: I0126 11:28:46.922392 4867 scope.go:117] "RemoveContainer" containerID="1c6e65025a884869b644ff9fad0c22c39ffb69a73f335c11f33433b010a8e2dc" Jan 26 11:28:46 crc kubenswrapper[4867]: I0126 11:28:46.922820 4867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1c6e65025a884869b644ff9fad0c22c39ffb69a73f335c11f33433b010a8e2dc"} err="failed to get container status \"1c6e65025a884869b644ff9fad0c22c39ffb69a73f335c11f33433b010a8e2dc\": rpc error: code = NotFound desc = could not find container \"1c6e65025a884869b644ff9fad0c22c39ffb69a73f335c11f33433b010a8e2dc\": container with ID starting with 1c6e65025a884869b644ff9fad0c22c39ffb69a73f335c11f33433b010a8e2dc not found: ID does not exist" Jan 26 11:28:46 crc kubenswrapper[4867]: I0126 11:28:46.922851 4867 scope.go:117] "RemoveContainer" containerID="5d232dca466c33576da8850254342ae7f79c841e0f08fb49fedf655f3efe3d3d" Jan 26 11:28:46 crc kubenswrapper[4867]: I0126 11:28:46.923180 4867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5d232dca466c33576da8850254342ae7f79c841e0f08fb49fedf655f3efe3d3d"} err="failed to get container status \"5d232dca466c33576da8850254342ae7f79c841e0f08fb49fedf655f3efe3d3d\": rpc error: code = NotFound desc = could not find container 
\"5d232dca466c33576da8850254342ae7f79c841e0f08fb49fedf655f3efe3d3d\": container with ID starting with 5d232dca466c33576da8850254342ae7f79c841e0f08fb49fedf655f3efe3d3d not found: ID does not exist" Jan 26 11:28:46 crc kubenswrapper[4867]: I0126 11:28:46.923209 4867 scope.go:117] "RemoveContainer" containerID="b58f62893fb0fc0bd355ef42fb5f15830912cfd3a03b5266cb3c3171dbb8bcb0" Jan 26 11:28:46 crc kubenswrapper[4867]: I0126 11:28:46.923469 4867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b58f62893fb0fc0bd355ef42fb5f15830912cfd3a03b5266cb3c3171dbb8bcb0"} err="failed to get container status \"b58f62893fb0fc0bd355ef42fb5f15830912cfd3a03b5266cb3c3171dbb8bcb0\": rpc error: code = NotFound desc = could not find container \"b58f62893fb0fc0bd355ef42fb5f15830912cfd3a03b5266cb3c3171dbb8bcb0\": container with ID starting with b58f62893fb0fc0bd355ef42fb5f15830912cfd3a03b5266cb3c3171dbb8bcb0 not found: ID does not exist" Jan 26 11:28:46 crc kubenswrapper[4867]: I0126 11:28:46.923491 4867 scope.go:117] "RemoveContainer" containerID="adb68b135bd42f78556598af17aa21dc7daff1246e821ca51cb683e46974a85a" Jan 26 11:28:46 crc kubenswrapper[4867]: I0126 11:28:46.923693 4867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"adb68b135bd42f78556598af17aa21dc7daff1246e821ca51cb683e46974a85a"} err="failed to get container status \"adb68b135bd42f78556598af17aa21dc7daff1246e821ca51cb683e46974a85a\": rpc error: code = NotFound desc = could not find container \"adb68b135bd42f78556598af17aa21dc7daff1246e821ca51cb683e46974a85a\": container with ID starting with adb68b135bd42f78556598af17aa21dc7daff1246e821ca51cb683e46974a85a not found: ID does not exist" Jan 26 11:28:46 crc kubenswrapper[4867]: I0126 11:28:46.923716 4867 scope.go:117] "RemoveContainer" containerID="30fe31f1d314dace6e6ee0bc31b2127ce6e6db198975c78505b0f6f07aa6f30c" Jan 26 11:28:46 crc kubenswrapper[4867]: I0126 11:28:46.923970 4867 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"30fe31f1d314dace6e6ee0bc31b2127ce6e6db198975c78505b0f6f07aa6f30c"} err="failed to get container status \"30fe31f1d314dace6e6ee0bc31b2127ce6e6db198975c78505b0f6f07aa6f30c\": rpc error: code = NotFound desc = could not find container \"30fe31f1d314dace6e6ee0bc31b2127ce6e6db198975c78505b0f6f07aa6f30c\": container with ID starting with 30fe31f1d314dace6e6ee0bc31b2127ce6e6db198975c78505b0f6f07aa6f30c not found: ID does not exist" Jan 26 11:28:46 crc kubenswrapper[4867]: I0126 11:28:46.923992 4867 scope.go:117] "RemoveContainer" containerID="d9025121ad416b7b6ff2f2eb7ecb3961eccbbae7b5ce4b394161ad99c5901199" Jan 26 11:28:46 crc kubenswrapper[4867]: I0126 11:28:46.924415 4867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d9025121ad416b7b6ff2f2eb7ecb3961eccbbae7b5ce4b394161ad99c5901199"} err="failed to get container status \"d9025121ad416b7b6ff2f2eb7ecb3961eccbbae7b5ce4b394161ad99c5901199\": rpc error: code = NotFound desc = could not find container \"d9025121ad416b7b6ff2f2eb7ecb3961eccbbae7b5ce4b394161ad99c5901199\": container with ID starting with d9025121ad416b7b6ff2f2eb7ecb3961eccbbae7b5ce4b394161ad99c5901199 not found: ID does not exist" Jan 26 11:28:46 crc kubenswrapper[4867]: I0126 11:28:46.924436 4867 scope.go:117] "RemoveContainer" containerID="c0a7bea27cb1d32235548b7f7e457dbecaf0ac88bdd9866302c758680990d4af" Jan 26 11:28:46 crc kubenswrapper[4867]: I0126 11:28:46.925509 4867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c0a7bea27cb1d32235548b7f7e457dbecaf0ac88bdd9866302c758680990d4af"} err="failed to get container status \"c0a7bea27cb1d32235548b7f7e457dbecaf0ac88bdd9866302c758680990d4af\": rpc error: code = NotFound desc = could not find container \"c0a7bea27cb1d32235548b7f7e457dbecaf0ac88bdd9866302c758680990d4af\": container with ID starting with 
c0a7bea27cb1d32235548b7f7e457dbecaf0ac88bdd9866302c758680990d4af not found: ID does not exist" Jan 26 11:28:46 crc kubenswrapper[4867]: I0126 11:28:46.925536 4867 scope.go:117] "RemoveContainer" containerID="99df8b0b5fa7178489716f40f806caff969c5045ab573ae343b222ed80bf0799" Jan 26 11:28:46 crc kubenswrapper[4867]: I0126 11:28:46.925760 4867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"99df8b0b5fa7178489716f40f806caff969c5045ab573ae343b222ed80bf0799"} err="failed to get container status \"99df8b0b5fa7178489716f40f806caff969c5045ab573ae343b222ed80bf0799\": rpc error: code = NotFound desc = could not find container \"99df8b0b5fa7178489716f40f806caff969c5045ab573ae343b222ed80bf0799\": container with ID starting with 99df8b0b5fa7178489716f40f806caff969c5045ab573ae343b222ed80bf0799 not found: ID does not exist" Jan 26 11:28:46 crc kubenswrapper[4867]: I0126 11:28:46.925781 4867 scope.go:117] "RemoveContainer" containerID="192f11212d63cdabfda1649589946ca65e5c4a38805c85b94e4277f2e8c7bf57" Jan 26 11:28:46 crc kubenswrapper[4867]: I0126 11:28:46.926000 4867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"192f11212d63cdabfda1649589946ca65e5c4a38805c85b94e4277f2e8c7bf57"} err="failed to get container status \"192f11212d63cdabfda1649589946ca65e5c4a38805c85b94e4277f2e8c7bf57\": rpc error: code = NotFound desc = could not find container \"192f11212d63cdabfda1649589946ca65e5c4a38805c85b94e4277f2e8c7bf57\": container with ID starting with 192f11212d63cdabfda1649589946ca65e5c4a38805c85b94e4277f2e8c7bf57 not found: ID does not exist" Jan 26 11:28:46 crc kubenswrapper[4867]: I0126 11:28:46.926020 4867 scope.go:117] "RemoveContainer" containerID="388ea9d9185ed0fed9cbb0da08ab9aa6fcf1d81192fe2646cd51b54245c4e34b" Jan 26 11:28:46 crc kubenswrapper[4867]: I0126 11:28:46.926198 4867 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"388ea9d9185ed0fed9cbb0da08ab9aa6fcf1d81192fe2646cd51b54245c4e34b"} err="failed to get container status \"388ea9d9185ed0fed9cbb0da08ab9aa6fcf1d81192fe2646cd51b54245c4e34b\": rpc error: code = NotFound desc = could not find container \"388ea9d9185ed0fed9cbb0da08ab9aa6fcf1d81192fe2646cd51b54245c4e34b\": container with ID starting with 388ea9d9185ed0fed9cbb0da08ab9aa6fcf1d81192fe2646cd51b54245c4e34b not found: ID does not exist" Jan 26 11:28:46 crc kubenswrapper[4867]: I0126 11:28:46.926212 4867 scope.go:117] "RemoveContainer" containerID="1c6e65025a884869b644ff9fad0c22c39ffb69a73f335c11f33433b010a8e2dc" Jan 26 11:28:46 crc kubenswrapper[4867]: I0126 11:28:46.926497 4867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1c6e65025a884869b644ff9fad0c22c39ffb69a73f335c11f33433b010a8e2dc"} err="failed to get container status \"1c6e65025a884869b644ff9fad0c22c39ffb69a73f335c11f33433b010a8e2dc\": rpc error: code = NotFound desc = could not find container \"1c6e65025a884869b644ff9fad0c22c39ffb69a73f335c11f33433b010a8e2dc\": container with ID starting with 1c6e65025a884869b644ff9fad0c22c39ffb69a73f335c11f33433b010a8e2dc not found: ID does not exist" Jan 26 11:28:46 crc kubenswrapper[4867]: I0126 11:28:46.926518 4867 scope.go:117] "RemoveContainer" containerID="5d232dca466c33576da8850254342ae7f79c841e0f08fb49fedf655f3efe3d3d" Jan 26 11:28:46 crc kubenswrapper[4867]: I0126 11:28:46.926811 4867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5d232dca466c33576da8850254342ae7f79c841e0f08fb49fedf655f3efe3d3d"} err="failed to get container status \"5d232dca466c33576da8850254342ae7f79c841e0f08fb49fedf655f3efe3d3d\": rpc error: code = NotFound desc = could not find container \"5d232dca466c33576da8850254342ae7f79c841e0f08fb49fedf655f3efe3d3d\": container with ID starting with 5d232dca466c33576da8850254342ae7f79c841e0f08fb49fedf655f3efe3d3d not found: ID does not 
exist" Jan 26 11:28:46 crc kubenswrapper[4867]: I0126 11:28:46.926831 4867 scope.go:117] "RemoveContainer" containerID="b58f62893fb0fc0bd355ef42fb5f15830912cfd3a03b5266cb3c3171dbb8bcb0" Jan 26 11:28:46 crc kubenswrapper[4867]: I0126 11:28:46.927056 4867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b58f62893fb0fc0bd355ef42fb5f15830912cfd3a03b5266cb3c3171dbb8bcb0"} err="failed to get container status \"b58f62893fb0fc0bd355ef42fb5f15830912cfd3a03b5266cb3c3171dbb8bcb0\": rpc error: code = NotFound desc = could not find container \"b58f62893fb0fc0bd355ef42fb5f15830912cfd3a03b5266cb3c3171dbb8bcb0\": container with ID starting with b58f62893fb0fc0bd355ef42fb5f15830912cfd3a03b5266cb3c3171dbb8bcb0 not found: ID does not exist" Jan 26 11:28:46 crc kubenswrapper[4867]: I0126 11:28:46.927096 4867 scope.go:117] "RemoveContainer" containerID="adb68b135bd42f78556598af17aa21dc7daff1246e821ca51cb683e46974a85a" Jan 26 11:28:46 crc kubenswrapper[4867]: I0126 11:28:46.927504 4867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"adb68b135bd42f78556598af17aa21dc7daff1246e821ca51cb683e46974a85a"} err="failed to get container status \"adb68b135bd42f78556598af17aa21dc7daff1246e821ca51cb683e46974a85a\": rpc error: code = NotFound desc = could not find container \"adb68b135bd42f78556598af17aa21dc7daff1246e821ca51cb683e46974a85a\": container with ID starting with adb68b135bd42f78556598af17aa21dc7daff1246e821ca51cb683e46974a85a not found: ID does not exist" Jan 26 11:28:46 crc kubenswrapper[4867]: I0126 11:28:46.927526 4867 scope.go:117] "RemoveContainer" containerID="30fe31f1d314dace6e6ee0bc31b2127ce6e6db198975c78505b0f6f07aa6f30c" Jan 26 11:28:46 crc kubenswrapper[4867]: I0126 11:28:46.927928 4867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"30fe31f1d314dace6e6ee0bc31b2127ce6e6db198975c78505b0f6f07aa6f30c"} err="failed to get container status 
\"30fe31f1d314dace6e6ee0bc31b2127ce6e6db198975c78505b0f6f07aa6f30c\": rpc error: code = NotFound desc = could not find container \"30fe31f1d314dace6e6ee0bc31b2127ce6e6db198975c78505b0f6f07aa6f30c\": container with ID starting with 30fe31f1d314dace6e6ee0bc31b2127ce6e6db198975c78505b0f6f07aa6f30c not found: ID does not exist" Jan 26 11:28:46 crc kubenswrapper[4867]: I0126 11:28:46.927951 4867 scope.go:117] "RemoveContainer" containerID="d9025121ad416b7b6ff2f2eb7ecb3961eccbbae7b5ce4b394161ad99c5901199" Jan 26 11:28:46 crc kubenswrapper[4867]: I0126 11:28:46.928206 4867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d9025121ad416b7b6ff2f2eb7ecb3961eccbbae7b5ce4b394161ad99c5901199"} err="failed to get container status \"d9025121ad416b7b6ff2f2eb7ecb3961eccbbae7b5ce4b394161ad99c5901199\": rpc error: code = NotFound desc = could not find container \"d9025121ad416b7b6ff2f2eb7ecb3961eccbbae7b5ce4b394161ad99c5901199\": container with ID starting with d9025121ad416b7b6ff2f2eb7ecb3961eccbbae7b5ce4b394161ad99c5901199 not found: ID does not exist" Jan 26 11:28:46 crc kubenswrapper[4867]: I0126 11:28:46.928245 4867 scope.go:117] "RemoveContainer" containerID="c0a7bea27cb1d32235548b7f7e457dbecaf0ac88bdd9866302c758680990d4af" Jan 26 11:28:46 crc kubenswrapper[4867]: I0126 11:28:46.928575 4867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c0a7bea27cb1d32235548b7f7e457dbecaf0ac88bdd9866302c758680990d4af"} err="failed to get container status \"c0a7bea27cb1d32235548b7f7e457dbecaf0ac88bdd9866302c758680990d4af\": rpc error: code = NotFound desc = could not find container \"c0a7bea27cb1d32235548b7f7e457dbecaf0ac88bdd9866302c758680990d4af\": container with ID starting with c0a7bea27cb1d32235548b7f7e457dbecaf0ac88bdd9866302c758680990d4af not found: ID does not exist" Jan 26 11:28:46 crc kubenswrapper[4867]: I0126 11:28:46.928595 4867 scope.go:117] "RemoveContainer" 
containerID="99df8b0b5fa7178489716f40f806caff969c5045ab573ae343b222ed80bf0799" Jan 26 11:28:46 crc kubenswrapper[4867]: I0126 11:28:46.928870 4867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"99df8b0b5fa7178489716f40f806caff969c5045ab573ae343b222ed80bf0799"} err="failed to get container status \"99df8b0b5fa7178489716f40f806caff969c5045ab573ae343b222ed80bf0799\": rpc error: code = NotFound desc = could not find container \"99df8b0b5fa7178489716f40f806caff969c5045ab573ae343b222ed80bf0799\": container with ID starting with 99df8b0b5fa7178489716f40f806caff969c5045ab573ae343b222ed80bf0799 not found: ID does not exist" Jan 26 11:28:46 crc kubenswrapper[4867]: I0126 11:28:46.928895 4867 scope.go:117] "RemoveContainer" containerID="192f11212d63cdabfda1649589946ca65e5c4a38805c85b94e4277f2e8c7bf57" Jan 26 11:28:46 crc kubenswrapper[4867]: I0126 11:28:46.929173 4867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"192f11212d63cdabfda1649589946ca65e5c4a38805c85b94e4277f2e8c7bf57"} err="failed to get container status \"192f11212d63cdabfda1649589946ca65e5c4a38805c85b94e4277f2e8c7bf57\": rpc error: code = NotFound desc = could not find container \"192f11212d63cdabfda1649589946ca65e5c4a38805c85b94e4277f2e8c7bf57\": container with ID starting with 192f11212d63cdabfda1649589946ca65e5c4a38805c85b94e4277f2e8c7bf57 not found: ID does not exist" Jan 26 11:28:46 crc kubenswrapper[4867]: I0126 11:28:46.929329 4867 scope.go:117] "RemoveContainer" containerID="388ea9d9185ed0fed9cbb0da08ab9aa6fcf1d81192fe2646cd51b54245c4e34b" Jan 26 11:28:46 crc kubenswrapper[4867]: I0126 11:28:46.929612 4867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"388ea9d9185ed0fed9cbb0da08ab9aa6fcf1d81192fe2646cd51b54245c4e34b"} err="failed to get container status \"388ea9d9185ed0fed9cbb0da08ab9aa6fcf1d81192fe2646cd51b54245c4e34b\": rpc error: code = NotFound desc = could 
not find container \"388ea9d9185ed0fed9cbb0da08ab9aa6fcf1d81192fe2646cd51b54245c4e34b\": container with ID starting with 388ea9d9185ed0fed9cbb0da08ab9aa6fcf1d81192fe2646cd51b54245c4e34b not found: ID does not exist" Jan 26 11:28:46 crc kubenswrapper[4867]: I0126 11:28:46.929634 4867 scope.go:117] "RemoveContainer" containerID="1c6e65025a884869b644ff9fad0c22c39ffb69a73f335c11f33433b010a8e2dc" Jan 26 11:28:46 crc kubenswrapper[4867]: I0126 11:28:46.929995 4867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1c6e65025a884869b644ff9fad0c22c39ffb69a73f335c11f33433b010a8e2dc"} err="failed to get container status \"1c6e65025a884869b644ff9fad0c22c39ffb69a73f335c11f33433b010a8e2dc\": rpc error: code = NotFound desc = could not find container \"1c6e65025a884869b644ff9fad0c22c39ffb69a73f335c11f33433b010a8e2dc\": container with ID starting with 1c6e65025a884869b644ff9fad0c22c39ffb69a73f335c11f33433b010a8e2dc not found: ID does not exist" Jan 26 11:28:46 crc kubenswrapper[4867]: I0126 11:28:46.930019 4867 scope.go:117] "RemoveContainer" containerID="5d232dca466c33576da8850254342ae7f79c841e0f08fb49fedf655f3efe3d3d" Jan 26 11:28:46 crc kubenswrapper[4867]: I0126 11:28:46.930327 4867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5d232dca466c33576da8850254342ae7f79c841e0f08fb49fedf655f3efe3d3d"} err="failed to get container status \"5d232dca466c33576da8850254342ae7f79c841e0f08fb49fedf655f3efe3d3d\": rpc error: code = NotFound desc = could not find container \"5d232dca466c33576da8850254342ae7f79c841e0f08fb49fedf655f3efe3d3d\": container with ID starting with 5d232dca466c33576da8850254342ae7f79c841e0f08fb49fedf655f3efe3d3d not found: ID does not exist" Jan 26 11:28:46 crc kubenswrapper[4867]: I0126 11:28:46.930353 4867 scope.go:117] "RemoveContainer" containerID="b58f62893fb0fc0bd355ef42fb5f15830912cfd3a03b5266cb3c3171dbb8bcb0" Jan 26 11:28:46 crc kubenswrapper[4867]: I0126 
11:28:46.930633 4867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b58f62893fb0fc0bd355ef42fb5f15830912cfd3a03b5266cb3c3171dbb8bcb0"} err="failed to get container status \"b58f62893fb0fc0bd355ef42fb5f15830912cfd3a03b5266cb3c3171dbb8bcb0\": rpc error: code = NotFound desc = could not find container \"b58f62893fb0fc0bd355ef42fb5f15830912cfd3a03b5266cb3c3171dbb8bcb0\": container with ID starting with b58f62893fb0fc0bd355ef42fb5f15830912cfd3a03b5266cb3c3171dbb8bcb0 not found: ID does not exist" Jan 26 11:28:46 crc kubenswrapper[4867]: I0126 11:28:46.930654 4867 scope.go:117] "RemoveContainer" containerID="adb68b135bd42f78556598af17aa21dc7daff1246e821ca51cb683e46974a85a" Jan 26 11:28:46 crc kubenswrapper[4867]: I0126 11:28:46.930928 4867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"adb68b135bd42f78556598af17aa21dc7daff1246e821ca51cb683e46974a85a"} err="failed to get container status \"adb68b135bd42f78556598af17aa21dc7daff1246e821ca51cb683e46974a85a\": rpc error: code = NotFound desc = could not find container \"adb68b135bd42f78556598af17aa21dc7daff1246e821ca51cb683e46974a85a\": container with ID starting with adb68b135bd42f78556598af17aa21dc7daff1246e821ca51cb683e46974a85a not found: ID does not exist" Jan 26 11:28:46 crc kubenswrapper[4867]: I0126 11:28:46.930948 4867 scope.go:117] "RemoveContainer" containerID="30fe31f1d314dace6e6ee0bc31b2127ce6e6db198975c78505b0f6f07aa6f30c" Jan 26 11:28:46 crc kubenswrapper[4867]: I0126 11:28:46.931184 4867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"30fe31f1d314dace6e6ee0bc31b2127ce6e6db198975c78505b0f6f07aa6f30c"} err="failed to get container status \"30fe31f1d314dace6e6ee0bc31b2127ce6e6db198975c78505b0f6f07aa6f30c\": rpc error: code = NotFound desc = could not find container \"30fe31f1d314dace6e6ee0bc31b2127ce6e6db198975c78505b0f6f07aa6f30c\": container with ID starting with 
30fe31f1d314dace6e6ee0bc31b2127ce6e6db198975c78505b0f6f07aa6f30c not found: ID does not exist" Jan 26 11:28:46 crc kubenswrapper[4867]: I0126 11:28:46.931207 4867 scope.go:117] "RemoveContainer" containerID="d9025121ad416b7b6ff2f2eb7ecb3961eccbbae7b5ce4b394161ad99c5901199" Jan 26 11:28:46 crc kubenswrapper[4867]: I0126 11:28:46.931620 4867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d9025121ad416b7b6ff2f2eb7ecb3961eccbbae7b5ce4b394161ad99c5901199"} err="failed to get container status \"d9025121ad416b7b6ff2f2eb7ecb3961eccbbae7b5ce4b394161ad99c5901199\": rpc error: code = NotFound desc = could not find container \"d9025121ad416b7b6ff2f2eb7ecb3961eccbbae7b5ce4b394161ad99c5901199\": container with ID starting with d9025121ad416b7b6ff2f2eb7ecb3961eccbbae7b5ce4b394161ad99c5901199 not found: ID does not exist" Jan 26 11:28:46 crc kubenswrapper[4867]: I0126 11:28:46.931639 4867 scope.go:117] "RemoveContainer" containerID="c0a7bea27cb1d32235548b7f7e457dbecaf0ac88bdd9866302c758680990d4af" Jan 26 11:28:46 crc kubenswrapper[4867]: I0126 11:28:46.931961 4867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c0a7bea27cb1d32235548b7f7e457dbecaf0ac88bdd9866302c758680990d4af"} err="failed to get container status \"c0a7bea27cb1d32235548b7f7e457dbecaf0ac88bdd9866302c758680990d4af\": rpc error: code = NotFound desc = could not find container \"c0a7bea27cb1d32235548b7f7e457dbecaf0ac88bdd9866302c758680990d4af\": container with ID starting with c0a7bea27cb1d32235548b7f7e457dbecaf0ac88bdd9866302c758680990d4af not found: ID does not exist" Jan 26 11:28:46 crc kubenswrapper[4867]: I0126 11:28:46.931982 4867 scope.go:117] "RemoveContainer" containerID="99df8b0b5fa7178489716f40f806caff969c5045ab573ae343b222ed80bf0799" Jan 26 11:28:46 crc kubenswrapper[4867]: I0126 11:28:46.932285 4867 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"99df8b0b5fa7178489716f40f806caff969c5045ab573ae343b222ed80bf0799"} err="failed to get container status \"99df8b0b5fa7178489716f40f806caff969c5045ab573ae343b222ed80bf0799\": rpc error: code = NotFound desc = could not find container \"99df8b0b5fa7178489716f40f806caff969c5045ab573ae343b222ed80bf0799\": container with ID starting with 99df8b0b5fa7178489716f40f806caff969c5045ab573ae343b222ed80bf0799 not found: ID does not exist" Jan 26 11:28:46 crc kubenswrapper[4867]: I0126 11:28:46.932306 4867 scope.go:117] "RemoveContainer" containerID="192f11212d63cdabfda1649589946ca65e5c4a38805c85b94e4277f2e8c7bf57" Jan 26 11:28:46 crc kubenswrapper[4867]: I0126 11:28:46.932554 4867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"192f11212d63cdabfda1649589946ca65e5c4a38805c85b94e4277f2e8c7bf57"} err="failed to get container status \"192f11212d63cdabfda1649589946ca65e5c4a38805c85b94e4277f2e8c7bf57\": rpc error: code = NotFound desc = could not find container \"192f11212d63cdabfda1649589946ca65e5c4a38805c85b94e4277f2e8c7bf57\": container with ID starting with 192f11212d63cdabfda1649589946ca65e5c4a38805c85b94e4277f2e8c7bf57 not found: ID does not exist" Jan 26 11:28:46 crc kubenswrapper[4867]: I0126 11:28:46.932576 4867 scope.go:117] "RemoveContainer" containerID="388ea9d9185ed0fed9cbb0da08ab9aa6fcf1d81192fe2646cd51b54245c4e34b" Jan 26 11:28:46 crc kubenswrapper[4867]: I0126 11:28:46.932834 4867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"388ea9d9185ed0fed9cbb0da08ab9aa6fcf1d81192fe2646cd51b54245c4e34b"} err="failed to get container status \"388ea9d9185ed0fed9cbb0da08ab9aa6fcf1d81192fe2646cd51b54245c4e34b\": rpc error: code = NotFound desc = could not find container \"388ea9d9185ed0fed9cbb0da08ab9aa6fcf1d81192fe2646cd51b54245c4e34b\": container with ID starting with 388ea9d9185ed0fed9cbb0da08ab9aa6fcf1d81192fe2646cd51b54245c4e34b not found: ID does not 
exist" Jan 26 11:28:46 crc kubenswrapper[4867]: I0126 11:28:46.932856 4867 scope.go:117] "RemoveContainer" containerID="1c6e65025a884869b644ff9fad0c22c39ffb69a73f335c11f33433b010a8e2dc" Jan 26 11:28:46 crc kubenswrapper[4867]: I0126 11:28:46.933209 4867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1c6e65025a884869b644ff9fad0c22c39ffb69a73f335c11f33433b010a8e2dc"} err="failed to get container status \"1c6e65025a884869b644ff9fad0c22c39ffb69a73f335c11f33433b010a8e2dc\": rpc error: code = NotFound desc = could not find container \"1c6e65025a884869b644ff9fad0c22c39ffb69a73f335c11f33433b010a8e2dc\": container with ID starting with 1c6e65025a884869b644ff9fad0c22c39ffb69a73f335c11f33433b010a8e2dc not found: ID does not exist" Jan 26 11:28:47 crc kubenswrapper[4867]: I0126 11:28:47.020424 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="cert-manager/cert-manager-webhook-687f57d79b-rptrs" Jan 26 11:28:47 crc kubenswrapper[4867]: I0126 11:28:47.704123 4867 generic.go:334] "Generic (PLEG): container finished" podID="5afcc9eb-1b45-472f-8df2-6b8ed84db736" containerID="6d66ca937a7f5991b74001b746f56f63bbbef630154022d5e01e6854d0080efd" exitCode=0 Jan 26 11:28:47 crc kubenswrapper[4867]: I0126 11:28:47.704185 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-55w5m" event={"ID":"5afcc9eb-1b45-472f-8df2-6b8ed84db736","Type":"ContainerDied","Data":"6d66ca937a7f5991b74001b746f56f63bbbef630154022d5e01e6854d0080efd"} Jan 26 11:28:47 crc kubenswrapper[4867]: I0126 11:28:47.704231 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-55w5m" event={"ID":"5afcc9eb-1b45-472f-8df2-6b8ed84db736","Type":"ContainerStarted","Data":"70305849b68d730dbaba8777a41b5ae1a2cd6af65cbe6664fd73246ed8bf51af"} Jan 26 11:28:48 crc kubenswrapper[4867]: I0126 11:28:48.573735 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="4a3be637-cf04-4c55-bf72-67fdad83cc44" path="/var/lib/kubelet/pods/4a3be637-cf04-4c55-bf72-67fdad83cc44/volumes" Jan 26 11:28:48 crc kubenswrapper[4867]: I0126 11:28:48.720548 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-55w5m" event={"ID":"5afcc9eb-1b45-472f-8df2-6b8ed84db736","Type":"ContainerStarted","Data":"789932ae9291e4e5b271e5f5802a23414d46acf03a08869104d01e9f9499d848"} Jan 26 11:28:48 crc kubenswrapper[4867]: I0126 11:28:48.720638 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-55w5m" event={"ID":"5afcc9eb-1b45-472f-8df2-6b8ed84db736","Type":"ContainerStarted","Data":"d49ea9a6efe77b8aa29aa3f72f2f3db92d69998890d108a4bcbfe86395162cc9"} Jan 26 11:28:48 crc kubenswrapper[4867]: I0126 11:28:48.720675 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-55w5m" event={"ID":"5afcc9eb-1b45-472f-8df2-6b8ed84db736","Type":"ContainerStarted","Data":"defcf078af25b854e19d670c7ad3ec453e22a81ce608084d7d767c6b9d150810"} Jan 26 11:28:48 crc kubenswrapper[4867]: I0126 11:28:48.720700 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-55w5m" event={"ID":"5afcc9eb-1b45-472f-8df2-6b8ed84db736","Type":"ContainerStarted","Data":"40095ab6f1b34ce269498bcb992b59925534ef9bfcb620820d2abd9bc041d2ce"} Jan 26 11:28:48 crc kubenswrapper[4867]: I0126 11:28:48.720724 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-55w5m" event={"ID":"5afcc9eb-1b45-472f-8df2-6b8ed84db736","Type":"ContainerStarted","Data":"bc1c42db8b887356e9e7c3f91ffb3e3a2eabc74d2efac385825d16620846d74f"} Jan 26 11:28:48 crc kubenswrapper[4867]: I0126 11:28:48.720745 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-55w5m" 
event={"ID":"5afcc9eb-1b45-472f-8df2-6b8ed84db736","Type":"ContainerStarted","Data":"3ec8ad6a8e2ee97788046655c3bd41f36d9705f8f45fad1af23dbe2d08b29904"} Jan 26 11:28:51 crc kubenswrapper[4867]: I0126 11:28:51.745767 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-55w5m" event={"ID":"5afcc9eb-1b45-472f-8df2-6b8ed84db736","Type":"ContainerStarted","Data":"d5458fceb636773c5df8ed5fdfb78bd42ab4eea3fb821df361c64190bac36583"} Jan 26 11:28:53 crc kubenswrapper[4867]: I0126 11:28:53.761563 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-55w5m" event={"ID":"5afcc9eb-1b45-472f-8df2-6b8ed84db736","Type":"ContainerStarted","Data":"0092c931109f080f55e1500b8b5d686150e8bf0638faa7f55f22d896b1014bee"} Jan 26 11:28:53 crc kubenswrapper[4867]: I0126 11:28:53.762100 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-55w5m" Jan 26 11:28:53 crc kubenswrapper[4867]: I0126 11:28:53.762118 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-55w5m" Jan 26 11:28:53 crc kubenswrapper[4867]: I0126 11:28:53.762133 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-55w5m" Jan 26 11:28:53 crc kubenswrapper[4867]: I0126 11:28:53.794487 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-55w5m" Jan 26 11:28:53 crc kubenswrapper[4867]: I0126 11:28:53.799300 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-55w5m" Jan 26 11:28:53 crc kubenswrapper[4867]: I0126 11:28:53.802540 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-55w5m" podStartSLOduration=7.802515683 podStartE2EDuration="7.802515683s" podCreationTimestamp="2026-01-26 
11:28:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 11:28:53.797545228 +0000 UTC m=+683.496120148" watchObservedRunningTime="2026-01-26 11:28:53.802515683 +0000 UTC m=+683.501090593" Jan 26 11:28:59 crc kubenswrapper[4867]: I0126 11:28:59.563877 4867 scope.go:117] "RemoveContainer" containerID="7e93ce40d6288a12790286c1b7deac52b0558ebca01037040f2f116daaed2f03" Jan 26 11:28:59 crc kubenswrapper[4867]: E0126 11:28:59.564902 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-multus pod=multus-hn8xr_openshift-multus(dc37e5d1-ba44-4a54-ac36-ab7cdef17212)\"" pod="openshift-multus/multus-hn8xr" podUID="dc37e5d1-ba44-4a54-ac36-ab7cdef17212" Jan 26 11:29:12 crc kubenswrapper[4867]: I0126 11:29:12.568694 4867 scope.go:117] "RemoveContainer" containerID="7e93ce40d6288a12790286c1b7deac52b0558ebca01037040f2f116daaed2f03" Jan 26 11:29:12 crc kubenswrapper[4867]: I0126 11:29:12.888176 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-hn8xr_dc37e5d1-ba44-4a54-ac36-ab7cdef17212/kube-multus/2.log" Jan 26 11:29:12 crc kubenswrapper[4867]: I0126 11:29:12.889255 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-hn8xr_dc37e5d1-ba44-4a54-ac36-ab7cdef17212/kube-multus/1.log" Jan 26 11:29:12 crc kubenswrapper[4867]: I0126 11:29:12.889327 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-hn8xr" event={"ID":"dc37e5d1-ba44-4a54-ac36-ab7cdef17212","Type":"ContainerStarted","Data":"05c109407e9870cc3ca2fcd6826125ca6aa089d8ecf8693505cc7b49a77bb0e3"} Jan 26 11:29:16 crc kubenswrapper[4867]: I0126 11:29:16.792095 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-55w5m" Jan 26 11:29:20 crc 
kubenswrapper[4867]: I0126 11:29:20.812251 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713t8lrz"] Jan 26 11:29:20 crc kubenswrapper[4867]: I0126 11:29:20.813818 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713t8lrz" Jan 26 11:29:20 crc kubenswrapper[4867]: I0126 11:29:20.816339 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Jan 26 11:29:20 crc kubenswrapper[4867]: I0126 11:29:20.817632 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713t8lrz"] Jan 26 11:29:20 crc kubenswrapper[4867]: I0126 11:29:20.923373 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rzqcw\" (UniqueName: \"kubernetes.io/projected/4d062b30-7ca4-4191-89d4-21c153fbf3dc-kube-api-access-rzqcw\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713t8lrz\" (UID: \"4d062b30-7ca4-4191-89d4-21c153fbf3dc\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713t8lrz" Jan 26 11:29:20 crc kubenswrapper[4867]: I0126 11:29:20.923436 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/4d062b30-7ca4-4191-89d4-21c153fbf3dc-util\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713t8lrz\" (UID: \"4d062b30-7ca4-4191-89d4-21c153fbf3dc\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713t8lrz" Jan 26 11:29:20 crc kubenswrapper[4867]: I0126 11:29:20.923478 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: 
\"kubernetes.io/empty-dir/4d062b30-7ca4-4191-89d4-21c153fbf3dc-bundle\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713t8lrz\" (UID: \"4d062b30-7ca4-4191-89d4-21c153fbf3dc\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713t8lrz" Jan 26 11:29:21 crc kubenswrapper[4867]: I0126 11:29:21.025625 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rzqcw\" (UniqueName: \"kubernetes.io/projected/4d062b30-7ca4-4191-89d4-21c153fbf3dc-kube-api-access-rzqcw\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713t8lrz\" (UID: \"4d062b30-7ca4-4191-89d4-21c153fbf3dc\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713t8lrz" Jan 26 11:29:21 crc kubenswrapper[4867]: I0126 11:29:21.025735 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/4d062b30-7ca4-4191-89d4-21c153fbf3dc-util\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713t8lrz\" (UID: \"4d062b30-7ca4-4191-89d4-21c153fbf3dc\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713t8lrz" Jan 26 11:29:21 crc kubenswrapper[4867]: I0126 11:29:21.025779 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/4d062b30-7ca4-4191-89d4-21c153fbf3dc-bundle\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713t8lrz\" (UID: \"4d062b30-7ca4-4191-89d4-21c153fbf3dc\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713t8lrz" Jan 26 11:29:21 crc kubenswrapper[4867]: I0126 11:29:21.026668 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/4d062b30-7ca4-4191-89d4-21c153fbf3dc-bundle\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713t8lrz\" (UID: 
\"4d062b30-7ca4-4191-89d4-21c153fbf3dc\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713t8lrz" Jan 26 11:29:21 crc kubenswrapper[4867]: I0126 11:29:21.026708 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/4d062b30-7ca4-4191-89d4-21c153fbf3dc-util\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713t8lrz\" (UID: \"4d062b30-7ca4-4191-89d4-21c153fbf3dc\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713t8lrz" Jan 26 11:29:21 crc kubenswrapper[4867]: I0126 11:29:21.053535 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rzqcw\" (UniqueName: \"kubernetes.io/projected/4d062b30-7ca4-4191-89d4-21c153fbf3dc-kube-api-access-rzqcw\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713t8lrz\" (UID: \"4d062b30-7ca4-4191-89d4-21c153fbf3dc\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713t8lrz" Jan 26 11:29:21 crc kubenswrapper[4867]: I0126 11:29:21.136299 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713t8lrz" Jan 26 11:29:21 crc kubenswrapper[4867]: I0126 11:29:21.468372 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713t8lrz"] Jan 26 11:29:21 crc kubenswrapper[4867]: I0126 11:29:21.992371 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713t8lrz" event={"ID":"4d062b30-7ca4-4191-89d4-21c153fbf3dc","Type":"ContainerStarted","Data":"34ea12b8ebf409e42e0705a94893418b181f77fd52d510127a6f186508478385"} Jan 26 11:29:22 crc kubenswrapper[4867]: I0126 11:29:22.999282 4867 generic.go:334] "Generic (PLEG): container finished" podID="4d062b30-7ca4-4191-89d4-21c153fbf3dc" containerID="dfad0912d7d4ab8834f8cf55d24d3af61fb8be7160bd1e9ab00fe01e0ac780ad" exitCode=0 Jan 26 11:29:22 crc kubenswrapper[4867]: I0126 11:29:22.999382 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713t8lrz" event={"ID":"4d062b30-7ca4-4191-89d4-21c153fbf3dc","Type":"ContainerDied","Data":"dfad0912d7d4ab8834f8cf55d24d3af61fb8be7160bd1e9ab00fe01e0ac780ad"} Jan 26 11:29:25 crc kubenswrapper[4867]: I0126 11:29:25.017565 4867 generic.go:334] "Generic (PLEG): container finished" podID="4d062b30-7ca4-4191-89d4-21c153fbf3dc" containerID="742182e914210ce10ede93052159bb87fdf5ac78c7a50efd4d27036302f06255" exitCode=0 Jan 26 11:29:25 crc kubenswrapper[4867]: I0126 11:29:25.017652 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713t8lrz" event={"ID":"4d062b30-7ca4-4191-89d4-21c153fbf3dc","Type":"ContainerDied","Data":"742182e914210ce10ede93052159bb87fdf5ac78c7a50efd4d27036302f06255"} Jan 26 11:29:26 crc kubenswrapper[4867]: I0126 11:29:26.027949 4867 
generic.go:334] "Generic (PLEG): container finished" podID="4d062b30-7ca4-4191-89d4-21c153fbf3dc" containerID="c110e5fba33434162906c201d3f0814c6b9775d21b65a7982b878d9a98b0f3a3" exitCode=0 Jan 26 11:29:26 crc kubenswrapper[4867]: I0126 11:29:26.028000 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713t8lrz" event={"ID":"4d062b30-7ca4-4191-89d4-21c153fbf3dc","Type":"ContainerDied","Data":"c110e5fba33434162906c201d3f0814c6b9775d21b65a7982b878d9a98b0f3a3"} Jan 26 11:29:27 crc kubenswrapper[4867]: I0126 11:29:27.340789 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713t8lrz" Jan 26 11:29:27 crc kubenswrapper[4867]: I0126 11:29:27.528656 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/4d062b30-7ca4-4191-89d4-21c153fbf3dc-util\") pod \"4d062b30-7ca4-4191-89d4-21c153fbf3dc\" (UID: \"4d062b30-7ca4-4191-89d4-21c153fbf3dc\") " Jan 26 11:29:27 crc kubenswrapper[4867]: I0126 11:29:27.528712 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rzqcw\" (UniqueName: \"kubernetes.io/projected/4d062b30-7ca4-4191-89d4-21c153fbf3dc-kube-api-access-rzqcw\") pod \"4d062b30-7ca4-4191-89d4-21c153fbf3dc\" (UID: \"4d062b30-7ca4-4191-89d4-21c153fbf3dc\") " Jan 26 11:29:27 crc kubenswrapper[4867]: I0126 11:29:27.528740 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/4d062b30-7ca4-4191-89d4-21c153fbf3dc-bundle\") pod \"4d062b30-7ca4-4191-89d4-21c153fbf3dc\" (UID: \"4d062b30-7ca4-4191-89d4-21c153fbf3dc\") " Jan 26 11:29:27 crc kubenswrapper[4867]: I0126 11:29:27.529623 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/4d062b30-7ca4-4191-89d4-21c153fbf3dc-bundle" (OuterVolumeSpecName: "bundle") pod "4d062b30-7ca4-4191-89d4-21c153fbf3dc" (UID: "4d062b30-7ca4-4191-89d4-21c153fbf3dc"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 11:29:27 crc kubenswrapper[4867]: I0126 11:29:27.538127 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4d062b30-7ca4-4191-89d4-21c153fbf3dc-kube-api-access-rzqcw" (OuterVolumeSpecName: "kube-api-access-rzqcw") pod "4d062b30-7ca4-4191-89d4-21c153fbf3dc" (UID: "4d062b30-7ca4-4191-89d4-21c153fbf3dc"). InnerVolumeSpecName "kube-api-access-rzqcw". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 11:29:27 crc kubenswrapper[4867]: I0126 11:29:27.560897 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4d062b30-7ca4-4191-89d4-21c153fbf3dc-util" (OuterVolumeSpecName: "util") pod "4d062b30-7ca4-4191-89d4-21c153fbf3dc" (UID: "4d062b30-7ca4-4191-89d4-21c153fbf3dc"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 11:29:27 crc kubenswrapper[4867]: I0126 11:29:27.630302 4867 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/4d062b30-7ca4-4191-89d4-21c153fbf3dc-util\") on node \"crc\" DevicePath \"\"" Jan 26 11:29:27 crc kubenswrapper[4867]: I0126 11:29:27.630337 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rzqcw\" (UniqueName: \"kubernetes.io/projected/4d062b30-7ca4-4191-89d4-21c153fbf3dc-kube-api-access-rzqcw\") on node \"crc\" DevicePath \"\"" Jan 26 11:29:27 crc kubenswrapper[4867]: I0126 11:29:27.630347 4867 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/4d062b30-7ca4-4191-89d4-21c153fbf3dc-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 11:29:28 crc kubenswrapper[4867]: I0126 11:29:28.043730 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713t8lrz" event={"ID":"4d062b30-7ca4-4191-89d4-21c153fbf3dc","Type":"ContainerDied","Data":"34ea12b8ebf409e42e0705a94893418b181f77fd52d510127a6f186508478385"} Jan 26 11:29:28 crc kubenswrapper[4867]: I0126 11:29:28.044363 4867 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="34ea12b8ebf409e42e0705a94893418b181f77fd52d510127a6f186508478385" Jan 26 11:29:28 crc kubenswrapper[4867]: I0126 11:29:28.043792 4867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713t8lrz" Jan 26 11:29:29 crc kubenswrapper[4867]: I0126 11:29:29.077324 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-operator-646758c888-wqlhb"] Jan 26 11:29:29 crc kubenswrapper[4867]: E0126 11:29:29.077619 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4d062b30-7ca4-4191-89d4-21c153fbf3dc" containerName="extract" Jan 26 11:29:29 crc kubenswrapper[4867]: I0126 11:29:29.077639 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="4d062b30-7ca4-4191-89d4-21c153fbf3dc" containerName="extract" Jan 26 11:29:29 crc kubenswrapper[4867]: E0126 11:29:29.077658 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4d062b30-7ca4-4191-89d4-21c153fbf3dc" containerName="pull" Jan 26 11:29:29 crc kubenswrapper[4867]: I0126 11:29:29.077665 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="4d062b30-7ca4-4191-89d4-21c153fbf3dc" containerName="pull" Jan 26 11:29:29 crc kubenswrapper[4867]: E0126 11:29:29.077678 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4d062b30-7ca4-4191-89d4-21c153fbf3dc" containerName="util" Jan 26 11:29:29 crc kubenswrapper[4867]: I0126 11:29:29.077686 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="4d062b30-7ca4-4191-89d4-21c153fbf3dc" containerName="util" Jan 26 11:29:29 crc kubenswrapper[4867]: I0126 11:29:29.077813 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="4d062b30-7ca4-4191-89d4-21c153fbf3dc" containerName="extract" Jan 26 11:29:29 crc kubenswrapper[4867]: I0126 11:29:29.078368 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-operator-646758c888-wqlhb" Jan 26 11:29:29 crc kubenswrapper[4867]: I0126 11:29:29.080504 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"openshift-service-ca.crt" Jan 26 11:29:29 crc kubenswrapper[4867]: I0126 11:29:29.080742 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"kube-root-ca.crt" Jan 26 11:29:29 crc kubenswrapper[4867]: I0126 11:29:29.082291 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-operator-dockercfg-qwjf7" Jan 26 11:29:29 crc kubenswrapper[4867]: I0126 11:29:29.096372 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-646758c888-wqlhb"] Jan 26 11:29:29 crc kubenswrapper[4867]: I0126 11:29:29.253786 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nszt2\" (UniqueName: \"kubernetes.io/projected/5d46639d-9922-4557-a7f2-d40917695fef-kube-api-access-nszt2\") pod \"nmstate-operator-646758c888-wqlhb\" (UID: \"5d46639d-9922-4557-a7f2-d40917695fef\") " pod="openshift-nmstate/nmstate-operator-646758c888-wqlhb" Jan 26 11:29:29 crc kubenswrapper[4867]: I0126 11:29:29.355905 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nszt2\" (UniqueName: \"kubernetes.io/projected/5d46639d-9922-4557-a7f2-d40917695fef-kube-api-access-nszt2\") pod \"nmstate-operator-646758c888-wqlhb\" (UID: \"5d46639d-9922-4557-a7f2-d40917695fef\") " pod="openshift-nmstate/nmstate-operator-646758c888-wqlhb" Jan 26 11:29:29 crc kubenswrapper[4867]: I0126 11:29:29.375041 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nszt2\" (UniqueName: \"kubernetes.io/projected/5d46639d-9922-4557-a7f2-d40917695fef-kube-api-access-nszt2\") pod \"nmstate-operator-646758c888-wqlhb\" (UID: 
\"5d46639d-9922-4557-a7f2-d40917695fef\") " pod="openshift-nmstate/nmstate-operator-646758c888-wqlhb" Jan 26 11:29:29 crc kubenswrapper[4867]: I0126 11:29:29.396347 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-operator-646758c888-wqlhb" Jan 26 11:29:29 crc kubenswrapper[4867]: I0126 11:29:29.601374 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-646758c888-wqlhb"] Jan 26 11:29:30 crc kubenswrapper[4867]: I0126 11:29:30.059534 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-646758c888-wqlhb" event={"ID":"5d46639d-9922-4557-a7f2-d40917695fef","Type":"ContainerStarted","Data":"184ab0f46ad5de32ef37d3f32cfb5a0f8be8f75230433852b78995041823c4b7"} Jan 26 11:29:30 crc kubenswrapper[4867]: I0126 11:29:30.864570 4867 scope.go:117] "RemoveContainer" containerID="0560f844140132daa11aa58fcba689fc945d9e6bc67a4fc5598dfaf566749866" Jan 26 11:29:31 crc kubenswrapper[4867]: I0126 11:29:31.067255 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-hn8xr_dc37e5d1-ba44-4a54-ac36-ab7cdef17212/kube-multus/2.log" Jan 26 11:29:32 crc kubenswrapper[4867]: I0126 11:29:32.077114 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-646758c888-wqlhb" event={"ID":"5d46639d-9922-4557-a7f2-d40917695fef","Type":"ContainerStarted","Data":"e0a1d8a004c0a2320bb5bd04bb1a9fc5d82d55729dd525beaead8901313986de"} Jan 26 11:29:32 crc kubenswrapper[4867]: I0126 11:29:32.098071 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-operator-646758c888-wqlhb" podStartSLOduration=0.893437748 podStartE2EDuration="3.098048713s" podCreationTimestamp="2026-01-26 11:29:29 +0000 UTC" firstStartedPulling="2026-01-26 11:29:29.612387713 +0000 UTC m=+719.310962623" lastFinishedPulling="2026-01-26 11:29:31.816998678 +0000 UTC m=+721.515573588" 
observedRunningTime="2026-01-26 11:29:32.096978353 +0000 UTC m=+721.795553283" watchObservedRunningTime="2026-01-26 11:29:32.098048713 +0000 UTC m=+721.796623623" Jan 26 11:29:33 crc kubenswrapper[4867]: I0126 11:29:33.110345 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-metrics-54757c584b-9jgvk"] Jan 26 11:29:33 crc kubenswrapper[4867]: I0126 11:29:33.112378 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-54757c584b-9jgvk" Jan 26 11:29:33 crc kubenswrapper[4867]: I0126 11:29:33.115463 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-handler-dockercfg-fhb6m" Jan 26 11:29:33 crc kubenswrapper[4867]: I0126 11:29:33.134920 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-webhook-8474b5b9d8-zttkf"] Jan 26 11:29:33 crc kubenswrapper[4867]: I0126 11:29:33.136015 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-zttkf" Jan 26 11:29:33 crc kubenswrapper[4867]: I0126 11:29:33.139918 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-54757c584b-9jgvk"] Jan 26 11:29:33 crc kubenswrapper[4867]: I0126 11:29:33.141415 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"openshift-nmstate-webhook" Jan 26 11:29:33 crc kubenswrapper[4867]: I0126 11:29:33.154125 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-8474b5b9d8-zttkf"] Jan 26 11:29:33 crc kubenswrapper[4867]: I0126 11:29:33.176726 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-handler-jhqkb"] Jan 26 11:29:33 crc kubenswrapper[4867]: I0126 11:29:33.190938 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-handler-jhqkb" Jan 26 11:29:33 crc kubenswrapper[4867]: I0126 11:29:33.214335 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wmxfz\" (UniqueName: \"kubernetes.io/projected/2d4cf215-bd64-4e38-8e9b-ea2b90e36137-kube-api-access-wmxfz\") pod \"nmstate-metrics-54757c584b-9jgvk\" (UID: \"2d4cf215-bd64-4e38-8e9b-ea2b90e36137\") " pod="openshift-nmstate/nmstate-metrics-54757c584b-9jgvk" Jan 26 11:29:33 crc kubenswrapper[4867]: I0126 11:29:33.214465 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/72e3b4aa-81dd-4ae0-aa28-35c7092e98fd-tls-key-pair\") pod \"nmstate-webhook-8474b5b9d8-zttkf\" (UID: \"72e3b4aa-81dd-4ae0-aa28-35c7092e98fd\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-zttkf" Jan 26 11:29:33 crc kubenswrapper[4867]: I0126 11:29:33.214571 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qcwv4\" (UniqueName: \"kubernetes.io/projected/72e3b4aa-81dd-4ae0-aa28-35c7092e98fd-kube-api-access-qcwv4\") pod \"nmstate-webhook-8474b5b9d8-zttkf\" (UID: \"72e3b4aa-81dd-4ae0-aa28-35c7092e98fd\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-zttkf" Jan 26 11:29:33 crc kubenswrapper[4867]: I0126 11:29:33.316039 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/72e3b4aa-81dd-4ae0-aa28-35c7092e98fd-tls-key-pair\") pod \"nmstate-webhook-8474b5b9d8-zttkf\" (UID: \"72e3b4aa-81dd-4ae0-aa28-35c7092e98fd\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-zttkf" Jan 26 11:29:33 crc kubenswrapper[4867]: I0126 11:29:33.316129 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qcwv4\" (UniqueName: 
\"kubernetes.io/projected/72e3b4aa-81dd-4ae0-aa28-35c7092e98fd-kube-api-access-qcwv4\") pod \"nmstate-webhook-8474b5b9d8-zttkf\" (UID: \"72e3b4aa-81dd-4ae0-aa28-35c7092e98fd\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-zttkf" Jan 26 11:29:33 crc kubenswrapper[4867]: I0126 11:29:33.316171 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wmxfz\" (UniqueName: \"kubernetes.io/projected/2d4cf215-bd64-4e38-8e9b-ea2b90e36137-kube-api-access-wmxfz\") pod \"nmstate-metrics-54757c584b-9jgvk\" (UID: \"2d4cf215-bd64-4e38-8e9b-ea2b90e36137\") " pod="openshift-nmstate/nmstate-metrics-54757c584b-9jgvk" Jan 26 11:29:33 crc kubenswrapper[4867]: I0126 11:29:33.316206 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/b1ffa812-b614-4e1d-a243-bea92b55da60-dbus-socket\") pod \"nmstate-handler-jhqkb\" (UID: \"b1ffa812-b614-4e1d-a243-bea92b55da60\") " pod="openshift-nmstate/nmstate-handler-jhqkb" Jan 26 11:29:33 crc kubenswrapper[4867]: I0126 11:29:33.316265 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/b1ffa812-b614-4e1d-a243-bea92b55da60-nmstate-lock\") pod \"nmstate-handler-jhqkb\" (UID: \"b1ffa812-b614-4e1d-a243-bea92b55da60\") " pod="openshift-nmstate/nmstate-handler-jhqkb" Jan 26 11:29:33 crc kubenswrapper[4867]: I0126 11:29:33.316287 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/b1ffa812-b614-4e1d-a243-bea92b55da60-ovs-socket\") pod \"nmstate-handler-jhqkb\" (UID: \"b1ffa812-b614-4e1d-a243-bea92b55da60\") " pod="openshift-nmstate/nmstate-handler-jhqkb" Jan 26 11:29:33 crc kubenswrapper[4867]: I0126 11:29:33.316313 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"kube-api-access-9ng58\" (UniqueName: \"kubernetes.io/projected/b1ffa812-b614-4e1d-a243-bea92b55da60-kube-api-access-9ng58\") pod \"nmstate-handler-jhqkb\" (UID: \"b1ffa812-b614-4e1d-a243-bea92b55da60\") " pod="openshift-nmstate/nmstate-handler-jhqkb" Jan 26 11:29:33 crc kubenswrapper[4867]: E0126 11:29:33.316482 4867 secret.go:188] Couldn't get secret openshift-nmstate/openshift-nmstate-webhook: secret "openshift-nmstate-webhook" not found Jan 26 11:29:33 crc kubenswrapper[4867]: E0126 11:29:33.316540 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/72e3b4aa-81dd-4ae0-aa28-35c7092e98fd-tls-key-pair podName:72e3b4aa-81dd-4ae0-aa28-35c7092e98fd nodeName:}" failed. No retries permitted until 2026-01-26 11:29:33.816518591 +0000 UTC m=+723.515093501 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "tls-key-pair" (UniqueName: "kubernetes.io/secret/72e3b4aa-81dd-4ae0-aa28-35c7092e98fd-tls-key-pair") pod "nmstate-webhook-8474b5b9d8-zttkf" (UID: "72e3b4aa-81dd-4ae0-aa28-35c7092e98fd") : secret "openshift-nmstate-webhook" not found Jan 26 11:29:33 crc kubenswrapper[4867]: I0126 11:29:33.339577 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wmxfz\" (UniqueName: \"kubernetes.io/projected/2d4cf215-bd64-4e38-8e9b-ea2b90e36137-kube-api-access-wmxfz\") pod \"nmstate-metrics-54757c584b-9jgvk\" (UID: \"2d4cf215-bd64-4e38-8e9b-ea2b90e36137\") " pod="openshift-nmstate/nmstate-metrics-54757c584b-9jgvk" Jan 26 11:29:33 crc kubenswrapper[4867]: I0126 11:29:33.341256 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qcwv4\" (UniqueName: \"kubernetes.io/projected/72e3b4aa-81dd-4ae0-aa28-35c7092e98fd-kube-api-access-qcwv4\") pod \"nmstate-webhook-8474b5b9d8-zttkf\" (UID: \"72e3b4aa-81dd-4ae0-aa28-35c7092e98fd\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-zttkf" Jan 26 11:29:33 crc kubenswrapper[4867]: I0126 11:29:33.398762 4867 
kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-console-plugin-7754f76f8b-9mhpj"] Jan 26 11:29:33 crc kubenswrapper[4867]: I0126 11:29:33.399876 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-9mhpj" Jan 26 11:29:33 crc kubenswrapper[4867]: I0126 11:29:33.403510 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"nginx-conf" Jan 26 11:29:33 crc kubenswrapper[4867]: I0126 11:29:33.404333 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"plugin-serving-cert" Jan 26 11:29:33 crc kubenswrapper[4867]: I0126 11:29:33.404547 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"default-dockercfg-db88r" Jan 26 11:29:33 crc kubenswrapper[4867]: I0126 11:29:33.410753 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-7754f76f8b-9mhpj"] Jan 26 11:29:33 crc kubenswrapper[4867]: I0126 11:29:33.418383 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/b1ffa812-b614-4e1d-a243-bea92b55da60-dbus-socket\") pod \"nmstate-handler-jhqkb\" (UID: \"b1ffa812-b614-4e1d-a243-bea92b55da60\") " pod="openshift-nmstate/nmstate-handler-jhqkb" Jan 26 11:29:33 crc kubenswrapper[4867]: I0126 11:29:33.418462 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/b1ffa812-b614-4e1d-a243-bea92b55da60-nmstate-lock\") pod \"nmstate-handler-jhqkb\" (UID: \"b1ffa812-b614-4e1d-a243-bea92b55da60\") " pod="openshift-nmstate/nmstate-handler-jhqkb" Jan 26 11:29:33 crc kubenswrapper[4867]: I0126 11:29:33.418488 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-socket\" (UniqueName: 
\"kubernetes.io/host-path/b1ffa812-b614-4e1d-a243-bea92b55da60-ovs-socket\") pod \"nmstate-handler-jhqkb\" (UID: \"b1ffa812-b614-4e1d-a243-bea92b55da60\") " pod="openshift-nmstate/nmstate-handler-jhqkb" Jan 26 11:29:33 crc kubenswrapper[4867]: I0126 11:29:33.418513 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9ng58\" (UniqueName: \"kubernetes.io/projected/b1ffa812-b614-4e1d-a243-bea92b55da60-kube-api-access-9ng58\") pod \"nmstate-handler-jhqkb\" (UID: \"b1ffa812-b614-4e1d-a243-bea92b55da60\") " pod="openshift-nmstate/nmstate-handler-jhqkb" Jan 26 11:29:33 crc kubenswrapper[4867]: I0126 11:29:33.418702 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/b1ffa812-b614-4e1d-a243-bea92b55da60-nmstate-lock\") pod \"nmstate-handler-jhqkb\" (UID: \"b1ffa812-b614-4e1d-a243-bea92b55da60\") " pod="openshift-nmstate/nmstate-handler-jhqkb" Jan 26 11:29:33 crc kubenswrapper[4867]: I0126 11:29:33.418774 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/b1ffa812-b614-4e1d-a243-bea92b55da60-ovs-socket\") pod \"nmstate-handler-jhqkb\" (UID: \"b1ffa812-b614-4e1d-a243-bea92b55da60\") " pod="openshift-nmstate/nmstate-handler-jhqkb" Jan 26 11:29:33 crc kubenswrapper[4867]: I0126 11:29:33.419029 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/b1ffa812-b614-4e1d-a243-bea92b55da60-dbus-socket\") pod \"nmstate-handler-jhqkb\" (UID: \"b1ffa812-b614-4e1d-a243-bea92b55da60\") " pod="openshift-nmstate/nmstate-handler-jhqkb" Jan 26 11:29:33 crc kubenswrapper[4867]: I0126 11:29:33.434238 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-metrics-54757c584b-9jgvk" Jan 26 11:29:33 crc kubenswrapper[4867]: I0126 11:29:33.446820 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9ng58\" (UniqueName: \"kubernetes.io/projected/b1ffa812-b614-4e1d-a243-bea92b55da60-kube-api-access-9ng58\") pod \"nmstate-handler-jhqkb\" (UID: \"b1ffa812-b614-4e1d-a243-bea92b55da60\") " pod="openshift-nmstate/nmstate-handler-jhqkb" Jan 26 11:29:33 crc kubenswrapper[4867]: I0126 11:29:33.508164 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-handler-jhqkb" Jan 26 11:29:33 crc kubenswrapper[4867]: I0126 11:29:33.519935 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5496960a-d548-45d1-b1af-46a2019c8258-nginx-conf\") pod \"nmstate-console-plugin-7754f76f8b-9mhpj\" (UID: \"5496960a-d548-45d1-b1af-46a2019c8258\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-9mhpj" Jan 26 11:29:33 crc kubenswrapper[4867]: I0126 11:29:33.520005 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/5496960a-d548-45d1-b1af-46a2019c8258-plugin-serving-cert\") pod \"nmstate-console-plugin-7754f76f8b-9mhpj\" (UID: \"5496960a-d548-45d1-b1af-46a2019c8258\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-9mhpj" Jan 26 11:29:33 crc kubenswrapper[4867]: I0126 11:29:33.520089 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2wlzf\" (UniqueName: \"kubernetes.io/projected/5496960a-d548-45d1-b1af-46a2019c8258-kube-api-access-2wlzf\") pod \"nmstate-console-plugin-7754f76f8b-9mhpj\" (UID: \"5496960a-d548-45d1-b1af-46a2019c8258\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-9mhpj" Jan 26 11:29:33 
crc kubenswrapper[4867]: W0126 11:29:33.566863 4867 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb1ffa812_b614_4e1d_a243_bea92b55da60.slice/crio-14f7ee3b9a39df565d5dffb2e9f15e365a24b13d61b3a085da8294dfed827d60 WatchSource:0}: Error finding container 14f7ee3b9a39df565d5dffb2e9f15e365a24b13d61b3a085da8294dfed827d60: Status 404 returned error can't find the container with id 14f7ee3b9a39df565d5dffb2e9f15e365a24b13d61b3a085da8294dfed827d60 Jan 26 11:29:33 crc kubenswrapper[4867]: I0126 11:29:33.624644 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2wlzf\" (UniqueName: \"kubernetes.io/projected/5496960a-d548-45d1-b1af-46a2019c8258-kube-api-access-2wlzf\") pod \"nmstate-console-plugin-7754f76f8b-9mhpj\" (UID: \"5496960a-d548-45d1-b1af-46a2019c8258\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-9mhpj" Jan 26 11:29:33 crc kubenswrapper[4867]: I0126 11:29:33.624720 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5496960a-d548-45d1-b1af-46a2019c8258-nginx-conf\") pod \"nmstate-console-plugin-7754f76f8b-9mhpj\" (UID: \"5496960a-d548-45d1-b1af-46a2019c8258\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-9mhpj" Jan 26 11:29:33 crc kubenswrapper[4867]: I0126 11:29:33.624753 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/5496960a-d548-45d1-b1af-46a2019c8258-plugin-serving-cert\") pod \"nmstate-console-plugin-7754f76f8b-9mhpj\" (UID: \"5496960a-d548-45d1-b1af-46a2019c8258\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-9mhpj" Jan 26 11:29:33 crc kubenswrapper[4867]: I0126 11:29:33.628164 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: 
\"kubernetes.io/configmap/5496960a-d548-45d1-b1af-46a2019c8258-nginx-conf\") pod \"nmstate-console-plugin-7754f76f8b-9mhpj\" (UID: \"5496960a-d548-45d1-b1af-46a2019c8258\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-9mhpj" Jan 26 11:29:33 crc kubenswrapper[4867]: I0126 11:29:33.638554 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/5496960a-d548-45d1-b1af-46a2019c8258-plugin-serving-cert\") pod \"nmstate-console-plugin-7754f76f8b-9mhpj\" (UID: \"5496960a-d548-45d1-b1af-46a2019c8258\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-9mhpj" Jan 26 11:29:33 crc kubenswrapper[4867]: I0126 11:29:33.641785 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-6589bd55cf-t52qd"] Jan 26 11:29:33 crc kubenswrapper[4867]: I0126 11:29:33.642601 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-6589bd55cf-t52qd" Jan 26 11:29:33 crc kubenswrapper[4867]: I0126 11:29:33.648340 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-6589bd55cf-t52qd"] Jan 26 11:29:33 crc kubenswrapper[4867]: I0126 11:29:33.651300 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2wlzf\" (UniqueName: \"kubernetes.io/projected/5496960a-d548-45d1-b1af-46a2019c8258-kube-api-access-2wlzf\") pod \"nmstate-console-plugin-7754f76f8b-9mhpj\" (UID: \"5496960a-d548-45d1-b1af-46a2019c8258\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-9mhpj" Jan 26 11:29:33 crc kubenswrapper[4867]: I0126 11:29:33.692139 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-54757c584b-9jgvk"] Jan 26 11:29:33 crc kubenswrapper[4867]: W0126 11:29:33.697124 4867 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2d4cf215_bd64_4e38_8e9b_ea2b90e36137.slice/crio-380be9e12904ec932c7f65928f6e5e15413ec01d162378ac57680a882738bdf1 WatchSource:0}: Error finding container 380be9e12904ec932c7f65928f6e5e15413ec01d162378ac57680a882738bdf1: Status 404 returned error can't find the container with id 380be9e12904ec932c7f65928f6e5e15413ec01d162378ac57680a882738bdf1 Jan 26 11:29:33 crc kubenswrapper[4867]: I0126 11:29:33.719690 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-9mhpj" Jan 26 11:29:33 crc kubenswrapper[4867]: I0126 11:29:33.726393 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fl7kq\" (UniqueName: \"kubernetes.io/projected/40600058-e7b0-4038-b17b-98bb655db045-kube-api-access-fl7kq\") pod \"console-6589bd55cf-t52qd\" (UID: \"40600058-e7b0-4038-b17b-98bb655db045\") " pod="openshift-console/console-6589bd55cf-t52qd" Jan 26 11:29:33 crc kubenswrapper[4867]: I0126 11:29:33.726455 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/40600058-e7b0-4038-b17b-98bb655db045-service-ca\") pod \"console-6589bd55cf-t52qd\" (UID: \"40600058-e7b0-4038-b17b-98bb655db045\") " pod="openshift-console/console-6589bd55cf-t52qd" Jan 26 11:29:33 crc kubenswrapper[4867]: I0126 11:29:33.726489 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/40600058-e7b0-4038-b17b-98bb655db045-console-oauth-config\") pod \"console-6589bd55cf-t52qd\" (UID: \"40600058-e7b0-4038-b17b-98bb655db045\") " pod="openshift-console/console-6589bd55cf-t52qd" Jan 26 11:29:33 crc kubenswrapper[4867]: I0126 11:29:33.726525 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/40600058-e7b0-4038-b17b-98bb655db045-oauth-serving-cert\") pod \"console-6589bd55cf-t52qd\" (UID: \"40600058-e7b0-4038-b17b-98bb655db045\") " pod="openshift-console/console-6589bd55cf-t52qd" Jan 26 11:29:33 crc kubenswrapper[4867]: I0126 11:29:33.726549 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/40600058-e7b0-4038-b17b-98bb655db045-console-serving-cert\") pod \"console-6589bd55cf-t52qd\" (UID: \"40600058-e7b0-4038-b17b-98bb655db045\") " pod="openshift-console/console-6589bd55cf-t52qd" Jan 26 11:29:33 crc kubenswrapper[4867]: I0126 11:29:33.726566 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/40600058-e7b0-4038-b17b-98bb655db045-trusted-ca-bundle\") pod \"console-6589bd55cf-t52qd\" (UID: \"40600058-e7b0-4038-b17b-98bb655db045\") " pod="openshift-console/console-6589bd55cf-t52qd" Jan 26 11:29:33 crc kubenswrapper[4867]: I0126 11:29:33.726592 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/40600058-e7b0-4038-b17b-98bb655db045-console-config\") pod \"console-6589bd55cf-t52qd\" (UID: \"40600058-e7b0-4038-b17b-98bb655db045\") " pod="openshift-console/console-6589bd55cf-t52qd" Jan 26 11:29:33 crc kubenswrapper[4867]: I0126 11:29:33.828451 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/40600058-e7b0-4038-b17b-98bb655db045-service-ca\") pod \"console-6589bd55cf-t52qd\" (UID: \"40600058-e7b0-4038-b17b-98bb655db045\") " pod="openshift-console/console-6589bd55cf-t52qd" Jan 26 11:29:33 crc kubenswrapper[4867]: I0126 11:29:33.828560 4867 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/40600058-e7b0-4038-b17b-98bb655db045-console-oauth-config\") pod \"console-6589bd55cf-t52qd\" (UID: \"40600058-e7b0-4038-b17b-98bb655db045\") " pod="openshift-console/console-6589bd55cf-t52qd" Jan 26 11:29:33 crc kubenswrapper[4867]: I0126 11:29:33.828617 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/40600058-e7b0-4038-b17b-98bb655db045-oauth-serving-cert\") pod \"console-6589bd55cf-t52qd\" (UID: \"40600058-e7b0-4038-b17b-98bb655db045\") " pod="openshift-console/console-6589bd55cf-t52qd" Jan 26 11:29:33 crc kubenswrapper[4867]: I0126 11:29:33.828679 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/40600058-e7b0-4038-b17b-98bb655db045-console-serving-cert\") pod \"console-6589bd55cf-t52qd\" (UID: \"40600058-e7b0-4038-b17b-98bb655db045\") " pod="openshift-console/console-6589bd55cf-t52qd" Jan 26 11:29:33 crc kubenswrapper[4867]: I0126 11:29:33.828703 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/40600058-e7b0-4038-b17b-98bb655db045-trusted-ca-bundle\") pod \"console-6589bd55cf-t52qd\" (UID: \"40600058-e7b0-4038-b17b-98bb655db045\") " pod="openshift-console/console-6589bd55cf-t52qd" Jan 26 11:29:33 crc kubenswrapper[4867]: I0126 11:29:33.828760 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/40600058-e7b0-4038-b17b-98bb655db045-console-config\") pod \"console-6589bd55cf-t52qd\" (UID: \"40600058-e7b0-4038-b17b-98bb655db045\") " pod="openshift-console/console-6589bd55cf-t52qd" Jan 26 11:29:33 crc kubenswrapper[4867]: I0126 11:29:33.828823 4867 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/72e3b4aa-81dd-4ae0-aa28-35c7092e98fd-tls-key-pair\") pod \"nmstate-webhook-8474b5b9d8-zttkf\" (UID: \"72e3b4aa-81dd-4ae0-aa28-35c7092e98fd\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-zttkf" Jan 26 11:29:33 crc kubenswrapper[4867]: I0126 11:29:33.828860 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fl7kq\" (UniqueName: \"kubernetes.io/projected/40600058-e7b0-4038-b17b-98bb655db045-kube-api-access-fl7kq\") pod \"console-6589bd55cf-t52qd\" (UID: \"40600058-e7b0-4038-b17b-98bb655db045\") " pod="openshift-console/console-6589bd55cf-t52qd" Jan 26 11:29:33 crc kubenswrapper[4867]: I0126 11:29:33.831143 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/40600058-e7b0-4038-b17b-98bb655db045-trusted-ca-bundle\") pod \"console-6589bd55cf-t52qd\" (UID: \"40600058-e7b0-4038-b17b-98bb655db045\") " pod="openshift-console/console-6589bd55cf-t52qd" Jan 26 11:29:33 crc kubenswrapper[4867]: I0126 11:29:33.831764 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/40600058-e7b0-4038-b17b-98bb655db045-console-config\") pod \"console-6589bd55cf-t52qd\" (UID: \"40600058-e7b0-4038-b17b-98bb655db045\") " pod="openshift-console/console-6589bd55cf-t52qd" Jan 26 11:29:33 crc kubenswrapper[4867]: I0126 11:29:33.833550 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/40600058-e7b0-4038-b17b-98bb655db045-console-serving-cert\") pod \"console-6589bd55cf-t52qd\" (UID: \"40600058-e7b0-4038-b17b-98bb655db045\") " pod="openshift-console/console-6589bd55cf-t52qd" Jan 26 11:29:33 crc kubenswrapper[4867]: I0126 11:29:33.834256 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/40600058-e7b0-4038-b17b-98bb655db045-oauth-serving-cert\") pod \"console-6589bd55cf-t52qd\" (UID: \"40600058-e7b0-4038-b17b-98bb655db045\") " pod="openshift-console/console-6589bd55cf-t52qd" Jan 26 11:29:33 crc kubenswrapper[4867]: I0126 11:29:33.834946 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/40600058-e7b0-4038-b17b-98bb655db045-service-ca\") pod \"console-6589bd55cf-t52qd\" (UID: \"40600058-e7b0-4038-b17b-98bb655db045\") " pod="openshift-console/console-6589bd55cf-t52qd" Jan 26 11:29:33 crc kubenswrapper[4867]: I0126 11:29:33.834970 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/72e3b4aa-81dd-4ae0-aa28-35c7092e98fd-tls-key-pair\") pod \"nmstate-webhook-8474b5b9d8-zttkf\" (UID: \"72e3b4aa-81dd-4ae0-aa28-35c7092e98fd\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-zttkf" Jan 26 11:29:33 crc kubenswrapper[4867]: I0126 11:29:33.835742 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/40600058-e7b0-4038-b17b-98bb655db045-console-oauth-config\") pod \"console-6589bd55cf-t52qd\" (UID: \"40600058-e7b0-4038-b17b-98bb655db045\") " pod="openshift-console/console-6589bd55cf-t52qd" Jan 26 11:29:33 crc kubenswrapper[4867]: I0126 11:29:33.863097 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fl7kq\" (UniqueName: \"kubernetes.io/projected/40600058-e7b0-4038-b17b-98bb655db045-kube-api-access-fl7kq\") pod \"console-6589bd55cf-t52qd\" (UID: \"40600058-e7b0-4038-b17b-98bb655db045\") " pod="openshift-console/console-6589bd55cf-t52qd" Jan 26 11:29:33 crc kubenswrapper[4867]: I0126 11:29:33.938055 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-7754f76f8b-9mhpj"] Jan 26 
11:29:33 crc kubenswrapper[4867]: W0126 11:29:33.948761 4867 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5496960a_d548_45d1_b1af_46a2019c8258.slice/crio-e207c96bac1c0534235bd0df388a2ddfed4014a155844918d1cd00d06b67bdc1 WatchSource:0}: Error finding container e207c96bac1c0534235bd0df388a2ddfed4014a155844918d1cd00d06b67bdc1: Status 404 returned error can't find the container with id e207c96bac1c0534235bd0df388a2ddfed4014a155844918d1cd00d06b67bdc1 Jan 26 11:29:33 crc kubenswrapper[4867]: I0126 11:29:33.970584 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-6589bd55cf-t52qd" Jan 26 11:29:34 crc kubenswrapper[4867]: I0126 11:29:34.052132 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-zttkf" Jan 26 11:29:34 crc kubenswrapper[4867]: I0126 11:29:34.090858 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-54757c584b-9jgvk" event={"ID":"2d4cf215-bd64-4e38-8e9b-ea2b90e36137","Type":"ContainerStarted","Data":"380be9e12904ec932c7f65928f6e5e15413ec01d162378ac57680a882738bdf1"} Jan 26 11:29:34 crc kubenswrapper[4867]: I0126 11:29:34.093618 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-jhqkb" event={"ID":"b1ffa812-b614-4e1d-a243-bea92b55da60","Type":"ContainerStarted","Data":"14f7ee3b9a39df565d5dffb2e9f15e365a24b13d61b3a085da8294dfed827d60"} Jan 26 11:29:34 crc kubenswrapper[4867]: I0126 11:29:34.095458 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-9mhpj" event={"ID":"5496960a-d548-45d1-b1af-46a2019c8258","Type":"ContainerStarted","Data":"e207c96bac1c0534235bd0df388a2ddfed4014a155844918d1cd00d06b67bdc1"} Jan 26 11:29:34 crc kubenswrapper[4867]: I0126 11:29:34.226192 4867 kubelet.go:2428] "SyncLoop UPDATE" 
source="api" pods=["openshift-console/console-6589bd55cf-t52qd"] Jan 26 11:29:34 crc kubenswrapper[4867]: I0126 11:29:34.283378 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-8474b5b9d8-zttkf"] Jan 26 11:29:34 crc kubenswrapper[4867]: W0126 11:29:34.292555 4867 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod72e3b4aa_81dd_4ae0_aa28_35c7092e98fd.slice/crio-871a4a16d00e7fe3fa32477a1e03b8b589f05b0ec1120ff59249345d2771f8d9 WatchSource:0}: Error finding container 871a4a16d00e7fe3fa32477a1e03b8b589f05b0ec1120ff59249345d2771f8d9: Status 404 returned error can't find the container with id 871a4a16d00e7fe3fa32477a1e03b8b589f05b0ec1120ff59249345d2771f8d9 Jan 26 11:29:35 crc kubenswrapper[4867]: I0126 11:29:35.105938 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-zttkf" event={"ID":"72e3b4aa-81dd-4ae0-aa28-35c7092e98fd","Type":"ContainerStarted","Data":"871a4a16d00e7fe3fa32477a1e03b8b589f05b0ec1120ff59249345d2771f8d9"} Jan 26 11:29:35 crc kubenswrapper[4867]: I0126 11:29:35.108404 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-6589bd55cf-t52qd" event={"ID":"40600058-e7b0-4038-b17b-98bb655db045","Type":"ContainerStarted","Data":"58d0c05ffa63b6d24135f403a96fc06b696a08a95ac37282a6794fae420b6de6"} Jan 26 11:29:35 crc kubenswrapper[4867]: I0126 11:29:35.108433 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-6589bd55cf-t52qd" event={"ID":"40600058-e7b0-4038-b17b-98bb655db045","Type":"ContainerStarted","Data":"b36440d3ff3a83ba5174dc010a767adde5a83cbc4d7e9033fb2afb5fb1a87e6c"} Jan 26 11:29:35 crc kubenswrapper[4867]: I0126 11:29:35.138590 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-6589bd55cf-t52qd" podStartSLOduration=2.138566102 podStartE2EDuration="2.138566102s" 
podCreationTimestamp="2026-01-26 11:29:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 11:29:35.130282135 +0000 UTC m=+724.828857065" watchObservedRunningTime="2026-01-26 11:29:35.138566102 +0000 UTC m=+724.837141012" Jan 26 11:29:36 crc kubenswrapper[4867]: I0126 11:29:36.294366 4867 patch_prober.go:28] interesting pod/machine-config-daemon-g6cth container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 11:29:36 crc kubenswrapper[4867]: I0126 11:29:36.294508 4867 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-g6cth" podUID="115cad9f-057f-4e63-b408-8fa7a358a191" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 11:29:37 crc kubenswrapper[4867]: I0126 11:29:37.123617 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-9mhpj" event={"ID":"5496960a-d548-45d1-b1af-46a2019c8258","Type":"ContainerStarted","Data":"8d85bdedc1e7b561dc9b7a4a7b91823fdb5e192b7e017d1a0cd74022f29dd76a"} Jan 26 11:29:37 crc kubenswrapper[4867]: I0126 11:29:37.127259 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-54757c584b-9jgvk" event={"ID":"2d4cf215-bd64-4e38-8e9b-ea2b90e36137","Type":"ContainerStarted","Data":"a639a2de2ac9eed13ad5e08c9330a7a1e5b45cceae1fcb976927a92d061ac027"} Jan 26 11:29:37 crc kubenswrapper[4867]: I0126 11:29:37.130187 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-jhqkb" 
event={"ID":"b1ffa812-b614-4e1d-a243-bea92b55da60","Type":"ContainerStarted","Data":"dbd7d419828512d75a2523fe89b87acd1a88d73b3da4641bb88b1174207ff45a"} Jan 26 11:29:37 crc kubenswrapper[4867]: I0126 11:29:37.130344 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-handler-jhqkb" Jan 26 11:29:37 crc kubenswrapper[4867]: I0126 11:29:37.131876 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-zttkf" event={"ID":"72e3b4aa-81dd-4ae0-aa28-35c7092e98fd","Type":"ContainerStarted","Data":"3bdb99e6f32ba50de477032f4d0022122b63e1582c3b059c3c898c09974c92fc"} Jan 26 11:29:37 crc kubenswrapper[4867]: I0126 11:29:37.132004 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-zttkf" Jan 26 11:29:37 crc kubenswrapper[4867]: I0126 11:29:37.179165 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-zttkf" podStartSLOduration=1.674220719 podStartE2EDuration="4.179140276s" podCreationTimestamp="2026-01-26 11:29:33 +0000 UTC" firstStartedPulling="2026-01-26 11:29:34.295150562 +0000 UTC m=+723.993725472" lastFinishedPulling="2026-01-26 11:29:36.800070119 +0000 UTC m=+726.498645029" observedRunningTime="2026-01-26 11:29:37.178248242 +0000 UTC m=+726.876823152" watchObservedRunningTime="2026-01-26 11:29:37.179140276 +0000 UTC m=+726.877715186" Jan 26 11:29:37 crc kubenswrapper[4867]: I0126 11:29:37.180964 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-9mhpj" podStartSLOduration=1.334058926 podStartE2EDuration="4.180957986s" podCreationTimestamp="2026-01-26 11:29:33 +0000 UTC" firstStartedPulling="2026-01-26 11:29:33.951475703 +0000 UTC m=+723.650050613" lastFinishedPulling="2026-01-26 11:29:36.798374763 +0000 UTC m=+726.496949673" observedRunningTime="2026-01-26 
11:29:37.151436938 +0000 UTC m=+726.850011848" watchObservedRunningTime="2026-01-26 11:29:37.180957986 +0000 UTC m=+726.879532906" Jan 26 11:29:37 crc kubenswrapper[4867]: I0126 11:29:37.201279 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-handler-jhqkb" podStartSLOduration=0.977079003 podStartE2EDuration="4.201257831s" podCreationTimestamp="2026-01-26 11:29:33 +0000 UTC" firstStartedPulling="2026-01-26 11:29:33.574644617 +0000 UTC m=+723.273219527" lastFinishedPulling="2026-01-26 11:29:36.798823445 +0000 UTC m=+726.497398355" observedRunningTime="2026-01-26 11:29:37.197463368 +0000 UTC m=+726.896038288" watchObservedRunningTime="2026-01-26 11:29:37.201257831 +0000 UTC m=+726.899832741" Jan 26 11:29:41 crc kubenswrapper[4867]: I0126 11:29:41.166919 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-54757c584b-9jgvk" event={"ID":"2d4cf215-bd64-4e38-8e9b-ea2b90e36137","Type":"ContainerStarted","Data":"c0dfc18075844b8edda3d27cdd39044daeb294cb1b9f20ca93b4922e2f5c8181"} Jan 26 11:29:41 crc kubenswrapper[4867]: I0126 11:29:41.190458 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-metrics-54757c584b-9jgvk" podStartSLOduration=1.921784377 podStartE2EDuration="8.190430452s" podCreationTimestamp="2026-01-26 11:29:33 +0000 UTC" firstStartedPulling="2026-01-26 11:29:33.6996924 +0000 UTC m=+723.398267310" lastFinishedPulling="2026-01-26 11:29:39.968338475 +0000 UTC m=+729.666913385" observedRunningTime="2026-01-26 11:29:41.189456286 +0000 UTC m=+730.888031196" watchObservedRunningTime="2026-01-26 11:29:41.190430452 +0000 UTC m=+730.889005362" Jan 26 11:29:43 crc kubenswrapper[4867]: I0126 11:29:43.541046 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-handler-jhqkb" Jan 26 11:29:43 crc kubenswrapper[4867]: I0126 11:29:43.971079 4867 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="" pod="openshift-console/console-6589bd55cf-t52qd" Jan 26 11:29:43 crc kubenswrapper[4867]: I0126 11:29:43.971263 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-6589bd55cf-t52qd" Jan 26 11:29:43 crc kubenswrapper[4867]: I0126 11:29:43.979245 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-6589bd55cf-t52qd" Jan 26 11:29:44 crc kubenswrapper[4867]: I0126 11:29:44.197359 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-6589bd55cf-t52qd" Jan 26 11:29:44 crc kubenswrapper[4867]: I0126 11:29:44.253110 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-dc94j"] Jan 26 11:29:54 crc kubenswrapper[4867]: I0126 11:29:54.061813 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-zttkf" Jan 26 11:30:00 crc kubenswrapper[4867]: I0126 11:30:00.150549 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29490450-kjxjj"] Jan 26 11:30:00 crc kubenswrapper[4867]: I0126 11:30:00.153431 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29490450-kjxjj" Jan 26 11:30:00 crc kubenswrapper[4867]: I0126 11:30:00.155848 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 26 11:30:00 crc kubenswrapper[4867]: I0126 11:30:00.156493 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 26 11:30:00 crc kubenswrapper[4867]: I0126 11:30:00.160114 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29490450-kjxjj"] Jan 26 11:30:00 crc kubenswrapper[4867]: I0126 11:30:00.232318 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4pxd9\" (UniqueName: \"kubernetes.io/projected/d785220c-c0b5-456d-9896-b35b1ed5ce1a-kube-api-access-4pxd9\") pod \"collect-profiles-29490450-kjxjj\" (UID: \"d785220c-c0b5-456d-9896-b35b1ed5ce1a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490450-kjxjj" Jan 26 11:30:00 crc kubenswrapper[4867]: I0126 11:30:00.232380 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d785220c-c0b5-456d-9896-b35b1ed5ce1a-config-volume\") pod \"collect-profiles-29490450-kjxjj\" (UID: \"d785220c-c0b5-456d-9896-b35b1ed5ce1a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490450-kjxjj" Jan 26 11:30:00 crc kubenswrapper[4867]: I0126 11:30:00.232418 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/d785220c-c0b5-456d-9896-b35b1ed5ce1a-secret-volume\") pod \"collect-profiles-29490450-kjxjj\" (UID: \"d785220c-c0b5-456d-9896-b35b1ed5ce1a\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29490450-kjxjj" Jan 26 11:30:00 crc kubenswrapper[4867]: I0126 11:30:00.333918 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4pxd9\" (UniqueName: \"kubernetes.io/projected/d785220c-c0b5-456d-9896-b35b1ed5ce1a-kube-api-access-4pxd9\") pod \"collect-profiles-29490450-kjxjj\" (UID: \"d785220c-c0b5-456d-9896-b35b1ed5ce1a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490450-kjxjj" Jan 26 11:30:00 crc kubenswrapper[4867]: I0126 11:30:00.334010 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d785220c-c0b5-456d-9896-b35b1ed5ce1a-config-volume\") pod \"collect-profiles-29490450-kjxjj\" (UID: \"d785220c-c0b5-456d-9896-b35b1ed5ce1a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490450-kjxjj" Jan 26 11:30:00 crc kubenswrapper[4867]: I0126 11:30:00.334059 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/d785220c-c0b5-456d-9896-b35b1ed5ce1a-secret-volume\") pod \"collect-profiles-29490450-kjxjj\" (UID: \"d785220c-c0b5-456d-9896-b35b1ed5ce1a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490450-kjxjj" Jan 26 11:30:00 crc kubenswrapper[4867]: I0126 11:30:00.335670 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d785220c-c0b5-456d-9896-b35b1ed5ce1a-config-volume\") pod \"collect-profiles-29490450-kjxjj\" (UID: \"d785220c-c0b5-456d-9896-b35b1ed5ce1a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490450-kjxjj" Jan 26 11:30:00 crc kubenswrapper[4867]: I0126 11:30:00.341510 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: 
\"kubernetes.io/secret/d785220c-c0b5-456d-9896-b35b1ed5ce1a-secret-volume\") pod \"collect-profiles-29490450-kjxjj\" (UID: \"d785220c-c0b5-456d-9896-b35b1ed5ce1a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490450-kjxjj" Jan 26 11:30:00 crc kubenswrapper[4867]: I0126 11:30:00.354241 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4pxd9\" (UniqueName: \"kubernetes.io/projected/d785220c-c0b5-456d-9896-b35b1ed5ce1a-kube-api-access-4pxd9\") pod \"collect-profiles-29490450-kjxjj\" (UID: \"d785220c-c0b5-456d-9896-b35b1ed5ce1a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490450-kjxjj" Jan 26 11:30:00 crc kubenswrapper[4867]: I0126 11:30:00.476860 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29490450-kjxjj" Jan 26 11:30:00 crc kubenswrapper[4867]: I0126 11:30:00.753481 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29490450-kjxjj"] Jan 26 11:30:00 crc kubenswrapper[4867]: W0126 11:30:00.759624 4867 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd785220c_c0b5_456d_9896_b35b1ed5ce1a.slice/crio-8d6cd0f12f0f6d3bd4560967cc47a9537a8af8487f6644badbe53a688a5e4e9d WatchSource:0}: Error finding container 8d6cd0f12f0f6d3bd4560967cc47a9537a8af8487f6644badbe53a688a5e4e9d: Status 404 returned error can't find the container with id 8d6cd0f12f0f6d3bd4560967cc47a9537a8af8487f6644badbe53a688a5e4e9d Jan 26 11:30:01 crc kubenswrapper[4867]: I0126 11:30:01.306690 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29490450-kjxjj" event={"ID":"d785220c-c0b5-456d-9896-b35b1ed5ce1a","Type":"ContainerStarted","Data":"8d6cd0f12f0f6d3bd4560967cc47a9537a8af8487f6644badbe53a688a5e4e9d"} Jan 26 11:30:03 crc 
kubenswrapper[4867]: I0126 11:30:03.326914 4867 generic.go:334] "Generic (PLEG): container finished" podID="d785220c-c0b5-456d-9896-b35b1ed5ce1a" containerID="2b32ab546d7ac75c166bd723e005ac2af04a9a796fa520fda63559049319ef8d" exitCode=0 Jan 26 11:30:03 crc kubenswrapper[4867]: I0126 11:30:03.327076 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29490450-kjxjj" event={"ID":"d785220c-c0b5-456d-9896-b35b1ed5ce1a","Type":"ContainerDied","Data":"2b32ab546d7ac75c166bd723e005ac2af04a9a796fa520fda63559049319ef8d"} Jan 26 11:30:04 crc kubenswrapper[4867]: I0126 11:30:04.613306 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29490450-kjxjj" Jan 26 11:30:04 crc kubenswrapper[4867]: I0126 11:30:04.719417 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/d785220c-c0b5-456d-9896-b35b1ed5ce1a-secret-volume\") pod \"d785220c-c0b5-456d-9896-b35b1ed5ce1a\" (UID: \"d785220c-c0b5-456d-9896-b35b1ed5ce1a\") " Jan 26 11:30:04 crc kubenswrapper[4867]: I0126 11:30:04.719707 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4pxd9\" (UniqueName: \"kubernetes.io/projected/d785220c-c0b5-456d-9896-b35b1ed5ce1a-kube-api-access-4pxd9\") pod \"d785220c-c0b5-456d-9896-b35b1ed5ce1a\" (UID: \"d785220c-c0b5-456d-9896-b35b1ed5ce1a\") " Jan 26 11:30:04 crc kubenswrapper[4867]: I0126 11:30:04.719776 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d785220c-c0b5-456d-9896-b35b1ed5ce1a-config-volume\") pod \"d785220c-c0b5-456d-9896-b35b1ed5ce1a\" (UID: \"d785220c-c0b5-456d-9896-b35b1ed5ce1a\") " Jan 26 11:30:04 crc kubenswrapper[4867]: I0126 11:30:04.720736 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for 
volume "kubernetes.io/configmap/d785220c-c0b5-456d-9896-b35b1ed5ce1a-config-volume" (OuterVolumeSpecName: "config-volume") pod "d785220c-c0b5-456d-9896-b35b1ed5ce1a" (UID: "d785220c-c0b5-456d-9896-b35b1ed5ce1a"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 11:30:04 crc kubenswrapper[4867]: I0126 11:30:04.727239 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d785220c-c0b5-456d-9896-b35b1ed5ce1a-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "d785220c-c0b5-456d-9896-b35b1ed5ce1a" (UID: "d785220c-c0b5-456d-9896-b35b1ed5ce1a"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 11:30:04 crc kubenswrapper[4867]: I0126 11:30:04.732573 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d785220c-c0b5-456d-9896-b35b1ed5ce1a-kube-api-access-4pxd9" (OuterVolumeSpecName: "kube-api-access-4pxd9") pod "d785220c-c0b5-456d-9896-b35b1ed5ce1a" (UID: "d785220c-c0b5-456d-9896-b35b1ed5ce1a"). InnerVolumeSpecName "kube-api-access-4pxd9". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 11:30:04 crc kubenswrapper[4867]: I0126 11:30:04.821442 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4pxd9\" (UniqueName: \"kubernetes.io/projected/d785220c-c0b5-456d-9896-b35b1ed5ce1a-kube-api-access-4pxd9\") on node \"crc\" DevicePath \"\"" Jan 26 11:30:04 crc kubenswrapper[4867]: I0126 11:30:04.821486 4867 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d785220c-c0b5-456d-9896-b35b1ed5ce1a-config-volume\") on node \"crc\" DevicePath \"\"" Jan 26 11:30:04 crc kubenswrapper[4867]: I0126 11:30:04.821496 4867 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/d785220c-c0b5-456d-9896-b35b1ed5ce1a-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 26 11:30:05 crc kubenswrapper[4867]: I0126 11:30:05.347900 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29490450-kjxjj" event={"ID":"d785220c-c0b5-456d-9896-b35b1ed5ce1a","Type":"ContainerDied","Data":"8d6cd0f12f0f6d3bd4560967cc47a9537a8af8487f6644badbe53a688a5e4e9d"} Jan 26 11:30:05 crc kubenswrapper[4867]: I0126 11:30:05.348202 4867 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8d6cd0f12f0f6d3bd4560967cc47a9537a8af8487f6644badbe53a688a5e4e9d" Jan 26 11:30:05 crc kubenswrapper[4867]: I0126 11:30:05.348008 4867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29490450-kjxjj" Jan 26 11:30:06 crc kubenswrapper[4867]: I0126 11:30:06.294578 4867 patch_prober.go:28] interesting pod/machine-config-daemon-g6cth container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 11:30:06 crc kubenswrapper[4867]: I0126 11:30:06.295042 4867 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-g6cth" podUID="115cad9f-057f-4e63-b408-8fa7a358a191" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 11:30:09 crc kubenswrapper[4867]: I0126 11:30:09.317264 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-f9d7485db-dc94j" podUID="a721247b-3436-4bb4-bc5c-ab4e94db0b41" containerName="console" containerID="cri-o://4b9b8df891414fa75c12aaeab647daa3c346d333f5ace563708249f9392cf0e9" gracePeriod=15 Jan 26 11:30:09 crc kubenswrapper[4867]: I0126 11:30:09.778583 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-dc94j_a721247b-3436-4bb4-bc5c-ab4e94db0b41/console/0.log" Jan 26 11:30:09 crc kubenswrapper[4867]: I0126 11:30:09.779351 4867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-dc94j" Jan 26 11:30:09 crc kubenswrapper[4867]: I0126 11:30:09.911377 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a721247b-3436-4bb4-bc5c-ab4e94db0b41-trusted-ca-bundle\") pod \"a721247b-3436-4bb4-bc5c-ab4e94db0b41\" (UID: \"a721247b-3436-4bb4-bc5c-ab4e94db0b41\") " Jan 26 11:30:09 crc kubenswrapper[4867]: I0126 11:30:09.912491 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/a721247b-3436-4bb4-bc5c-ab4e94db0b41-console-oauth-config\") pod \"a721247b-3436-4bb4-bc5c-ab4e94db0b41\" (UID: \"a721247b-3436-4bb4-bc5c-ab4e94db0b41\") " Jan 26 11:30:09 crc kubenswrapper[4867]: I0126 11:30:09.912600 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n6lpb\" (UniqueName: \"kubernetes.io/projected/a721247b-3436-4bb4-bc5c-ab4e94db0b41-kube-api-access-n6lpb\") pod \"a721247b-3436-4bb4-bc5c-ab4e94db0b41\" (UID: \"a721247b-3436-4bb4-bc5c-ab4e94db0b41\") " Jan 26 11:30:09 crc kubenswrapper[4867]: I0126 11:30:09.912661 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/a721247b-3436-4bb4-bc5c-ab4e94db0b41-console-config\") pod \"a721247b-3436-4bb4-bc5c-ab4e94db0b41\" (UID: \"a721247b-3436-4bb4-bc5c-ab4e94db0b41\") " Jan 26 11:30:09 crc kubenswrapper[4867]: I0126 11:30:09.912717 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/a721247b-3436-4bb4-bc5c-ab4e94db0b41-oauth-serving-cert\") pod \"a721247b-3436-4bb4-bc5c-ab4e94db0b41\" (UID: \"a721247b-3436-4bb4-bc5c-ab4e94db0b41\") " Jan 26 11:30:09 crc kubenswrapper[4867]: I0126 11:30:09.912713 4867 operation_generator.go:803] UnmountVolume.TearDown 
succeeded for volume "kubernetes.io/configmap/a721247b-3436-4bb4-bc5c-ab4e94db0b41-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "a721247b-3436-4bb4-bc5c-ab4e94db0b41" (UID: "a721247b-3436-4bb4-bc5c-ab4e94db0b41"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 11:30:09 crc kubenswrapper[4867]: I0126 11:30:09.912753 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/a721247b-3436-4bb4-bc5c-ab4e94db0b41-service-ca\") pod \"a721247b-3436-4bb4-bc5c-ab4e94db0b41\" (UID: \"a721247b-3436-4bb4-bc5c-ab4e94db0b41\") " Jan 26 11:30:09 crc kubenswrapper[4867]: I0126 11:30:09.912859 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/a721247b-3436-4bb4-bc5c-ab4e94db0b41-console-serving-cert\") pod \"a721247b-3436-4bb4-bc5c-ab4e94db0b41\" (UID: \"a721247b-3436-4bb4-bc5c-ab4e94db0b41\") " Jan 26 11:30:09 crc kubenswrapper[4867]: I0126 11:30:09.913428 4867 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a721247b-3436-4bb4-bc5c-ab4e94db0b41-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 11:30:09 crc kubenswrapper[4867]: I0126 11:30:09.916364 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a721247b-3436-4bb4-bc5c-ab4e94db0b41-console-config" (OuterVolumeSpecName: "console-config") pod "a721247b-3436-4bb4-bc5c-ab4e94db0b41" (UID: "a721247b-3436-4bb4-bc5c-ab4e94db0b41"). InnerVolumeSpecName "console-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 11:30:09 crc kubenswrapper[4867]: I0126 11:30:09.916370 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a721247b-3436-4bb4-bc5c-ab4e94db0b41-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "a721247b-3436-4bb4-bc5c-ab4e94db0b41" (UID: "a721247b-3436-4bb4-bc5c-ab4e94db0b41"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 11:30:09 crc kubenswrapper[4867]: I0126 11:30:09.917038 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a721247b-3436-4bb4-bc5c-ab4e94db0b41-service-ca" (OuterVolumeSpecName: "service-ca") pod "a721247b-3436-4bb4-bc5c-ab4e94db0b41" (UID: "a721247b-3436-4bb4-bc5c-ab4e94db0b41"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 11:30:09 crc kubenswrapper[4867]: I0126 11:30:09.922733 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a721247b-3436-4bb4-bc5c-ab4e94db0b41-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "a721247b-3436-4bb4-bc5c-ab4e94db0b41" (UID: "a721247b-3436-4bb4-bc5c-ab4e94db0b41"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 11:30:09 crc kubenswrapper[4867]: I0126 11:30:09.923307 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a721247b-3436-4bb4-bc5c-ab4e94db0b41-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "a721247b-3436-4bb4-bc5c-ab4e94db0b41" (UID: "a721247b-3436-4bb4-bc5c-ab4e94db0b41"). InnerVolumeSpecName "console-serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 11:30:09 crc kubenswrapper[4867]: I0126 11:30:09.933142 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a721247b-3436-4bb4-bc5c-ab4e94db0b41-kube-api-access-n6lpb" (OuterVolumeSpecName: "kube-api-access-n6lpb") pod "a721247b-3436-4bb4-bc5c-ab4e94db0b41" (UID: "a721247b-3436-4bb4-bc5c-ab4e94db0b41"). InnerVolumeSpecName "kube-api-access-n6lpb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 11:30:10 crc kubenswrapper[4867]: I0126 11:30:10.013958 4867 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/a721247b-3436-4bb4-bc5c-ab4e94db0b41-console-oauth-config\") on node \"crc\" DevicePath \"\"" Jan 26 11:30:10 crc kubenswrapper[4867]: I0126 11:30:10.013995 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n6lpb\" (UniqueName: \"kubernetes.io/projected/a721247b-3436-4bb4-bc5c-ab4e94db0b41-kube-api-access-n6lpb\") on node \"crc\" DevicePath \"\"" Jan 26 11:30:10 crc kubenswrapper[4867]: I0126 11:30:10.014007 4867 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/a721247b-3436-4bb4-bc5c-ab4e94db0b41-console-config\") on node \"crc\" DevicePath \"\"" Jan 26 11:30:10 crc kubenswrapper[4867]: I0126 11:30:10.014016 4867 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/a721247b-3436-4bb4-bc5c-ab4e94db0b41-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 26 11:30:10 crc kubenswrapper[4867]: I0126 11:30:10.014025 4867 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/a721247b-3436-4bb4-bc5c-ab4e94db0b41-service-ca\") on node \"crc\" DevicePath \"\"" Jan 26 11:30:10 crc kubenswrapper[4867]: I0126 11:30:10.014034 4867 reconciler_common.go:293] "Volume detached for 
volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/a721247b-3436-4bb4-bc5c-ab4e94db0b41-console-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 26 11:30:10 crc kubenswrapper[4867]: I0126 11:30:10.388704 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-dc94j_a721247b-3436-4bb4-bc5c-ab4e94db0b41/console/0.log" Jan 26 11:30:10 crc kubenswrapper[4867]: I0126 11:30:10.388773 4867 generic.go:334] "Generic (PLEG): container finished" podID="a721247b-3436-4bb4-bc5c-ab4e94db0b41" containerID="4b9b8df891414fa75c12aaeab647daa3c346d333f5ace563708249f9392cf0e9" exitCode=2 Jan 26 11:30:10 crc kubenswrapper[4867]: I0126 11:30:10.388817 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-dc94j" event={"ID":"a721247b-3436-4bb4-bc5c-ab4e94db0b41","Type":"ContainerDied","Data":"4b9b8df891414fa75c12aaeab647daa3c346d333f5ace563708249f9392cf0e9"} Jan 26 11:30:10 crc kubenswrapper[4867]: I0126 11:30:10.388853 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-dc94j" event={"ID":"a721247b-3436-4bb4-bc5c-ab4e94db0b41","Type":"ContainerDied","Data":"673ab4295917e01a19206667bf9dd0ba6fdff1e07bf922b7ed9174b5086d078d"} Jan 26 11:30:10 crc kubenswrapper[4867]: I0126 11:30:10.388847 4867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-dc94j" Jan 26 11:30:10 crc kubenswrapper[4867]: I0126 11:30:10.388873 4867 scope.go:117] "RemoveContainer" containerID="4b9b8df891414fa75c12aaeab647daa3c346d333f5ace563708249f9392cf0e9" Jan 26 11:30:10 crc kubenswrapper[4867]: I0126 11:30:10.426006 4867 scope.go:117] "RemoveContainer" containerID="4b9b8df891414fa75c12aaeab647daa3c346d333f5ace563708249f9392cf0e9" Jan 26 11:30:10 crc kubenswrapper[4867]: I0126 11:30:10.426350 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-dc94j"] Jan 26 11:30:10 crc kubenswrapper[4867]: E0126 11:30:10.426692 4867 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4b9b8df891414fa75c12aaeab647daa3c346d333f5ace563708249f9392cf0e9\": container with ID starting with 4b9b8df891414fa75c12aaeab647daa3c346d333f5ace563708249f9392cf0e9 not found: ID does not exist" containerID="4b9b8df891414fa75c12aaeab647daa3c346d333f5ace563708249f9392cf0e9" Jan 26 11:30:10 crc kubenswrapper[4867]: I0126 11:30:10.426743 4867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4b9b8df891414fa75c12aaeab647daa3c346d333f5ace563708249f9392cf0e9"} err="failed to get container status \"4b9b8df891414fa75c12aaeab647daa3c346d333f5ace563708249f9392cf0e9\": rpc error: code = NotFound desc = could not find container \"4b9b8df891414fa75c12aaeab647daa3c346d333f5ace563708249f9392cf0e9\": container with ID starting with 4b9b8df891414fa75c12aaeab647daa3c346d333f5ace563708249f9392cf0e9 not found: ID does not exist" Jan 26 11:30:10 crc kubenswrapper[4867]: I0126 11:30:10.431427 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-f9d7485db-dc94j"] Jan 26 11:30:10 crc kubenswrapper[4867]: I0126 11:30:10.572452 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a721247b-3436-4bb4-bc5c-ab4e94db0b41" 
path="/var/lib/kubelet/pods/a721247b-3436-4bb4-bc5c-ab4e94db0b41/volumes" Jan 26 11:30:11 crc kubenswrapper[4867]: I0126 11:30:11.084744 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcccbgd"] Jan 26 11:30:11 crc kubenswrapper[4867]: E0126 11:30:11.085046 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d785220c-c0b5-456d-9896-b35b1ed5ce1a" containerName="collect-profiles" Jan 26 11:30:11 crc kubenswrapper[4867]: I0126 11:30:11.085063 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="d785220c-c0b5-456d-9896-b35b1ed5ce1a" containerName="collect-profiles" Jan 26 11:30:11 crc kubenswrapper[4867]: E0126 11:30:11.085091 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a721247b-3436-4bb4-bc5c-ab4e94db0b41" containerName="console" Jan 26 11:30:11 crc kubenswrapper[4867]: I0126 11:30:11.085100 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="a721247b-3436-4bb4-bc5c-ab4e94db0b41" containerName="console" Jan 26 11:30:11 crc kubenswrapper[4867]: I0126 11:30:11.085257 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="a721247b-3436-4bb4-bc5c-ab4e94db0b41" containerName="console" Jan 26 11:30:11 crc kubenswrapper[4867]: I0126 11:30:11.085272 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="d785220c-c0b5-456d-9896-b35b1ed5ce1a" containerName="collect-profiles" Jan 26 11:30:11 crc kubenswrapper[4867]: I0126 11:30:11.086320 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcccbgd" Jan 26 11:30:11 crc kubenswrapper[4867]: I0126 11:30:11.090057 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Jan 26 11:30:11 crc kubenswrapper[4867]: I0126 11:30:11.095311 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcccbgd"] Jan 26 11:30:11 crc kubenswrapper[4867]: I0126 11:30:11.130506 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/5037ef99-c48d-4c78-a3bd-d767d51ab43f-bundle\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcccbgd\" (UID: \"5037ef99-c48d-4c78-a3bd-d767d51ab43f\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcccbgd" Jan 26 11:30:11 crc kubenswrapper[4867]: I0126 11:30:11.130576 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/5037ef99-c48d-4c78-a3bd-d767d51ab43f-util\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcccbgd\" (UID: \"5037ef99-c48d-4c78-a3bd-d767d51ab43f\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcccbgd" Jan 26 11:30:11 crc kubenswrapper[4867]: I0126 11:30:11.130679 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ndj8m\" (UniqueName: \"kubernetes.io/projected/5037ef99-c48d-4c78-a3bd-d767d51ab43f-kube-api-access-ndj8m\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcccbgd\" (UID: \"5037ef99-c48d-4c78-a3bd-d767d51ab43f\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcccbgd" Jan 26 11:30:11 crc kubenswrapper[4867]: 
I0126 11:30:11.232805 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ndj8m\" (UniqueName: \"kubernetes.io/projected/5037ef99-c48d-4c78-a3bd-d767d51ab43f-kube-api-access-ndj8m\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcccbgd\" (UID: \"5037ef99-c48d-4c78-a3bd-d767d51ab43f\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcccbgd" Jan 26 11:30:11 crc kubenswrapper[4867]: I0126 11:30:11.233193 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/5037ef99-c48d-4c78-a3bd-d767d51ab43f-bundle\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcccbgd\" (UID: \"5037ef99-c48d-4c78-a3bd-d767d51ab43f\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcccbgd" Jan 26 11:30:11 crc kubenswrapper[4867]: I0126 11:30:11.233368 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/5037ef99-c48d-4c78-a3bd-d767d51ab43f-util\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcccbgd\" (UID: \"5037ef99-c48d-4c78-a3bd-d767d51ab43f\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcccbgd" Jan 26 11:30:11 crc kubenswrapper[4867]: I0126 11:30:11.233978 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/5037ef99-c48d-4c78-a3bd-d767d51ab43f-bundle\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcccbgd\" (UID: \"5037ef99-c48d-4c78-a3bd-d767d51ab43f\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcccbgd" Jan 26 11:30:11 crc kubenswrapper[4867]: I0126 11:30:11.234168 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: 
\"kubernetes.io/empty-dir/5037ef99-c48d-4c78-a3bd-d767d51ab43f-util\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcccbgd\" (UID: \"5037ef99-c48d-4c78-a3bd-d767d51ab43f\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcccbgd" Jan 26 11:30:11 crc kubenswrapper[4867]: I0126 11:30:11.257175 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ndj8m\" (UniqueName: \"kubernetes.io/projected/5037ef99-c48d-4c78-a3bd-d767d51ab43f-kube-api-access-ndj8m\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcccbgd\" (UID: \"5037ef99-c48d-4c78-a3bd-d767d51ab43f\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcccbgd" Jan 26 11:30:11 crc kubenswrapper[4867]: I0126 11:30:11.405172 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcccbgd" Jan 26 11:30:11 crc kubenswrapper[4867]: I0126 11:30:11.648162 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcccbgd"] Jan 26 11:30:12 crc kubenswrapper[4867]: I0126 11:30:12.411011 4867 generic.go:334] "Generic (PLEG): container finished" podID="5037ef99-c48d-4c78-a3bd-d767d51ab43f" containerID="7f80030134c8724ace2d7684d043251f4f42fbf611b882c220fa3852e17379be" exitCode=0 Jan 26 11:30:12 crc kubenswrapper[4867]: I0126 11:30:12.411107 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcccbgd" event={"ID":"5037ef99-c48d-4c78-a3bd-d767d51ab43f","Type":"ContainerDied","Data":"7f80030134c8724ace2d7684d043251f4f42fbf611b882c220fa3852e17379be"} Jan 26 11:30:12 crc kubenswrapper[4867]: I0126 11:30:12.411523 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcccbgd" event={"ID":"5037ef99-c48d-4c78-a3bd-d767d51ab43f","Type":"ContainerStarted","Data":"5af7a50f1c24e881c5f99af06b6abcde845ebfd9b6d7720bda26cd519a83b184"} Jan 26 11:30:13 crc kubenswrapper[4867]: I0126 11:30:13.422490 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-mtd2x"] Jan 26 11:30:13 crc kubenswrapper[4867]: I0126 11:30:13.424007 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-mtd2x" Jan 26 11:30:13 crc kubenswrapper[4867]: I0126 11:30:13.443117 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-mtd2x"] Jan 26 11:30:13 crc kubenswrapper[4867]: I0126 11:30:13.465940 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/89dc1b40-b5fb-455b-9eef-d48ab8a7bdab-utilities\") pod \"redhat-operators-mtd2x\" (UID: \"89dc1b40-b5fb-455b-9eef-d48ab8a7bdab\") " pod="openshift-marketplace/redhat-operators-mtd2x" Jan 26 11:30:13 crc kubenswrapper[4867]: I0126 11:30:13.466131 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/89dc1b40-b5fb-455b-9eef-d48ab8a7bdab-catalog-content\") pod \"redhat-operators-mtd2x\" (UID: \"89dc1b40-b5fb-455b-9eef-d48ab8a7bdab\") " pod="openshift-marketplace/redhat-operators-mtd2x" Jan 26 11:30:13 crc kubenswrapper[4867]: I0126 11:30:13.466358 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rpsgz\" (UniqueName: \"kubernetes.io/projected/89dc1b40-b5fb-455b-9eef-d48ab8a7bdab-kube-api-access-rpsgz\") pod \"redhat-operators-mtd2x\" (UID: \"89dc1b40-b5fb-455b-9eef-d48ab8a7bdab\") " 
pod="openshift-marketplace/redhat-operators-mtd2x" Jan 26 11:30:13 crc kubenswrapper[4867]: I0126 11:30:13.567875 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rpsgz\" (UniqueName: \"kubernetes.io/projected/89dc1b40-b5fb-455b-9eef-d48ab8a7bdab-kube-api-access-rpsgz\") pod \"redhat-operators-mtd2x\" (UID: \"89dc1b40-b5fb-455b-9eef-d48ab8a7bdab\") " pod="openshift-marketplace/redhat-operators-mtd2x" Jan 26 11:30:13 crc kubenswrapper[4867]: I0126 11:30:13.567929 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/89dc1b40-b5fb-455b-9eef-d48ab8a7bdab-utilities\") pod \"redhat-operators-mtd2x\" (UID: \"89dc1b40-b5fb-455b-9eef-d48ab8a7bdab\") " pod="openshift-marketplace/redhat-operators-mtd2x" Jan 26 11:30:13 crc kubenswrapper[4867]: I0126 11:30:13.567954 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/89dc1b40-b5fb-455b-9eef-d48ab8a7bdab-catalog-content\") pod \"redhat-operators-mtd2x\" (UID: \"89dc1b40-b5fb-455b-9eef-d48ab8a7bdab\") " pod="openshift-marketplace/redhat-operators-mtd2x" Jan 26 11:30:13 crc kubenswrapper[4867]: I0126 11:30:13.568544 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/89dc1b40-b5fb-455b-9eef-d48ab8a7bdab-catalog-content\") pod \"redhat-operators-mtd2x\" (UID: \"89dc1b40-b5fb-455b-9eef-d48ab8a7bdab\") " pod="openshift-marketplace/redhat-operators-mtd2x" Jan 26 11:30:13 crc kubenswrapper[4867]: I0126 11:30:13.568720 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/89dc1b40-b5fb-455b-9eef-d48ab8a7bdab-utilities\") pod \"redhat-operators-mtd2x\" (UID: \"89dc1b40-b5fb-455b-9eef-d48ab8a7bdab\") " pod="openshift-marketplace/redhat-operators-mtd2x" Jan 26 11:30:13 crc 
kubenswrapper[4867]: I0126 11:30:13.590341 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rpsgz\" (UniqueName: \"kubernetes.io/projected/89dc1b40-b5fb-455b-9eef-d48ab8a7bdab-kube-api-access-rpsgz\") pod \"redhat-operators-mtd2x\" (UID: \"89dc1b40-b5fb-455b-9eef-d48ab8a7bdab\") " pod="openshift-marketplace/redhat-operators-mtd2x" Jan 26 11:30:13 crc kubenswrapper[4867]: I0126 11:30:13.754854 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-mtd2x" Jan 26 11:30:13 crc kubenswrapper[4867]: I0126 11:30:13.959254 4867 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Jan 26 11:30:14 crc kubenswrapper[4867]: I0126 11:30:14.078671 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-mtd2x"] Jan 26 11:30:14 crc kubenswrapper[4867]: I0126 11:30:14.429275 4867 generic.go:334] "Generic (PLEG): container finished" podID="5037ef99-c48d-4c78-a3bd-d767d51ab43f" containerID="c3fd956f43f4fbbaa9b5868336a254078e9bb3f1201eb541d6d09822380cc686" exitCode=0 Jan 26 11:30:14 crc kubenswrapper[4867]: I0126 11:30:14.429384 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcccbgd" event={"ID":"5037ef99-c48d-4c78-a3bd-d767d51ab43f","Type":"ContainerDied","Data":"c3fd956f43f4fbbaa9b5868336a254078e9bb3f1201eb541d6d09822380cc686"} Jan 26 11:30:14 crc kubenswrapper[4867]: I0126 11:30:14.431618 4867 generic.go:334] "Generic (PLEG): container finished" podID="89dc1b40-b5fb-455b-9eef-d48ab8a7bdab" containerID="e6033dc2bd7291d591769e08f67cf5208c424dcd852803573ee439faca86dd41" exitCode=0 Jan 26 11:30:14 crc kubenswrapper[4867]: I0126 11:30:14.431712 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mtd2x" 
event={"ID":"89dc1b40-b5fb-455b-9eef-d48ab8a7bdab","Type":"ContainerDied","Data":"e6033dc2bd7291d591769e08f67cf5208c424dcd852803573ee439faca86dd41"} Jan 26 11:30:14 crc kubenswrapper[4867]: I0126 11:30:14.431819 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mtd2x" event={"ID":"89dc1b40-b5fb-455b-9eef-d48ab8a7bdab","Type":"ContainerStarted","Data":"b3e90ab18b0b60dd7b7e72cc3576e709fbab8d6d41df80b0dddf605ec8fcfd3b"} Jan 26 11:30:15 crc kubenswrapper[4867]: I0126 11:30:15.442463 4867 generic.go:334] "Generic (PLEG): container finished" podID="5037ef99-c48d-4c78-a3bd-d767d51ab43f" containerID="4119ea04352b8f48767df2f1d521afc6a2491c59675090dd4526f65a53a942b8" exitCode=0 Jan 26 11:30:15 crc kubenswrapper[4867]: I0126 11:30:15.442578 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcccbgd" event={"ID":"5037ef99-c48d-4c78-a3bd-d767d51ab43f","Type":"ContainerDied","Data":"4119ea04352b8f48767df2f1d521afc6a2491c59675090dd4526f65a53a942b8"} Jan 26 11:30:15 crc kubenswrapper[4867]: I0126 11:30:15.445020 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mtd2x" event={"ID":"89dc1b40-b5fb-455b-9eef-d48ab8a7bdab","Type":"ContainerStarted","Data":"67df662835ea68cb701bcbdbe05175073900c7a1aea2482b394c5fa2362d3bf6"} Jan 26 11:30:16 crc kubenswrapper[4867]: I0126 11:30:16.457355 4867 generic.go:334] "Generic (PLEG): container finished" podID="89dc1b40-b5fb-455b-9eef-d48ab8a7bdab" containerID="67df662835ea68cb701bcbdbe05175073900c7a1aea2482b394c5fa2362d3bf6" exitCode=0 Jan 26 11:30:16 crc kubenswrapper[4867]: I0126 11:30:16.457516 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mtd2x" event={"ID":"89dc1b40-b5fb-455b-9eef-d48ab8a7bdab","Type":"ContainerDied","Data":"67df662835ea68cb701bcbdbe05175073900c7a1aea2482b394c5fa2362d3bf6"} Jan 26 11:30:16 crc 
kubenswrapper[4867]: I0126 11:30:16.767236 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcccbgd" Jan 26 11:30:16 crc kubenswrapper[4867]: I0126 11:30:16.915280 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ndj8m\" (UniqueName: \"kubernetes.io/projected/5037ef99-c48d-4c78-a3bd-d767d51ab43f-kube-api-access-ndj8m\") pod \"5037ef99-c48d-4c78-a3bd-d767d51ab43f\" (UID: \"5037ef99-c48d-4c78-a3bd-d767d51ab43f\") " Jan 26 11:30:16 crc kubenswrapper[4867]: I0126 11:30:16.915437 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/5037ef99-c48d-4c78-a3bd-d767d51ab43f-bundle\") pod \"5037ef99-c48d-4c78-a3bd-d767d51ab43f\" (UID: \"5037ef99-c48d-4c78-a3bd-d767d51ab43f\") " Jan 26 11:30:16 crc kubenswrapper[4867]: I0126 11:30:16.915498 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/5037ef99-c48d-4c78-a3bd-d767d51ab43f-util\") pod \"5037ef99-c48d-4c78-a3bd-d767d51ab43f\" (UID: \"5037ef99-c48d-4c78-a3bd-d767d51ab43f\") " Jan 26 11:30:16 crc kubenswrapper[4867]: I0126 11:30:16.917402 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5037ef99-c48d-4c78-a3bd-d767d51ab43f-bundle" (OuterVolumeSpecName: "bundle") pod "5037ef99-c48d-4c78-a3bd-d767d51ab43f" (UID: "5037ef99-c48d-4c78-a3bd-d767d51ab43f"). InnerVolumeSpecName "bundle". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 11:30:16 crc kubenswrapper[4867]: I0126 11:30:16.918953 4867 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/5037ef99-c48d-4c78-a3bd-d767d51ab43f-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 11:30:16 crc kubenswrapper[4867]: I0126 11:30:16.922670 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5037ef99-c48d-4c78-a3bd-d767d51ab43f-kube-api-access-ndj8m" (OuterVolumeSpecName: "kube-api-access-ndj8m") pod "5037ef99-c48d-4c78-a3bd-d767d51ab43f" (UID: "5037ef99-c48d-4c78-a3bd-d767d51ab43f"). InnerVolumeSpecName "kube-api-access-ndj8m". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 11:30:16 crc kubenswrapper[4867]: I0126 11:30:16.930290 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5037ef99-c48d-4c78-a3bd-d767d51ab43f-util" (OuterVolumeSpecName: "util") pod "5037ef99-c48d-4c78-a3bd-d767d51ab43f" (UID: "5037ef99-c48d-4c78-a3bd-d767d51ab43f"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 11:30:17 crc kubenswrapper[4867]: I0126 11:30:17.019633 4867 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/5037ef99-c48d-4c78-a3bd-d767d51ab43f-util\") on node \"crc\" DevicePath \"\"" Jan 26 11:30:17 crc kubenswrapper[4867]: I0126 11:30:17.019671 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ndj8m\" (UniqueName: \"kubernetes.io/projected/5037ef99-c48d-4c78-a3bd-d767d51ab43f-kube-api-access-ndj8m\") on node \"crc\" DevicePath \"\"" Jan 26 11:30:17 crc kubenswrapper[4867]: I0126 11:30:17.466041 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcccbgd" event={"ID":"5037ef99-c48d-4c78-a3bd-d767d51ab43f","Type":"ContainerDied","Data":"5af7a50f1c24e881c5f99af06b6abcde845ebfd9b6d7720bda26cd519a83b184"} Jan 26 11:30:17 crc kubenswrapper[4867]: I0126 11:30:17.466093 4867 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5af7a50f1c24e881c5f99af06b6abcde845ebfd9b6d7720bda26cd519a83b184" Jan 26 11:30:17 crc kubenswrapper[4867]: I0126 11:30:17.466169 4867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcccbgd" Jan 26 11:30:17 crc kubenswrapper[4867]: I0126 11:30:17.469059 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mtd2x" event={"ID":"89dc1b40-b5fb-455b-9eef-d48ab8a7bdab","Type":"ContainerStarted","Data":"391c97156a874ae13430c07c0d69ab1ce6cd8e2d8a76f5a4471d768c52b3fa4b"} Jan 26 11:30:17 crc kubenswrapper[4867]: I0126 11:30:17.494643 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-mtd2x" podStartSLOduration=2.07925073 podStartE2EDuration="4.494618755s" podCreationTimestamp="2026-01-26 11:30:13 +0000 UTC" firstStartedPulling="2026-01-26 11:30:14.43347546 +0000 UTC m=+764.132050370" lastFinishedPulling="2026-01-26 11:30:16.848843455 +0000 UTC m=+766.547418395" observedRunningTime="2026-01-26 11:30:17.489364411 +0000 UTC m=+767.187939331" watchObservedRunningTime="2026-01-26 11:30:17.494618755 +0000 UTC m=+767.193193675" Jan 26 11:30:23 crc kubenswrapper[4867]: I0126 11:30:23.755514 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-mtd2x" Jan 26 11:30:23 crc kubenswrapper[4867]: I0126 11:30:23.755999 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-mtd2x" Jan 26 11:30:23 crc kubenswrapper[4867]: I0126 11:30:23.795875 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-mtd2x" Jan 26 11:30:24 crc kubenswrapper[4867]: I0126 11:30:24.555836 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-mtd2x" Jan 26 11:30:25 crc kubenswrapper[4867]: I0126 11:30:25.211094 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-mtd2x"] Jan 26 11:30:26 
crc kubenswrapper[4867]: I0126 11:30:26.529496 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-mtd2x" podUID="89dc1b40-b5fb-455b-9eef-d48ab8a7bdab" containerName="registry-server" containerID="cri-o://391c97156a874ae13430c07c0d69ab1ce6cd8e2d8a76f5a4471d768c52b3fa4b" gracePeriod=2 Jan 26 11:30:26 crc kubenswrapper[4867]: I0126 11:30:26.603786 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-controller-manager-b6879bdfc-xwrhn"] Jan 26 11:30:26 crc kubenswrapper[4867]: E0126 11:30:26.604524 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5037ef99-c48d-4c78-a3bd-d767d51ab43f" containerName="extract" Jan 26 11:30:26 crc kubenswrapper[4867]: I0126 11:30:26.604539 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="5037ef99-c48d-4c78-a3bd-d767d51ab43f" containerName="extract" Jan 26 11:30:26 crc kubenswrapper[4867]: E0126 11:30:26.604552 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5037ef99-c48d-4c78-a3bd-d767d51ab43f" containerName="pull" Jan 26 11:30:26 crc kubenswrapper[4867]: I0126 11:30:26.604560 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="5037ef99-c48d-4c78-a3bd-d767d51ab43f" containerName="pull" Jan 26 11:30:26 crc kubenswrapper[4867]: E0126 11:30:26.604568 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5037ef99-c48d-4c78-a3bd-d767d51ab43f" containerName="util" Jan 26 11:30:26 crc kubenswrapper[4867]: I0126 11:30:26.604576 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="5037ef99-c48d-4c78-a3bd-d767d51ab43f" containerName="util" Jan 26 11:30:26 crc kubenswrapper[4867]: I0126 11:30:26.604704 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="5037ef99-c48d-4c78-a3bd-d767d51ab43f" containerName="extract" Jan 26 11:30:26 crc kubenswrapper[4867]: I0126 11:30:26.605153 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-b6879bdfc-xwrhn" Jan 26 11:30:26 crc kubenswrapper[4867]: I0126 11:30:26.609327 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"openshift-service-ca.crt" Jan 26 11:30:26 crc kubenswrapper[4867]: I0126 11:30:26.609421 4867 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-cert" Jan 26 11:30:26 crc kubenswrapper[4867]: I0126 11:30:26.609434 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"kube-root-ca.crt" Jan 26 11:30:26 crc kubenswrapper[4867]: I0126 11:30:26.609469 4867 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-controller-manager-service-cert" Jan 26 11:30:26 crc kubenswrapper[4867]: I0126 11:30:26.609492 4867 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"manager-account-dockercfg-lkhbh" Jan 26 11:30:26 crc kubenswrapper[4867]: I0126 11:30:26.621602 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-b6879bdfc-xwrhn"] Jan 26 11:30:26 crc kubenswrapper[4867]: I0126 11:30:26.673257 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d2jmf\" (UniqueName: \"kubernetes.io/projected/e6c18bce-ada3-4e21-8a80-fa9bc4fa01f4-kube-api-access-d2jmf\") pod \"metallb-operator-controller-manager-b6879bdfc-xwrhn\" (UID: \"e6c18bce-ada3-4e21-8a80-fa9bc4fa01f4\") " pod="metallb-system/metallb-operator-controller-manager-b6879bdfc-xwrhn" Jan 26 11:30:26 crc kubenswrapper[4867]: I0126 11:30:26.673334 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/e6c18bce-ada3-4e21-8a80-fa9bc4fa01f4-apiservice-cert\") pod 
\"metallb-operator-controller-manager-b6879bdfc-xwrhn\" (UID: \"e6c18bce-ada3-4e21-8a80-fa9bc4fa01f4\") " pod="metallb-system/metallb-operator-controller-manager-b6879bdfc-xwrhn" Jan 26 11:30:26 crc kubenswrapper[4867]: I0126 11:30:26.673454 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/e6c18bce-ada3-4e21-8a80-fa9bc4fa01f4-webhook-cert\") pod \"metallb-operator-controller-manager-b6879bdfc-xwrhn\" (UID: \"e6c18bce-ada3-4e21-8a80-fa9bc4fa01f4\") " pod="metallb-system/metallb-operator-controller-manager-b6879bdfc-xwrhn" Jan 26 11:30:26 crc kubenswrapper[4867]: I0126 11:30:26.774428 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/e6c18bce-ada3-4e21-8a80-fa9bc4fa01f4-webhook-cert\") pod \"metallb-operator-controller-manager-b6879bdfc-xwrhn\" (UID: \"e6c18bce-ada3-4e21-8a80-fa9bc4fa01f4\") " pod="metallb-system/metallb-operator-controller-manager-b6879bdfc-xwrhn" Jan 26 11:30:26 crc kubenswrapper[4867]: I0126 11:30:26.774506 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d2jmf\" (UniqueName: \"kubernetes.io/projected/e6c18bce-ada3-4e21-8a80-fa9bc4fa01f4-kube-api-access-d2jmf\") pod \"metallb-operator-controller-manager-b6879bdfc-xwrhn\" (UID: \"e6c18bce-ada3-4e21-8a80-fa9bc4fa01f4\") " pod="metallb-system/metallb-operator-controller-manager-b6879bdfc-xwrhn" Jan 26 11:30:26 crc kubenswrapper[4867]: I0126 11:30:26.774533 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/e6c18bce-ada3-4e21-8a80-fa9bc4fa01f4-apiservice-cert\") pod \"metallb-operator-controller-manager-b6879bdfc-xwrhn\" (UID: \"e6c18bce-ada3-4e21-8a80-fa9bc4fa01f4\") " pod="metallb-system/metallb-operator-controller-manager-b6879bdfc-xwrhn" Jan 26 11:30:26 crc kubenswrapper[4867]: 
I0126 11:30:26.783182 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/e6c18bce-ada3-4e21-8a80-fa9bc4fa01f4-apiservice-cert\") pod \"metallb-operator-controller-manager-b6879bdfc-xwrhn\" (UID: \"e6c18bce-ada3-4e21-8a80-fa9bc4fa01f4\") " pod="metallb-system/metallb-operator-controller-manager-b6879bdfc-xwrhn" Jan 26 11:30:26 crc kubenswrapper[4867]: I0126 11:30:26.786823 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/e6c18bce-ada3-4e21-8a80-fa9bc4fa01f4-webhook-cert\") pod \"metallb-operator-controller-manager-b6879bdfc-xwrhn\" (UID: \"e6c18bce-ada3-4e21-8a80-fa9bc4fa01f4\") " pod="metallb-system/metallb-operator-controller-manager-b6879bdfc-xwrhn" Jan 26 11:30:26 crc kubenswrapper[4867]: I0126 11:30:26.795146 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d2jmf\" (UniqueName: \"kubernetes.io/projected/e6c18bce-ada3-4e21-8a80-fa9bc4fa01f4-kube-api-access-d2jmf\") pod \"metallb-operator-controller-manager-b6879bdfc-xwrhn\" (UID: \"e6c18bce-ada3-4e21-8a80-fa9bc4fa01f4\") " pod="metallb-system/metallb-operator-controller-manager-b6879bdfc-xwrhn" Jan 26 11:30:26 crc kubenswrapper[4867]: I0126 11:30:26.926096 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-b6879bdfc-xwrhn" Jan 26 11:30:27 crc kubenswrapper[4867]: I0126 11:30:27.020743 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-webhook-server-5cbc548b4-c9cg5"] Jan 26 11:30:27 crc kubenswrapper[4867]: I0126 11:30:27.021627 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-5cbc548b4-c9cg5" Jan 26 11:30:27 crc kubenswrapper[4867]: I0126 11:30:27.023879 4867 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Jan 26 11:30:27 crc kubenswrapper[4867]: I0126 11:30:27.024256 4867 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-service-cert" Jan 26 11:30:27 crc kubenswrapper[4867]: I0126 11:30:27.029317 4867 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-dockercfg-dv98h" Jan 26 11:30:27 crc kubenswrapper[4867]: I0126 11:30:27.031917 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-5cbc548b4-c9cg5"] Jan 26 11:30:27 crc kubenswrapper[4867]: I0126 11:30:27.085353 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/0898e985-06ad-4cde-b358-75c0e395d72d-apiservice-cert\") pod \"metallb-operator-webhook-server-5cbc548b4-c9cg5\" (UID: \"0898e985-06ad-4cde-b358-75c0e395d72d\") " pod="metallb-system/metallb-operator-webhook-server-5cbc548b4-c9cg5" Jan 26 11:30:27 crc kubenswrapper[4867]: I0126 11:30:27.085444 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-72pww\" (UniqueName: \"kubernetes.io/projected/0898e985-06ad-4cde-b358-75c0e395d72d-kube-api-access-72pww\") pod \"metallb-operator-webhook-server-5cbc548b4-c9cg5\" (UID: \"0898e985-06ad-4cde-b358-75c0e395d72d\") " pod="metallb-system/metallb-operator-webhook-server-5cbc548b4-c9cg5" Jan 26 11:30:27 crc kubenswrapper[4867]: I0126 11:30:27.085568 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: 
\"kubernetes.io/secret/0898e985-06ad-4cde-b358-75c0e395d72d-webhook-cert\") pod \"metallb-operator-webhook-server-5cbc548b4-c9cg5\" (UID: \"0898e985-06ad-4cde-b358-75c0e395d72d\") " pod="metallb-system/metallb-operator-webhook-server-5cbc548b4-c9cg5" Jan 26 11:30:27 crc kubenswrapper[4867]: I0126 11:30:27.189951 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/0898e985-06ad-4cde-b358-75c0e395d72d-apiservice-cert\") pod \"metallb-operator-webhook-server-5cbc548b4-c9cg5\" (UID: \"0898e985-06ad-4cde-b358-75c0e395d72d\") " pod="metallb-system/metallb-operator-webhook-server-5cbc548b4-c9cg5" Jan 26 11:30:27 crc kubenswrapper[4867]: I0126 11:30:27.190422 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-72pww\" (UniqueName: \"kubernetes.io/projected/0898e985-06ad-4cde-b358-75c0e395d72d-kube-api-access-72pww\") pod \"metallb-operator-webhook-server-5cbc548b4-c9cg5\" (UID: \"0898e985-06ad-4cde-b358-75c0e395d72d\") " pod="metallb-system/metallb-operator-webhook-server-5cbc548b4-c9cg5" Jan 26 11:30:27 crc kubenswrapper[4867]: I0126 11:30:27.190460 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/0898e985-06ad-4cde-b358-75c0e395d72d-webhook-cert\") pod \"metallb-operator-webhook-server-5cbc548b4-c9cg5\" (UID: \"0898e985-06ad-4cde-b358-75c0e395d72d\") " pod="metallb-system/metallb-operator-webhook-server-5cbc548b4-c9cg5" Jan 26 11:30:27 crc kubenswrapper[4867]: I0126 11:30:27.196076 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/0898e985-06ad-4cde-b358-75c0e395d72d-apiservice-cert\") pod \"metallb-operator-webhook-server-5cbc548b4-c9cg5\" (UID: \"0898e985-06ad-4cde-b358-75c0e395d72d\") " pod="metallb-system/metallb-operator-webhook-server-5cbc548b4-c9cg5" Jan 26 11:30:27 crc 
kubenswrapper[4867]: I0126 11:30:27.196635 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/0898e985-06ad-4cde-b358-75c0e395d72d-webhook-cert\") pod \"metallb-operator-webhook-server-5cbc548b4-c9cg5\" (UID: \"0898e985-06ad-4cde-b358-75c0e395d72d\") " pod="metallb-system/metallb-operator-webhook-server-5cbc548b4-c9cg5" Jan 26 11:30:27 crc kubenswrapper[4867]: I0126 11:30:27.212517 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-72pww\" (UniqueName: \"kubernetes.io/projected/0898e985-06ad-4cde-b358-75c0e395d72d-kube-api-access-72pww\") pod \"metallb-operator-webhook-server-5cbc548b4-c9cg5\" (UID: \"0898e985-06ad-4cde-b358-75c0e395d72d\") " pod="metallb-system/metallb-operator-webhook-server-5cbc548b4-c9cg5" Jan 26 11:30:27 crc kubenswrapper[4867]: I0126 11:30:27.320997 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-b6879bdfc-xwrhn"] Jan 26 11:30:27 crc kubenswrapper[4867]: I0126 11:30:27.344658 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-5cbc548b4-c9cg5" Jan 26 11:30:27 crc kubenswrapper[4867]: I0126 11:30:27.536449 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-b6879bdfc-xwrhn" event={"ID":"e6c18bce-ada3-4e21-8a80-fa9bc4fa01f4","Type":"ContainerStarted","Data":"3d3646b09bb6cea55c47dc38cb5ac9102fffc36cf2f1cd179f75218e80333d5f"} Jan 26 11:30:27 crc kubenswrapper[4867]: I0126 11:30:27.564330 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-5cbc548b4-c9cg5"] Jan 26 11:30:28 crc kubenswrapper[4867]: I0126 11:30:28.543978 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-5cbc548b4-c9cg5" event={"ID":"0898e985-06ad-4cde-b358-75c0e395d72d","Type":"ContainerStarted","Data":"f8fd2d9d022b0fa927b6fbf6a9ace7e946a9f2b9a7cb05edc4a12ae9fd563f61"} Jan 26 11:30:29 crc kubenswrapper[4867]: I0126 11:30:29.553404 4867 generic.go:334] "Generic (PLEG): container finished" podID="89dc1b40-b5fb-455b-9eef-d48ab8a7bdab" containerID="391c97156a874ae13430c07c0d69ab1ce6cd8e2d8a76f5a4471d768c52b3fa4b" exitCode=0 Jan 26 11:30:29 crc kubenswrapper[4867]: I0126 11:30:29.553474 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mtd2x" event={"ID":"89dc1b40-b5fb-455b-9eef-d48ab8a7bdab","Type":"ContainerDied","Data":"391c97156a874ae13430c07c0d69ab1ce6cd8e2d8a76f5a4471d768c52b3fa4b"} Jan 26 11:30:30 crc kubenswrapper[4867]: I0126 11:30:30.845724 4867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-mtd2x" Jan 26 11:30:30 crc kubenswrapper[4867]: I0126 11:30:30.951579 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/89dc1b40-b5fb-455b-9eef-d48ab8a7bdab-catalog-content\") pod \"89dc1b40-b5fb-455b-9eef-d48ab8a7bdab\" (UID: \"89dc1b40-b5fb-455b-9eef-d48ab8a7bdab\") " Jan 26 11:30:30 crc kubenswrapper[4867]: I0126 11:30:30.951651 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/89dc1b40-b5fb-455b-9eef-d48ab8a7bdab-utilities\") pod \"89dc1b40-b5fb-455b-9eef-d48ab8a7bdab\" (UID: \"89dc1b40-b5fb-455b-9eef-d48ab8a7bdab\") " Jan 26 11:30:30 crc kubenswrapper[4867]: I0126 11:30:30.951694 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rpsgz\" (UniqueName: \"kubernetes.io/projected/89dc1b40-b5fb-455b-9eef-d48ab8a7bdab-kube-api-access-rpsgz\") pod \"89dc1b40-b5fb-455b-9eef-d48ab8a7bdab\" (UID: \"89dc1b40-b5fb-455b-9eef-d48ab8a7bdab\") " Jan 26 11:30:30 crc kubenswrapper[4867]: I0126 11:30:30.961410 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/89dc1b40-b5fb-455b-9eef-d48ab8a7bdab-utilities" (OuterVolumeSpecName: "utilities") pod "89dc1b40-b5fb-455b-9eef-d48ab8a7bdab" (UID: "89dc1b40-b5fb-455b-9eef-d48ab8a7bdab"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 11:30:30 crc kubenswrapper[4867]: I0126 11:30:30.976411 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/89dc1b40-b5fb-455b-9eef-d48ab8a7bdab-kube-api-access-rpsgz" (OuterVolumeSpecName: "kube-api-access-rpsgz") pod "89dc1b40-b5fb-455b-9eef-d48ab8a7bdab" (UID: "89dc1b40-b5fb-455b-9eef-d48ab8a7bdab"). InnerVolumeSpecName "kube-api-access-rpsgz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 11:30:31 crc kubenswrapper[4867]: I0126 11:30:31.053600 4867 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/89dc1b40-b5fb-455b-9eef-d48ab8a7bdab-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 11:30:31 crc kubenswrapper[4867]: I0126 11:30:31.053651 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rpsgz\" (UniqueName: \"kubernetes.io/projected/89dc1b40-b5fb-455b-9eef-d48ab8a7bdab-kube-api-access-rpsgz\") on node \"crc\" DevicePath \"\"" Jan 26 11:30:31 crc kubenswrapper[4867]: I0126 11:30:31.086364 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/89dc1b40-b5fb-455b-9eef-d48ab8a7bdab-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "89dc1b40-b5fb-455b-9eef-d48ab8a7bdab" (UID: "89dc1b40-b5fb-455b-9eef-d48ab8a7bdab"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 11:30:31 crc kubenswrapper[4867]: I0126 11:30:31.156155 4867 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/89dc1b40-b5fb-455b-9eef-d48ab8a7bdab-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 11:30:31 crc kubenswrapper[4867]: I0126 11:30:31.573071 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mtd2x" event={"ID":"89dc1b40-b5fb-455b-9eef-d48ab8a7bdab","Type":"ContainerDied","Data":"b3e90ab18b0b60dd7b7e72cc3576e709fbab8d6d41df80b0dddf605ec8fcfd3b"} Jan 26 11:30:31 crc kubenswrapper[4867]: I0126 11:30:31.573145 4867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-mtd2x" Jan 26 11:30:31 crc kubenswrapper[4867]: I0126 11:30:31.573165 4867 scope.go:117] "RemoveContainer" containerID="391c97156a874ae13430c07c0d69ab1ce6cd8e2d8a76f5a4471d768c52b3fa4b" Jan 26 11:30:31 crc kubenswrapper[4867]: I0126 11:30:31.605693 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-mtd2x"] Jan 26 11:30:31 crc kubenswrapper[4867]: I0126 11:30:31.608704 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-mtd2x"] Jan 26 11:30:32 crc kubenswrapper[4867]: I0126 11:30:32.128963 4867 scope.go:117] "RemoveContainer" containerID="67df662835ea68cb701bcbdbe05175073900c7a1aea2482b394c5fa2362d3bf6" Jan 26 11:30:32 crc kubenswrapper[4867]: I0126 11:30:32.571166 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="89dc1b40-b5fb-455b-9eef-d48ab8a7bdab" path="/var/lib/kubelet/pods/89dc1b40-b5fb-455b-9eef-d48ab8a7bdab/volumes" Jan 26 11:30:33 crc kubenswrapper[4867]: I0126 11:30:33.677584 4867 scope.go:117] "RemoveContainer" containerID="e6033dc2bd7291d591769e08f67cf5208c424dcd852803573ee439faca86dd41" Jan 26 11:30:34 crc kubenswrapper[4867]: I0126 11:30:34.613974 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-5cbc548b4-c9cg5" event={"ID":"0898e985-06ad-4cde-b358-75c0e395d72d","Type":"ContainerStarted","Data":"6edbe489feef5ca984e9a788c72fca730d4adfc1b3f46549c7310f9e4e79c94b"} Jan 26 11:30:34 crc kubenswrapper[4867]: I0126 11:30:34.615356 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-webhook-server-5cbc548b4-c9cg5" Jan 26 11:30:34 crc kubenswrapper[4867]: I0126 11:30:34.618766 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-b6879bdfc-xwrhn" 
event={"ID":"e6c18bce-ada3-4e21-8a80-fa9bc4fa01f4","Type":"ContainerStarted","Data":"1da0266e3f06807ab43bfe285970a42ab57c2230c0983aa98b483ffe467e1a40"} Jan 26 11:30:34 crc kubenswrapper[4867]: I0126 11:30:34.618957 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-controller-manager-b6879bdfc-xwrhn" Jan 26 11:30:34 crc kubenswrapper[4867]: I0126 11:30:34.645253 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-webhook-server-5cbc548b4-c9cg5" podStartSLOduration=2.515015774 podStartE2EDuration="8.645213907s" podCreationTimestamp="2026-01-26 11:30:26 +0000 UTC" firstStartedPulling="2026-01-26 11:30:27.579809813 +0000 UTC m=+777.278384723" lastFinishedPulling="2026-01-26 11:30:33.710007946 +0000 UTC m=+783.408582856" observedRunningTime="2026-01-26 11:30:34.637593972 +0000 UTC m=+784.336168882" watchObservedRunningTime="2026-01-26 11:30:34.645213907 +0000 UTC m=+784.343788817" Jan 26 11:30:34 crc kubenswrapper[4867]: I0126 11:30:34.657739 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-controller-manager-b6879bdfc-xwrhn" podStartSLOduration=2.2868668 podStartE2EDuration="8.657712879s" podCreationTimestamp="2026-01-26 11:30:26 +0000 UTC" firstStartedPulling="2026-01-26 11:30:27.336393429 +0000 UTC m=+777.034968339" lastFinishedPulling="2026-01-26 11:30:33.707239508 +0000 UTC m=+783.405814418" observedRunningTime="2026-01-26 11:30:34.654975361 +0000 UTC m=+784.353550281" watchObservedRunningTime="2026-01-26 11:30:34.657712879 +0000 UTC m=+784.356287789" Jan 26 11:30:36 crc kubenswrapper[4867]: I0126 11:30:36.294514 4867 patch_prober.go:28] interesting pod/machine-config-daemon-g6cth container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" 
start-of-body= Jan 26 11:30:36 crc kubenswrapper[4867]: I0126 11:30:36.294602 4867 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-g6cth" podUID="115cad9f-057f-4e63-b408-8fa7a358a191" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 11:30:36 crc kubenswrapper[4867]: I0126 11:30:36.294665 4867 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-g6cth" Jan 26 11:30:36 crc kubenswrapper[4867]: I0126 11:30:36.295503 4867 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"3d80268128b8588b5243ae8da874837feaca71a462cb1a50fe2432786b4b83de"} pod="openshift-machine-config-operator/machine-config-daemon-g6cth" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 26 11:30:36 crc kubenswrapper[4867]: I0126 11:30:36.295568 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-g6cth" podUID="115cad9f-057f-4e63-b408-8fa7a358a191" containerName="machine-config-daemon" containerID="cri-o://3d80268128b8588b5243ae8da874837feaca71a462cb1a50fe2432786b4b83de" gracePeriod=600 Jan 26 11:30:36 crc kubenswrapper[4867]: I0126 11:30:36.634627 4867 generic.go:334] "Generic (PLEG): container finished" podID="115cad9f-057f-4e63-b408-8fa7a358a191" containerID="3d80268128b8588b5243ae8da874837feaca71a462cb1a50fe2432786b4b83de" exitCode=0 Jan 26 11:30:36 crc kubenswrapper[4867]: I0126 11:30:36.634831 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-g6cth" 
event={"ID":"115cad9f-057f-4e63-b408-8fa7a358a191","Type":"ContainerDied","Data":"3d80268128b8588b5243ae8da874837feaca71a462cb1a50fe2432786b4b83de"}
Jan 26 11:30:36 crc kubenswrapper[4867]: I0126 11:30:36.635253 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-g6cth" event={"ID":"115cad9f-057f-4e63-b408-8fa7a358a191","Type":"ContainerStarted","Data":"f4568ef927141a7a2944fe130fff11fd99ada292de5ff857f1ccce612a5d941d"}
Jan 26 11:30:36 crc kubenswrapper[4867]: I0126 11:30:36.635287 4867 scope.go:117] "RemoveContainer" containerID="0fe8ca3d314e4d17df3b97806d9aca627e634096754401de141f98cba0b737ca"
Jan 26 11:30:47 crc kubenswrapper[4867]: I0126 11:30:47.352760 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-webhook-server-5cbc548b4-c9cg5"
Jan 26 11:31:06 crc kubenswrapper[4867]: I0126 11:31:06.929495 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-controller-manager-b6879bdfc-xwrhn"
Jan 26 11:31:07 crc kubenswrapper[4867]: I0126 11:31:07.798282 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-fdvhb"]
Jan 26 11:31:07 crc kubenswrapper[4867]: E0126 11:31:07.798941 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="89dc1b40-b5fb-455b-9eef-d48ab8a7bdab" containerName="extract-utilities"
Jan 26 11:31:07 crc kubenswrapper[4867]: I0126 11:31:07.799013 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="89dc1b40-b5fb-455b-9eef-d48ab8a7bdab" containerName="extract-utilities"
Jan 26 11:31:07 crc kubenswrapper[4867]: E0126 11:31:07.799077 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="89dc1b40-b5fb-455b-9eef-d48ab8a7bdab" containerName="registry-server"
Jan 26 11:31:07 crc kubenswrapper[4867]: I0126 11:31:07.799335 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="89dc1b40-b5fb-455b-9eef-d48ab8a7bdab" containerName="registry-server"
Jan 26 11:31:07 crc kubenswrapper[4867]: E0126 11:31:07.799453 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="89dc1b40-b5fb-455b-9eef-d48ab8a7bdab" containerName="extract-content"
Jan 26 11:31:07 crc kubenswrapper[4867]: I0126 11:31:07.799521 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="89dc1b40-b5fb-455b-9eef-d48ab8a7bdab" containerName="extract-content"
Jan 26 11:31:07 crc kubenswrapper[4867]: I0126 11:31:07.799727 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="89dc1b40-b5fb-455b-9eef-d48ab8a7bdab" containerName="registry-server"
Jan 26 11:31:07 crc kubenswrapper[4867]: I0126 11:31:07.802289 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-fdvhb"
Jan 26 11:31:07 crc kubenswrapper[4867]: I0126 11:31:07.806072 4867 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-daemon-dockercfg-n56rp"
Jan 26 11:31:07 crc kubenswrapper[4867]: I0126 11:31:07.806368 4867 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-certs-secret"
Jan 26 11:31:07 crc kubenswrapper[4867]: I0126 11:31:07.806490 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"frr-startup"
Jan 26 11:31:07 crc kubenswrapper[4867]: I0126 11:31:07.834415 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-webhook-server-7df86c4f6c-mtgxn"]
Jan 26 11:31:07 crc kubenswrapper[4867]: I0126 11:31:07.835405 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-mtgxn"
Jan 26 11:31:07 crc kubenswrapper[4867]: I0126 11:31:07.840559 4867 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-webhook-server-cert"
Jan 26 11:31:07 crc kubenswrapper[4867]: I0126 11:31:07.848893 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-7df86c4f6c-mtgxn"]
Jan 26 11:31:07 crc kubenswrapper[4867]: I0126 11:31:07.883795 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/speaker-xzzx4"]
Jan 26 11:31:07 crc kubenswrapper[4867]: I0126 11:31:07.885092 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/speaker-xzzx4"
Jan 26 11:31:07 crc kubenswrapper[4867]: I0126 11:31:07.889376 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"metallb-excludel2"
Jan 26 11:31:07 crc kubenswrapper[4867]: I0126 11:31:07.889791 4867 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-memberlist"
Jan 26 11:31:07 crc kubenswrapper[4867]: I0126 11:31:07.889961 4867 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-dockercfg-xw5pc"
Jan 26 11:31:07 crc kubenswrapper[4867]: I0126 11:31:07.890121 4867 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-certs-secret"
Jan 26 11:31:07 crc kubenswrapper[4867]: I0126 11:31:07.902141 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/controller-6968d8fdc4-496nf"]
Jan 26 11:31:07 crc kubenswrapper[4867]: I0126 11:31:07.903535 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/controller-6968d8fdc4-496nf"
Jan 26 11:31:07 crc kubenswrapper[4867]: I0126 11:31:07.903608 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/a5badbe6-91c6-424f-b422-df4fe4761e26-frr-conf\") pod \"frr-k8s-fdvhb\" (UID: \"a5badbe6-91c6-424f-b422-df4fe4761e26\") " pod="metallb-system/frr-k8s-fdvhb"
Jan 26 11:31:07 crc kubenswrapper[4867]: I0126 11:31:07.903687 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cqj9q\" (UniqueName: \"kubernetes.io/projected/a5badbe6-91c6-424f-b422-df4fe4761e26-kube-api-access-cqj9q\") pod \"frr-k8s-fdvhb\" (UID: \"a5badbe6-91c6-424f-b422-df4fe4761e26\") " pod="metallb-system/frr-k8s-fdvhb"
Jan 26 11:31:07 crc kubenswrapper[4867]: I0126 11:31:07.903725 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/a5badbe6-91c6-424f-b422-df4fe4761e26-reloader\") pod \"frr-k8s-fdvhb\" (UID: \"a5badbe6-91c6-424f-b422-df4fe4761e26\") " pod="metallb-system/frr-k8s-fdvhb"
Jan 26 11:31:07 crc kubenswrapper[4867]: I0126 11:31:07.903754 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/a5badbe6-91c6-424f-b422-df4fe4761e26-frr-startup\") pod \"frr-k8s-fdvhb\" (UID: \"a5badbe6-91c6-424f-b422-df4fe4761e26\") " pod="metallb-system/frr-k8s-fdvhb"
Jan 26 11:31:07 crc kubenswrapper[4867]: I0126 11:31:07.903773 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/a5badbe6-91c6-424f-b422-df4fe4761e26-metrics\") pod \"frr-k8s-fdvhb\" (UID: \"a5badbe6-91c6-424f-b422-df4fe4761e26\") " pod="metallb-system/frr-k8s-fdvhb"
Jan 26 11:31:07 crc kubenswrapper[4867]: I0126 11:31:07.903791 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/a5badbe6-91c6-424f-b422-df4fe4761e26-frr-sockets\") pod \"frr-k8s-fdvhb\" (UID: \"a5badbe6-91c6-424f-b422-df4fe4761e26\") " pod="metallb-system/frr-k8s-fdvhb"
Jan 26 11:31:07 crc kubenswrapper[4867]: I0126 11:31:07.903807 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a5badbe6-91c6-424f-b422-df4fe4761e26-metrics-certs\") pod \"frr-k8s-fdvhb\" (UID: \"a5badbe6-91c6-424f-b422-df4fe4761e26\") " pod="metallb-system/frr-k8s-fdvhb"
Jan 26 11:31:07 crc kubenswrapper[4867]: I0126 11:31:07.903834 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pfbp2\" (UniqueName: \"kubernetes.io/projected/7d39a9a1-98f9-4404-a415-867570383af9-kube-api-access-pfbp2\") pod \"frr-k8s-webhook-server-7df86c4f6c-mtgxn\" (UID: \"7d39a9a1-98f9-4404-a415-867570383af9\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-mtgxn"
Jan 26 11:31:07 crc kubenswrapper[4867]: I0126 11:31:07.903864 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/7d39a9a1-98f9-4404-a415-867570383af9-cert\") pod \"frr-k8s-webhook-server-7df86c4f6c-mtgxn\" (UID: \"7d39a9a1-98f9-4404-a415-867570383af9\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-mtgxn"
Jan 26 11:31:07 crc kubenswrapper[4867]: I0126 11:31:07.908793 4867 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-certs-secret"
Jan 26 11:31:07 crc kubenswrapper[4867]: I0126 11:31:07.927406 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-6968d8fdc4-496nf"]
Jan 26 11:31:08 crc kubenswrapper[4867]: I0126 11:31:08.005671 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/7d39a9a1-98f9-4404-a415-867570383af9-cert\") pod \"frr-k8s-webhook-server-7df86c4f6c-mtgxn\" (UID: \"7d39a9a1-98f9-4404-a415-867570383af9\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-mtgxn"
Jan 26 11:31:08 crc kubenswrapper[4867]: I0126 11:31:08.005733 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2mgzl\" (UniqueName: \"kubernetes.io/projected/29fc757d-2542-48c5-bea3-05ff023baa05-kube-api-access-2mgzl\") pod \"speaker-xzzx4\" (UID: \"29fc757d-2542-48c5-bea3-05ff023baa05\") " pod="metallb-system/speaker-xzzx4"
Jan 26 11:31:08 crc kubenswrapper[4867]: I0126 11:31:08.005758 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/a5badbe6-91c6-424f-b422-df4fe4761e26-frr-conf\") pod \"frr-k8s-fdvhb\" (UID: \"a5badbe6-91c6-424f-b422-df4fe4761e26\") " pod="metallb-system/frr-k8s-fdvhb"
Jan 26 11:31:08 crc kubenswrapper[4867]: I0126 11:31:08.005785 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cxclp\" (UniqueName: \"kubernetes.io/projected/6e82409c-e6fc-4a6b-964f-95fee3ed959d-kube-api-access-cxclp\") pod \"controller-6968d8fdc4-496nf\" (UID: \"6e82409c-e6fc-4a6b-964f-95fee3ed959d\") " pod="metallb-system/controller-6968d8fdc4-496nf"
Jan 26 11:31:08 crc kubenswrapper[4867]: I0126 11:31:08.005828 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/29fc757d-2542-48c5-bea3-05ff023baa05-memberlist\") pod \"speaker-xzzx4\" (UID: \"29fc757d-2542-48c5-bea3-05ff023baa05\") " pod="metallb-system/speaker-xzzx4"
Jan 26 11:31:08 crc kubenswrapper[4867]: I0126 11:31:08.005857 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqj9q\" (UniqueName: \"kubernetes.io/projected/a5badbe6-91c6-424f-b422-df4fe4761e26-kube-api-access-cqj9q\") pod \"frr-k8s-fdvhb\" (UID: \"a5badbe6-91c6-424f-b422-df4fe4761e26\") " pod="metallb-system/frr-k8s-fdvhb"
Jan 26 11:31:08 crc kubenswrapper[4867]: I0126 11:31:08.005874 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/29fc757d-2542-48c5-bea3-05ff023baa05-metallb-excludel2\") pod \"speaker-xzzx4\" (UID: \"29fc757d-2542-48c5-bea3-05ff023baa05\") " pod="metallb-system/speaker-xzzx4"
Jan 26 11:31:08 crc kubenswrapper[4867]: I0126 11:31:08.005899 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/29fc757d-2542-48c5-bea3-05ff023baa05-metrics-certs\") pod \"speaker-xzzx4\" (UID: \"29fc757d-2542-48c5-bea3-05ff023baa05\") " pod="metallb-system/speaker-xzzx4"
Jan 26 11:31:08 crc kubenswrapper[4867]: I0126 11:31:08.005918 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/a5badbe6-91c6-424f-b422-df4fe4761e26-reloader\") pod \"frr-k8s-fdvhb\" (UID: \"a5badbe6-91c6-424f-b422-df4fe4761e26\") " pod="metallb-system/frr-k8s-fdvhb"
Jan 26 11:31:08 crc kubenswrapper[4867]: I0126 11:31:08.005938 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/6e82409c-e6fc-4a6b-964f-95fee3ed959d-cert\") pod \"controller-6968d8fdc4-496nf\" (UID: \"6e82409c-e6fc-4a6b-964f-95fee3ed959d\") " pod="metallb-system/controller-6968d8fdc4-496nf"
Jan 26 11:31:08 crc kubenswrapper[4867]: I0126 11:31:08.005954 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/a5badbe6-91c6-424f-b422-df4fe4761e26-frr-startup\") pod \"frr-k8s-fdvhb\" (UID: \"a5badbe6-91c6-424f-b422-df4fe4761e26\") " pod="metallb-system/frr-k8s-fdvhb"
Jan 26 11:31:08 crc kubenswrapper[4867]: I0126 11:31:08.005973 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/a5badbe6-91c6-424f-b422-df4fe4761e26-metrics\") pod \"frr-k8s-fdvhb\" (UID: \"a5badbe6-91c6-424f-b422-df4fe4761e26\") " pod="metallb-system/frr-k8s-fdvhb"
Jan 26 11:31:08 crc kubenswrapper[4867]: I0126 11:31:08.005991 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/a5badbe6-91c6-424f-b422-df4fe4761e26-frr-sockets\") pod \"frr-k8s-fdvhb\" (UID: \"a5badbe6-91c6-424f-b422-df4fe4761e26\") " pod="metallb-system/frr-k8s-fdvhb"
Jan 26 11:31:08 crc kubenswrapper[4867]: I0126 11:31:08.006012 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a5badbe6-91c6-424f-b422-df4fe4761e26-metrics-certs\") pod \"frr-k8s-fdvhb\" (UID: \"a5badbe6-91c6-424f-b422-df4fe4761e26\") " pod="metallb-system/frr-k8s-fdvhb"
Jan 26 11:31:08 crc kubenswrapper[4867]: I0126 11:31:08.006030 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/6e82409c-e6fc-4a6b-964f-95fee3ed959d-metrics-certs\") pod \"controller-6968d8fdc4-496nf\" (UID: \"6e82409c-e6fc-4a6b-964f-95fee3ed959d\") " pod="metallb-system/controller-6968d8fdc4-496nf"
Jan 26 11:31:08 crc kubenswrapper[4867]: I0126 11:31:08.006057 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pfbp2\" (UniqueName: \"kubernetes.io/projected/7d39a9a1-98f9-4404-a415-867570383af9-kube-api-access-pfbp2\") pod \"frr-k8s-webhook-server-7df86c4f6c-mtgxn\" (UID: \"7d39a9a1-98f9-4404-a415-867570383af9\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-mtgxn"
Jan 26 11:31:08 crc kubenswrapper[4867]: I0126 11:31:08.006368 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/a5badbe6-91c6-424f-b422-df4fe4761e26-frr-conf\") pod \"frr-k8s-fdvhb\" (UID: \"a5badbe6-91c6-424f-b422-df4fe4761e26\") " pod="metallb-system/frr-k8s-fdvhb"
Jan 26 11:31:08 crc kubenswrapper[4867]: E0126 11:31:08.006544 4867 secret.go:188] Couldn't get secret metallb-system/frr-k8s-certs-secret: secret "frr-k8s-certs-secret" not found
Jan 26 11:31:08 crc kubenswrapper[4867]: E0126 11:31:08.006619 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a5badbe6-91c6-424f-b422-df4fe4761e26-metrics-certs podName:a5badbe6-91c6-424f-b422-df4fe4761e26 nodeName:}" failed. No retries permitted until 2026-01-26 11:31:08.506588375 +0000 UTC m=+818.205163285 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/a5badbe6-91c6-424f-b422-df4fe4761e26-metrics-certs") pod "frr-k8s-fdvhb" (UID: "a5badbe6-91c6-424f-b422-df4fe4761e26") : secret "frr-k8s-certs-secret" not found
Jan 26 11:31:08 crc kubenswrapper[4867]: I0126 11:31:08.006729 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/a5badbe6-91c6-424f-b422-df4fe4761e26-metrics\") pod \"frr-k8s-fdvhb\" (UID: \"a5badbe6-91c6-424f-b422-df4fe4761e26\") " pod="metallb-system/frr-k8s-fdvhb"
Jan 26 11:31:08 crc kubenswrapper[4867]: I0126 11:31:08.006921 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/a5badbe6-91c6-424f-b422-df4fe4761e26-frr-sockets\") pod \"frr-k8s-fdvhb\" (UID: \"a5badbe6-91c6-424f-b422-df4fe4761e26\") " pod="metallb-system/frr-k8s-fdvhb"
Jan 26 11:31:08 crc kubenswrapper[4867]: I0126 11:31:08.007314 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/a5badbe6-91c6-424f-b422-df4fe4761e26-reloader\") pod \"frr-k8s-fdvhb\" (UID: \"a5badbe6-91c6-424f-b422-df4fe4761e26\") " pod="metallb-system/frr-k8s-fdvhb"
Jan 26 11:31:08 crc kubenswrapper[4867]: I0126 11:31:08.007370 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/a5badbe6-91c6-424f-b422-df4fe4761e26-frr-startup\") pod \"frr-k8s-fdvhb\" (UID: \"a5badbe6-91c6-424f-b422-df4fe4761e26\") " pod="metallb-system/frr-k8s-fdvhb"
Jan 26 11:31:08 crc kubenswrapper[4867]: I0126 11:31:08.016144 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/7d39a9a1-98f9-4404-a415-867570383af9-cert\") pod \"frr-k8s-webhook-server-7df86c4f6c-mtgxn\" (UID: \"7d39a9a1-98f9-4404-a415-867570383af9\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-mtgxn"
Jan 26 11:31:08 crc kubenswrapper[4867]: I0126 11:31:08.023826 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cqj9q\" (UniqueName: \"kubernetes.io/projected/a5badbe6-91c6-424f-b422-df4fe4761e26-kube-api-access-cqj9q\") pod \"frr-k8s-fdvhb\" (UID: \"a5badbe6-91c6-424f-b422-df4fe4761e26\") " pod="metallb-system/frr-k8s-fdvhb"
Jan 26 11:31:08 crc kubenswrapper[4867]: I0126 11:31:08.024028 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pfbp2\" (UniqueName: \"kubernetes.io/projected/7d39a9a1-98f9-4404-a415-867570383af9-kube-api-access-pfbp2\") pod \"frr-k8s-webhook-server-7df86c4f6c-mtgxn\" (UID: \"7d39a9a1-98f9-4404-a415-867570383af9\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-mtgxn"
Jan 26 11:31:08 crc kubenswrapper[4867]: I0126 11:31:08.106954 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/29fc757d-2542-48c5-bea3-05ff023baa05-metrics-certs\") pod \"speaker-xzzx4\" (UID: \"29fc757d-2542-48c5-bea3-05ff023baa05\") " pod="metallb-system/speaker-xzzx4"
Jan 26 11:31:08 crc kubenswrapper[4867]: I0126 11:31:08.107014 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/6e82409c-e6fc-4a6b-964f-95fee3ed959d-cert\") pod \"controller-6968d8fdc4-496nf\" (UID: \"6e82409c-e6fc-4a6b-964f-95fee3ed959d\") " pod="metallb-system/controller-6968d8fdc4-496nf"
Jan 26 11:31:08 crc kubenswrapper[4867]: I0126 11:31:08.107055 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/6e82409c-e6fc-4a6b-964f-95fee3ed959d-metrics-certs\") pod \"controller-6968d8fdc4-496nf\" (UID: \"6e82409c-e6fc-4a6b-964f-95fee3ed959d\") " pod="metallb-system/controller-6968d8fdc4-496nf"
Jan 26 11:31:08 crc kubenswrapper[4867]: I0126 11:31:08.107105 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2mgzl\" (UniqueName: \"kubernetes.io/projected/29fc757d-2542-48c5-bea3-05ff023baa05-kube-api-access-2mgzl\") pod \"speaker-xzzx4\" (UID: \"29fc757d-2542-48c5-bea3-05ff023baa05\") " pod="metallb-system/speaker-xzzx4"
Jan 26 11:31:08 crc kubenswrapper[4867]: I0126 11:31:08.107130 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cxclp\" (UniqueName: \"kubernetes.io/projected/6e82409c-e6fc-4a6b-964f-95fee3ed959d-kube-api-access-cxclp\") pod \"controller-6968d8fdc4-496nf\" (UID: \"6e82409c-e6fc-4a6b-964f-95fee3ed959d\") " pod="metallb-system/controller-6968d8fdc4-496nf"
Jan 26 11:31:08 crc kubenswrapper[4867]: I0126 11:31:08.107156 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/29fc757d-2542-48c5-bea3-05ff023baa05-memberlist\") pod \"speaker-xzzx4\" (UID: \"29fc757d-2542-48c5-bea3-05ff023baa05\") " pod="metallb-system/speaker-xzzx4"
Jan 26 11:31:08 crc kubenswrapper[4867]: I0126 11:31:08.107180 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/29fc757d-2542-48c5-bea3-05ff023baa05-metallb-excludel2\") pod \"speaker-xzzx4\" (UID: \"29fc757d-2542-48c5-bea3-05ff023baa05\") " pod="metallb-system/speaker-xzzx4"
Jan 26 11:31:08 crc kubenswrapper[4867]: E0126 11:31:08.107573 4867 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found
Jan 26 11:31:08 crc kubenswrapper[4867]: E0126 11:31:08.107671 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/29fc757d-2542-48c5-bea3-05ff023baa05-memberlist podName:29fc757d-2542-48c5-bea3-05ff023baa05 nodeName:}" failed. No retries permitted until 2026-01-26 11:31:08.607648471 +0000 UTC m=+818.306223381 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/29fc757d-2542-48c5-bea3-05ff023baa05-memberlist") pod "speaker-xzzx4" (UID: "29fc757d-2542-48c5-bea3-05ff023baa05") : secret "metallb-memberlist" not found
Jan 26 11:31:08 crc kubenswrapper[4867]: I0126 11:31:08.108118 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/29fc757d-2542-48c5-bea3-05ff023baa05-metallb-excludel2\") pod \"speaker-xzzx4\" (UID: \"29fc757d-2542-48c5-bea3-05ff023baa05\") " pod="metallb-system/speaker-xzzx4"
Jan 26 11:31:08 crc kubenswrapper[4867]: I0126 11:31:08.110248 4867 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert"
Jan 26 11:31:08 crc kubenswrapper[4867]: I0126 11:31:08.111357 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/29fc757d-2542-48c5-bea3-05ff023baa05-metrics-certs\") pod \"speaker-xzzx4\" (UID: \"29fc757d-2542-48c5-bea3-05ff023baa05\") " pod="metallb-system/speaker-xzzx4"
Jan 26 11:31:08 crc kubenswrapper[4867]: I0126 11:31:08.118991 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/6e82409c-e6fc-4a6b-964f-95fee3ed959d-metrics-certs\") pod \"controller-6968d8fdc4-496nf\" (UID: \"6e82409c-e6fc-4a6b-964f-95fee3ed959d\") " pod="metallb-system/controller-6968d8fdc4-496nf"
Jan 26 11:31:08 crc kubenswrapper[4867]: I0126 11:31:08.121438 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/6e82409c-e6fc-4a6b-964f-95fee3ed959d-cert\") pod \"controller-6968d8fdc4-496nf\" (UID: \"6e82409c-e6fc-4a6b-964f-95fee3ed959d\") " pod="metallb-system/controller-6968d8fdc4-496nf"
Jan 26 11:31:08 crc kubenswrapper[4867]: I0126 11:31:08.130523 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2mgzl\" (UniqueName: \"kubernetes.io/projected/29fc757d-2542-48c5-bea3-05ff023baa05-kube-api-access-2mgzl\") pod \"speaker-xzzx4\" (UID: \"29fc757d-2542-48c5-bea3-05ff023baa05\") " pod="metallb-system/speaker-xzzx4"
Jan 26 11:31:08 crc kubenswrapper[4867]: I0126 11:31:08.131463 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cxclp\" (UniqueName: \"kubernetes.io/projected/6e82409c-e6fc-4a6b-964f-95fee3ed959d-kube-api-access-cxclp\") pod \"controller-6968d8fdc4-496nf\" (UID: \"6e82409c-e6fc-4a6b-964f-95fee3ed959d\") " pod="metallb-system/controller-6968d8fdc4-496nf"
Jan 26 11:31:08 crc kubenswrapper[4867]: I0126 11:31:08.155748 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-mtgxn"
Jan 26 11:31:08 crc kubenswrapper[4867]: I0126 11:31:08.220447 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/controller-6968d8fdc4-496nf"
Jan 26 11:31:08 crc kubenswrapper[4867]: I0126 11:31:08.516835 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a5badbe6-91c6-424f-b422-df4fe4761e26-metrics-certs\") pod \"frr-k8s-fdvhb\" (UID: \"a5badbe6-91c6-424f-b422-df4fe4761e26\") " pod="metallb-system/frr-k8s-fdvhb"
Jan 26 11:31:08 crc kubenswrapper[4867]: I0126 11:31:08.527323 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a5badbe6-91c6-424f-b422-df4fe4761e26-metrics-certs\") pod \"frr-k8s-fdvhb\" (UID: \"a5badbe6-91c6-424f-b422-df4fe4761e26\") " pod="metallb-system/frr-k8s-fdvhb"
Jan 26 11:31:08 crc kubenswrapper[4867]: I0126 11:31:08.618598 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/29fc757d-2542-48c5-bea3-05ff023baa05-memberlist\") pod \"speaker-xzzx4\" (UID: \"29fc757d-2542-48c5-bea3-05ff023baa05\") " pod="metallb-system/speaker-xzzx4"
Jan 26 11:31:08 crc kubenswrapper[4867]: E0126 11:31:08.619798 4867 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found
Jan 26 11:31:08 crc kubenswrapper[4867]: E0126 11:31:08.619848 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/29fc757d-2542-48c5-bea3-05ff023baa05-memberlist podName:29fc757d-2542-48c5-bea3-05ff023baa05 nodeName:}" failed. No retries permitted until 2026-01-26 11:31:09.619832548 +0000 UTC m=+819.318407458 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/29fc757d-2542-48c5-bea3-05ff023baa05-memberlist") pod "speaker-xzzx4" (UID: "29fc757d-2542-48c5-bea3-05ff023baa05") : secret "metallb-memberlist" not found
Jan 26 11:31:08 crc kubenswrapper[4867]: I0126 11:31:08.640206 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-7df86c4f6c-mtgxn"]
Jan 26 11:31:08 crc kubenswrapper[4867]: I0126 11:31:08.713095 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-6968d8fdc4-496nf"]
Jan 26 11:31:08 crc kubenswrapper[4867]: W0126 11:31:08.714280 4867 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6e82409c_e6fc_4a6b_964f_95fee3ed959d.slice/crio-44bde32043a5ea46fa4d5373153902569b9eb1e202bdb329d59f2e0c08160576 WatchSource:0}: Error finding container 44bde32043a5ea46fa4d5373153902569b9eb1e202bdb329d59f2e0c08160576: Status 404 returned error can't find the container with id 44bde32043a5ea46fa4d5373153902569b9eb1e202bdb329d59f2e0c08160576
Jan 26 11:31:08 crc kubenswrapper[4867]: I0126 11:31:08.735301 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-fdvhb"
Jan 26 11:31:08 crc kubenswrapper[4867]: I0126 11:31:08.856124 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-mtgxn" event={"ID":"7d39a9a1-98f9-4404-a415-867570383af9","Type":"ContainerStarted","Data":"35cba0b4dcbb1530cbbb913b8c902a19d640db0095c23f8f23efcb2a6df22073"}
Jan 26 11:31:08 crc kubenswrapper[4867]: I0126 11:31:08.859637 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6968d8fdc4-496nf" event={"ID":"6e82409c-e6fc-4a6b-964f-95fee3ed959d","Type":"ContainerStarted","Data":"44bde32043a5ea46fa4d5373153902569b9eb1e202bdb329d59f2e0c08160576"}
Jan 26 11:31:09 crc kubenswrapper[4867]: I0126 11:31:09.632589 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/29fc757d-2542-48c5-bea3-05ff023baa05-memberlist\") pod \"speaker-xzzx4\" (UID: \"29fc757d-2542-48c5-bea3-05ff023baa05\") " pod="metallb-system/speaker-xzzx4"
Jan 26 11:31:09 crc kubenswrapper[4867]: I0126 11:31:09.639965 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/29fc757d-2542-48c5-bea3-05ff023baa05-memberlist\") pod \"speaker-xzzx4\" (UID: \"29fc757d-2542-48c5-bea3-05ff023baa05\") " pod="metallb-system/speaker-xzzx4"
Jan 26 11:31:09 crc kubenswrapper[4867]: I0126 11:31:09.703420 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/speaker-xzzx4"
Jan 26 11:31:09 crc kubenswrapper[4867]: W0126 11:31:09.752832 4867 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod29fc757d_2542_48c5_bea3_05ff023baa05.slice/crio-0a720d29fe64176b59bb1ddf333e8ad787b141fe1ee401bf4a59b9d6bafe17a2 WatchSource:0}: Error finding container 0a720d29fe64176b59bb1ddf333e8ad787b141fe1ee401bf4a59b9d6bafe17a2: Status 404 returned error can't find the container with id 0a720d29fe64176b59bb1ddf333e8ad787b141fe1ee401bf4a59b9d6bafe17a2
Jan 26 11:31:09 crc kubenswrapper[4867]: I0126 11:31:09.874657 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-xzzx4" event={"ID":"29fc757d-2542-48c5-bea3-05ff023baa05","Type":"ContainerStarted","Data":"0a720d29fe64176b59bb1ddf333e8ad787b141fe1ee401bf4a59b9d6bafe17a2"}
Jan 26 11:31:09 crc kubenswrapper[4867]: I0126 11:31:09.879466 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-fdvhb" event={"ID":"a5badbe6-91c6-424f-b422-df4fe4761e26","Type":"ContainerStarted","Data":"2c8868c5924a7ca41c652c6d25a73b52e7bd3b5b89ff9cea98ec3860db1b27c0"}
Jan 26 11:31:09 crc kubenswrapper[4867]: I0126 11:31:09.889743 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6968d8fdc4-496nf" event={"ID":"6e82409c-e6fc-4a6b-964f-95fee3ed959d","Type":"ContainerStarted","Data":"2e66c47ce77a9e2af657af466a7b0a06f6029026c4a7ef95fba2cb487a7fe846"}
Jan 26 11:31:09 crc kubenswrapper[4867]: I0126 11:31:09.889805 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6968d8fdc4-496nf" event={"ID":"6e82409c-e6fc-4a6b-964f-95fee3ed959d","Type":"ContainerStarted","Data":"bcf890645f5354481c66459d45abf961d274b2326fc09bc3fa289707b239d820"}
Jan 26 11:31:09 crc kubenswrapper[4867]: I0126 11:31:09.891149 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/controller-6968d8fdc4-496nf"
Jan 26 11:31:09 crc kubenswrapper[4867]: I0126 11:31:09.929846 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/controller-6968d8fdc4-496nf" podStartSLOduration=2.929819734 podStartE2EDuration="2.929819734s" podCreationTimestamp="2026-01-26 11:31:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 11:31:09.925122322 +0000 UTC m=+819.623697232" watchObservedRunningTime="2026-01-26 11:31:09.929819734 +0000 UTC m=+819.628394644"
Jan 26 11:31:10 crc kubenswrapper[4867]: I0126 11:31:10.907490 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-xzzx4" event={"ID":"29fc757d-2542-48c5-bea3-05ff023baa05","Type":"ContainerStarted","Data":"c27e687004b33986186e21ca464b437f2a69b89d136ad9549541cb935a9ac424"}
Jan 26 11:31:10 crc kubenswrapper[4867]: I0126 11:31:10.908049 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-xzzx4" event={"ID":"29fc757d-2542-48c5-bea3-05ff023baa05","Type":"ContainerStarted","Data":"9d4eaea23234dc9a526a58f08108ba56c7f115881bba8a44dac62813a62a5332"}
Jan 26 11:31:10 crc kubenswrapper[4867]: I0126 11:31:10.931670 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/speaker-xzzx4" podStartSLOduration=3.931639642 podStartE2EDuration="3.931639642s" podCreationTimestamp="2026-01-26 11:31:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 11:31:10.928648427 +0000 UTC m=+820.627223337" watchObservedRunningTime="2026-01-26 11:31:10.931639642 +0000 UTC m=+820.630214552"
Jan 26 11:31:11 crc kubenswrapper[4867]: I0126 11:31:11.918580 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/speaker-xzzx4"
Jan 26 11:31:16 crc kubenswrapper[4867]: I0126 11:31:16.960680 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-mtgxn" event={"ID":"7d39a9a1-98f9-4404-a415-867570383af9","Type":"ContainerStarted","Data":"231650d6b9f5e311d821eeae850bf415e11e7856ac3974e5f13dba8bb1822542"}
Jan 26 11:31:16 crc kubenswrapper[4867]: I0126 11:31:16.961822 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-mtgxn"
Jan 26 11:31:16 crc kubenswrapper[4867]: I0126 11:31:16.963255 4867 generic.go:334] "Generic (PLEG): container finished" podID="a5badbe6-91c6-424f-b422-df4fe4761e26" containerID="a37d881bb96bdde319610133ca4db29f86d0edff89bb57b568a52c80d21e21a1" exitCode=0
Jan 26 11:31:16 crc kubenswrapper[4867]: I0126 11:31:16.963328 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-fdvhb" event={"ID":"a5badbe6-91c6-424f-b422-df4fe4761e26","Type":"ContainerDied","Data":"a37d881bb96bdde319610133ca4db29f86d0edff89bb57b568a52c80d21e21a1"}
Jan 26 11:31:16 crc kubenswrapper[4867]: I0126 11:31:16.987056 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-mtgxn" podStartSLOduration=3.111339547 podStartE2EDuration="9.987026447s" podCreationTimestamp="2026-01-26 11:31:07 +0000 UTC" firstStartedPulling="2026-01-26 11:31:08.65080125 +0000 UTC m=+818.349376160" lastFinishedPulling="2026-01-26 11:31:15.52648815 +0000 UTC m=+825.225063060" observedRunningTime="2026-01-26 11:31:16.985096153 +0000 UTC m=+826.683671123" watchObservedRunningTime="2026-01-26 11:31:16.987026447 +0000 UTC m=+826.685601397"
Jan 26 11:31:17 crc kubenswrapper[4867]: I0126 11:31:17.972111 4867 generic.go:334] "Generic (PLEG): container finished" podID="a5badbe6-91c6-424f-b422-df4fe4761e26" containerID="3dd2bc1a8fd8be1c272b15f79ebc34038df066eb8567ae6fe947dfcbbd32b725" exitCode=0
Jan 26 11:31:17 crc kubenswrapper[4867]: I0126 11:31:17.973314 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-fdvhb" event={"ID":"a5badbe6-91c6-424f-b422-df4fe4761e26","Type":"ContainerDied","Data":"3dd2bc1a8fd8be1c272b15f79ebc34038df066eb8567ae6fe947dfcbbd32b725"}
Jan 26 11:31:18 crc kubenswrapper[4867]: I0126 11:31:18.225113 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/controller-6968d8fdc4-496nf"
Jan 26 11:31:18 crc kubenswrapper[4867]: I0126 11:31:18.981839 4867 generic.go:334] "Generic (PLEG): container finished" podID="a5badbe6-91c6-424f-b422-df4fe4761e26" containerID="b3560942617a5484b41171a8012288b4097b542bbd9a248e2cbed63f4e294df7" exitCode=0
Jan 26 11:31:18 crc kubenswrapper[4867]: I0126 11:31:18.981898 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-fdvhb" event={"ID":"a5badbe6-91c6-424f-b422-df4fe4761e26","Type":"ContainerDied","Data":"b3560942617a5484b41171a8012288b4097b542bbd9a248e2cbed63f4e294df7"}
Jan 26 11:31:19 crc kubenswrapper[4867]: I0126 11:31:19.994144 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-fdvhb" event={"ID":"a5badbe6-91c6-424f-b422-df4fe4761e26","Type":"ContainerStarted","Data":"f4ab5e5f89a6c1bf4ccd33c1a2a2483deb62af3abe91911a22cd67081293698e"}
Jan 26 11:31:19 crc kubenswrapper[4867]: I0126 11:31:19.994806 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-fdvhb" event={"ID":"a5badbe6-91c6-424f-b422-df4fe4761e26","Type":"ContainerStarted","Data":"598f2ba2761ced21a23002c2724182884b554e3e0d80122110dcc2b507db3082"}
Jan 26 11:31:19 crc kubenswrapper[4867]: I0126 11:31:19.994823 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-fdvhb" event={"ID":"a5badbe6-91c6-424f-b422-df4fe4761e26","Type":"ContainerStarted","Data":"6673dc20a81287be4cb69be0f5c4db22cad7ddb11042f2be7a49275267083155"}
Jan 26 11:31:19 crc kubenswrapper[4867]: I0126 11:31:19.994834 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-fdvhb" event={"ID":"a5badbe6-91c6-424f-b422-df4fe4761e26","Type":"ContainerStarted","Data":"7546223aa9e0ecf9ef4ceeea8b6b9ddebf051e2e6e51af5db968fcb8f00ed321"}
Jan 26 11:31:19 crc kubenswrapper[4867]: I0126 11:31:19.994846 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-fdvhb" event={"ID":"a5badbe6-91c6-424f-b422-df4fe4761e26","Type":"ContainerStarted","Data":"fe358107e4309cbf6a22bede31832f029511a192a308dd53ef77e34117b463d0"}
Jan 26 11:31:21 crc kubenswrapper[4867]: I0126 11:31:21.020566 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-fdvhb" event={"ID":"a5badbe6-91c6-424f-b422-df4fe4761e26","Type":"ContainerStarted","Data":"b9ae752d8eb19ec3365d62026470f7f40cb5207bf1793ddea408d8ec55a36a00"}
Jan 26 11:31:21 crc kubenswrapper[4867]: I0126 11:31:21.021063 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-fdvhb"
Jan 26 11:31:21 crc kubenswrapper[4867]: I0126 11:31:21.052822 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-fdvhb" podStartSLOduration=7.357797623 podStartE2EDuration="14.052800035s" podCreationTimestamp="2026-01-26 11:31:07 +0000 UTC" firstStartedPulling="2026-01-26 11:31:08.853448127 +0000 UTC m=+818.552023037" lastFinishedPulling="2026-01-26 11:31:15.548450539 +0000 UTC m=+825.247025449" observedRunningTime="2026-01-26 11:31:21.047650819 +0000 UTC m=+830.746225749" watchObservedRunningTime="2026-01-26 11:31:21.052800035 +0000 UTC m=+830.751374965"
Jan 26 11:31:23 crc kubenswrapper[4867]: I0126 11:31:23.736557 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="metallb-system/frr-k8s-fdvhb"
Jan 26 11:31:23 crc kubenswrapper[4867]: I0126 11:31:23.776145 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="metallb-system/frr-k8s-fdvhb"
Jan 26 11:31:28 crc kubenswrapper[4867]: I0126
11:31:28.161601 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-mtgxn" Jan 26 11:31:29 crc kubenswrapper[4867]: I0126 11:31:29.708790 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/speaker-xzzx4" Jan 26 11:31:32 crc kubenswrapper[4867]: I0126 11:31:32.798242 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-h74mj"] Jan 26 11:31:32 crc kubenswrapper[4867]: I0126 11:31:32.799870 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-h74mj" Jan 26 11:31:32 crc kubenswrapper[4867]: I0126 11:31:32.802971 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"openshift-service-ca.crt" Jan 26 11:31:32 crc kubenswrapper[4867]: I0126 11:31:32.803160 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-index-dockercfg-fkn9f" Jan 26 11:31:32 crc kubenswrapper[4867]: I0126 11:31:32.803380 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"kube-root-ca.crt" Jan 26 11:31:32 crc kubenswrapper[4867]: I0126 11:31:32.844649 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-h74mj"] Jan 26 11:31:32 crc kubenswrapper[4867]: I0126 11:31:32.910504 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2c5jd\" (UniqueName: \"kubernetes.io/projected/43314fb7-ba1a-43fe-a158-02c3bb256846-kube-api-access-2c5jd\") pod \"openstack-operator-index-h74mj\" (UID: \"43314fb7-ba1a-43fe-a158-02c3bb256846\") " pod="openstack-operators/openstack-operator-index-h74mj" Jan 26 11:31:33 crc kubenswrapper[4867]: I0126 11:31:33.012159 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-2c5jd\" (UniqueName: \"kubernetes.io/projected/43314fb7-ba1a-43fe-a158-02c3bb256846-kube-api-access-2c5jd\") pod \"openstack-operator-index-h74mj\" (UID: \"43314fb7-ba1a-43fe-a158-02c3bb256846\") " pod="openstack-operators/openstack-operator-index-h74mj" Jan 26 11:31:33 crc kubenswrapper[4867]: I0126 11:31:33.032389 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2c5jd\" (UniqueName: \"kubernetes.io/projected/43314fb7-ba1a-43fe-a158-02c3bb256846-kube-api-access-2c5jd\") pod \"openstack-operator-index-h74mj\" (UID: \"43314fb7-ba1a-43fe-a158-02c3bb256846\") " pod="openstack-operators/openstack-operator-index-h74mj" Jan 26 11:31:33 crc kubenswrapper[4867]: I0126 11:31:33.120476 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-h74mj" Jan 26 11:31:33 crc kubenswrapper[4867]: I0126 11:31:33.561449 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-h74mj"] Jan 26 11:31:33 crc kubenswrapper[4867]: W0126 11:31:33.572650 4867 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod43314fb7_ba1a_43fe_a158_02c3bb256846.slice/crio-bb66cc4bcdcdf98f9c15126d2ba609b997615b5897a76550a26d7b20bf0d3c0b WatchSource:0}: Error finding container bb66cc4bcdcdf98f9c15126d2ba609b997615b5897a76550a26d7b20bf0d3c0b: Status 404 returned error can't find the container with id bb66cc4bcdcdf98f9c15126d2ba609b997615b5897a76550a26d7b20bf0d3c0b Jan 26 11:31:34 crc kubenswrapper[4867]: I0126 11:31:34.115549 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-h74mj" event={"ID":"43314fb7-ba1a-43fe-a158-02c3bb256846","Type":"ContainerStarted","Data":"bb66cc4bcdcdf98f9c15126d2ba609b997615b5897a76550a26d7b20bf0d3c0b"} Jan 26 11:31:37 crc kubenswrapper[4867]: I0126 11:31:37.353067 4867 
kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-h74mj"] Jan 26 11:31:37 crc kubenswrapper[4867]: I0126 11:31:37.959546 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-8swqj"] Jan 26 11:31:37 crc kubenswrapper[4867]: I0126 11:31:37.960456 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-8swqj" Jan 26 11:31:37 crc kubenswrapper[4867]: I0126 11:31:37.974302 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-8swqj"] Jan 26 11:31:38 crc kubenswrapper[4867]: I0126 11:31:38.105674 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-52psm\" (UniqueName: \"kubernetes.io/projected/c26c3c2d-f71f-4cef-ab83-6f69da85606a-kube-api-access-52psm\") pod \"openstack-operator-index-8swqj\" (UID: \"c26c3c2d-f71f-4cef-ab83-6f69da85606a\") " pod="openstack-operators/openstack-operator-index-8swqj" Jan 26 11:31:38 crc kubenswrapper[4867]: I0126 11:31:38.164698 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-h74mj" event={"ID":"43314fb7-ba1a-43fe-a158-02c3bb256846","Type":"ContainerStarted","Data":"abdce7f4e41af155696ee1fae63aa895ce4e6e68494aefcf4c1cf33429357d3b"} Jan 26 11:31:38 crc kubenswrapper[4867]: I0126 11:31:38.164818 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack-operators/openstack-operator-index-h74mj" podUID="43314fb7-ba1a-43fe-a158-02c3bb256846" containerName="registry-server" containerID="cri-o://abdce7f4e41af155696ee1fae63aa895ce4e6e68494aefcf4c1cf33429357d3b" gracePeriod=2 Jan 26 11:31:38 crc kubenswrapper[4867]: I0126 11:31:38.180938 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-h74mj" 
podStartSLOduration=2.066752936 podStartE2EDuration="6.180905196s" podCreationTimestamp="2026-01-26 11:31:32 +0000 UTC" firstStartedPulling="2026-01-26 11:31:33.57551221 +0000 UTC m=+843.274087130" lastFinishedPulling="2026-01-26 11:31:37.68966447 +0000 UTC m=+847.388239390" observedRunningTime="2026-01-26 11:31:38.179184837 +0000 UTC m=+847.877759747" watchObservedRunningTime="2026-01-26 11:31:38.180905196 +0000 UTC m=+847.879480116" Jan 26 11:31:38 crc kubenswrapper[4867]: I0126 11:31:38.207723 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-52psm\" (UniqueName: \"kubernetes.io/projected/c26c3c2d-f71f-4cef-ab83-6f69da85606a-kube-api-access-52psm\") pod \"openstack-operator-index-8swqj\" (UID: \"c26c3c2d-f71f-4cef-ab83-6f69da85606a\") " pod="openstack-operators/openstack-operator-index-8swqj" Jan 26 11:31:38 crc kubenswrapper[4867]: I0126 11:31:38.228860 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-52psm\" (UniqueName: \"kubernetes.io/projected/c26c3c2d-f71f-4cef-ab83-6f69da85606a-kube-api-access-52psm\") pod \"openstack-operator-index-8swqj\" (UID: \"c26c3c2d-f71f-4cef-ab83-6f69da85606a\") " pod="openstack-operators/openstack-operator-index-8swqj" Jan 26 11:31:38 crc kubenswrapper[4867]: I0126 11:31:38.278343 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-8swqj" Jan 26 11:31:38 crc kubenswrapper[4867]: I0126 11:31:38.496066 4867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-h74mj" Jan 26 11:31:38 crc kubenswrapper[4867]: I0126 11:31:38.613351 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2c5jd\" (UniqueName: \"kubernetes.io/projected/43314fb7-ba1a-43fe-a158-02c3bb256846-kube-api-access-2c5jd\") pod \"43314fb7-ba1a-43fe-a158-02c3bb256846\" (UID: \"43314fb7-ba1a-43fe-a158-02c3bb256846\") " Jan 26 11:31:38 crc kubenswrapper[4867]: I0126 11:31:38.618038 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43314fb7-ba1a-43fe-a158-02c3bb256846-kube-api-access-2c5jd" (OuterVolumeSpecName: "kube-api-access-2c5jd") pod "43314fb7-ba1a-43fe-a158-02c3bb256846" (UID: "43314fb7-ba1a-43fe-a158-02c3bb256846"). InnerVolumeSpecName "kube-api-access-2c5jd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 11:31:38 crc kubenswrapper[4867]: I0126 11:31:38.716139 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2c5jd\" (UniqueName: \"kubernetes.io/projected/43314fb7-ba1a-43fe-a158-02c3bb256846-kube-api-access-2c5jd\") on node \"crc\" DevicePath \"\"" Jan 26 11:31:38 crc kubenswrapper[4867]: I0126 11:31:38.720827 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-8swqj"] Jan 26 11:31:38 crc kubenswrapper[4867]: W0126 11:31:38.728754 4867 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc26c3c2d_f71f_4cef_ab83_6f69da85606a.slice/crio-4dc2f091258826f593b2e732d95c39e2b5987f884a5b82dd2fc434463b0e3fde WatchSource:0}: Error finding container 4dc2f091258826f593b2e732d95c39e2b5987f884a5b82dd2fc434463b0e3fde: Status 404 returned error can't find the container with id 4dc2f091258826f593b2e732d95c39e2b5987f884a5b82dd2fc434463b0e3fde Jan 26 11:31:38 crc kubenswrapper[4867]: I0126 11:31:38.738780 4867 kubelet.go:2542] 
"SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-fdvhb" Jan 26 11:31:39 crc kubenswrapper[4867]: I0126 11:31:39.173450 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-8swqj" event={"ID":"c26c3c2d-f71f-4cef-ab83-6f69da85606a","Type":"ContainerStarted","Data":"b79dbda0de882f68908702e2df182db8b4715818ba67b73a06edf72f22c6d91c"} Jan 26 11:31:39 crc kubenswrapper[4867]: I0126 11:31:39.174489 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-8swqj" event={"ID":"c26c3c2d-f71f-4cef-ab83-6f69da85606a","Type":"ContainerStarted","Data":"4dc2f091258826f593b2e732d95c39e2b5987f884a5b82dd2fc434463b0e3fde"} Jan 26 11:31:39 crc kubenswrapper[4867]: I0126 11:31:39.175714 4867 generic.go:334] "Generic (PLEG): container finished" podID="43314fb7-ba1a-43fe-a158-02c3bb256846" containerID="abdce7f4e41af155696ee1fae63aa895ce4e6e68494aefcf4c1cf33429357d3b" exitCode=0 Jan 26 11:31:39 crc kubenswrapper[4867]: I0126 11:31:39.175764 4867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-h74mj" Jan 26 11:31:39 crc kubenswrapper[4867]: I0126 11:31:39.175767 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-h74mj" event={"ID":"43314fb7-ba1a-43fe-a158-02c3bb256846","Type":"ContainerDied","Data":"abdce7f4e41af155696ee1fae63aa895ce4e6e68494aefcf4c1cf33429357d3b"} Jan 26 11:31:39 crc kubenswrapper[4867]: I0126 11:31:39.175849 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-h74mj" event={"ID":"43314fb7-ba1a-43fe-a158-02c3bb256846","Type":"ContainerDied","Data":"bb66cc4bcdcdf98f9c15126d2ba609b997615b5897a76550a26d7b20bf0d3c0b"} Jan 26 11:31:39 crc kubenswrapper[4867]: I0126 11:31:39.175893 4867 scope.go:117] "RemoveContainer" containerID="abdce7f4e41af155696ee1fae63aa895ce4e6e68494aefcf4c1cf33429357d3b" Jan 26 11:31:39 crc kubenswrapper[4867]: I0126 11:31:39.190879 4867 scope.go:117] "RemoveContainer" containerID="abdce7f4e41af155696ee1fae63aa895ce4e6e68494aefcf4c1cf33429357d3b" Jan 26 11:31:39 crc kubenswrapper[4867]: I0126 11:31:39.191292 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-8swqj" podStartSLOduration=2.136916103 podStartE2EDuration="2.191271774s" podCreationTimestamp="2026-01-26 11:31:37 +0000 UTC" firstStartedPulling="2026-01-26 11:31:38.736440843 +0000 UTC m=+848.435015753" lastFinishedPulling="2026-01-26 11:31:38.790796504 +0000 UTC m=+848.489371424" observedRunningTime="2026-01-26 11:31:39.190487061 +0000 UTC m=+848.889061981" watchObservedRunningTime="2026-01-26 11:31:39.191271774 +0000 UTC m=+848.889846684" Jan 26 11:31:39 crc kubenswrapper[4867]: E0126 11:31:39.196078 4867 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"abdce7f4e41af155696ee1fae63aa895ce4e6e68494aefcf4c1cf33429357d3b\": container with ID 
starting with abdce7f4e41af155696ee1fae63aa895ce4e6e68494aefcf4c1cf33429357d3b not found: ID does not exist" containerID="abdce7f4e41af155696ee1fae63aa895ce4e6e68494aefcf4c1cf33429357d3b" Jan 26 11:31:39 crc kubenswrapper[4867]: I0126 11:31:39.196136 4867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"abdce7f4e41af155696ee1fae63aa895ce4e6e68494aefcf4c1cf33429357d3b"} err="failed to get container status \"abdce7f4e41af155696ee1fae63aa895ce4e6e68494aefcf4c1cf33429357d3b\": rpc error: code = NotFound desc = could not find container \"abdce7f4e41af155696ee1fae63aa895ce4e6e68494aefcf4c1cf33429357d3b\": container with ID starting with abdce7f4e41af155696ee1fae63aa895ce4e6e68494aefcf4c1cf33429357d3b not found: ID does not exist" Jan 26 11:31:39 crc kubenswrapper[4867]: I0126 11:31:39.214013 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-h74mj"] Jan 26 11:31:39 crc kubenswrapper[4867]: I0126 11:31:39.217821 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/openstack-operator-index-h74mj"] Jan 26 11:31:40 crc kubenswrapper[4867]: I0126 11:31:40.586352 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="43314fb7-ba1a-43fe-a158-02c3bb256846" path="/var/lib/kubelet/pods/43314fb7-ba1a-43fe-a158-02c3bb256846/volumes" Jan 26 11:31:48 crc kubenswrapper[4867]: I0126 11:31:48.278664 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-index-8swqj" Jan 26 11:31:48 crc kubenswrapper[4867]: I0126 11:31:48.279824 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack-operators/openstack-operator-index-8swqj" Jan 26 11:31:48 crc kubenswrapper[4867]: I0126 11:31:48.309947 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack-operators/openstack-operator-index-8swqj" Jan 26 11:31:49 crc 
kubenswrapper[4867]: I0126 11:31:49.268511 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-index-8swqj" Jan 26 11:31:54 crc kubenswrapper[4867]: I0126 11:31:54.612730 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/72df59a55f851b4970af82627fba49d1b5f2202043ac4017c1bc725d0cktqsd"] Jan 26 11:31:54 crc kubenswrapper[4867]: E0126 11:31:54.613852 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="43314fb7-ba1a-43fe-a158-02c3bb256846" containerName="registry-server" Jan 26 11:31:54 crc kubenswrapper[4867]: I0126 11:31:54.613868 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="43314fb7-ba1a-43fe-a158-02c3bb256846" containerName="registry-server" Jan 26 11:31:54 crc kubenswrapper[4867]: I0126 11:31:54.613989 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="43314fb7-ba1a-43fe-a158-02c3bb256846" containerName="registry-server" Jan 26 11:31:54 crc kubenswrapper[4867]: I0126 11:31:54.614933 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/72df59a55f851b4970af82627fba49d1b5f2202043ac4017c1bc725d0cktqsd" Jan 26 11:31:54 crc kubenswrapper[4867]: I0126 11:31:54.622532 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"default-dockercfg-rgxlz" Jan 26 11:31:54 crc kubenswrapper[4867]: I0126 11:31:54.624054 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/72df59a55f851b4970af82627fba49d1b5f2202043ac4017c1bc725d0cktqsd"] Jan 26 11:31:54 crc kubenswrapper[4867]: I0126 11:31:54.744561 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/81541a17-1078-4ebe-b702-4d95a4ae8771-util\") pod \"72df59a55f851b4970af82627fba49d1b5f2202043ac4017c1bc725d0cktqsd\" (UID: \"81541a17-1078-4ebe-b702-4d95a4ae8771\") " pod="openstack-operators/72df59a55f851b4970af82627fba49d1b5f2202043ac4017c1bc725d0cktqsd" Jan 26 11:31:54 crc kubenswrapper[4867]: I0126 11:31:54.744614 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9t28c\" (UniqueName: \"kubernetes.io/projected/81541a17-1078-4ebe-b702-4d95a4ae8771-kube-api-access-9t28c\") pod \"72df59a55f851b4970af82627fba49d1b5f2202043ac4017c1bc725d0cktqsd\" (UID: \"81541a17-1078-4ebe-b702-4d95a4ae8771\") " pod="openstack-operators/72df59a55f851b4970af82627fba49d1b5f2202043ac4017c1bc725d0cktqsd" Jan 26 11:31:54 crc kubenswrapper[4867]: I0126 11:31:54.744672 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/81541a17-1078-4ebe-b702-4d95a4ae8771-bundle\") pod \"72df59a55f851b4970af82627fba49d1b5f2202043ac4017c1bc725d0cktqsd\" (UID: \"81541a17-1078-4ebe-b702-4d95a4ae8771\") " pod="openstack-operators/72df59a55f851b4970af82627fba49d1b5f2202043ac4017c1bc725d0cktqsd" Jan 26 11:31:54 crc kubenswrapper[4867]: I0126 
11:31:54.845636 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/81541a17-1078-4ebe-b702-4d95a4ae8771-util\") pod \"72df59a55f851b4970af82627fba49d1b5f2202043ac4017c1bc725d0cktqsd\" (UID: \"81541a17-1078-4ebe-b702-4d95a4ae8771\") " pod="openstack-operators/72df59a55f851b4970af82627fba49d1b5f2202043ac4017c1bc725d0cktqsd" Jan 26 11:31:54 crc kubenswrapper[4867]: I0126 11:31:54.845691 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9t28c\" (UniqueName: \"kubernetes.io/projected/81541a17-1078-4ebe-b702-4d95a4ae8771-kube-api-access-9t28c\") pod \"72df59a55f851b4970af82627fba49d1b5f2202043ac4017c1bc725d0cktqsd\" (UID: \"81541a17-1078-4ebe-b702-4d95a4ae8771\") " pod="openstack-operators/72df59a55f851b4970af82627fba49d1b5f2202043ac4017c1bc725d0cktqsd" Jan 26 11:31:54 crc kubenswrapper[4867]: I0126 11:31:54.845721 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/81541a17-1078-4ebe-b702-4d95a4ae8771-bundle\") pod \"72df59a55f851b4970af82627fba49d1b5f2202043ac4017c1bc725d0cktqsd\" (UID: \"81541a17-1078-4ebe-b702-4d95a4ae8771\") " pod="openstack-operators/72df59a55f851b4970af82627fba49d1b5f2202043ac4017c1bc725d0cktqsd" Jan 26 11:31:54 crc kubenswrapper[4867]: I0126 11:31:54.846347 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/81541a17-1078-4ebe-b702-4d95a4ae8771-util\") pod \"72df59a55f851b4970af82627fba49d1b5f2202043ac4017c1bc725d0cktqsd\" (UID: \"81541a17-1078-4ebe-b702-4d95a4ae8771\") " pod="openstack-operators/72df59a55f851b4970af82627fba49d1b5f2202043ac4017c1bc725d0cktqsd" Jan 26 11:31:54 crc kubenswrapper[4867]: I0126 11:31:54.846362 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: 
\"kubernetes.io/empty-dir/81541a17-1078-4ebe-b702-4d95a4ae8771-bundle\") pod \"72df59a55f851b4970af82627fba49d1b5f2202043ac4017c1bc725d0cktqsd\" (UID: \"81541a17-1078-4ebe-b702-4d95a4ae8771\") " pod="openstack-operators/72df59a55f851b4970af82627fba49d1b5f2202043ac4017c1bc725d0cktqsd" Jan 26 11:31:54 crc kubenswrapper[4867]: I0126 11:31:54.866954 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9t28c\" (UniqueName: \"kubernetes.io/projected/81541a17-1078-4ebe-b702-4d95a4ae8771-kube-api-access-9t28c\") pod \"72df59a55f851b4970af82627fba49d1b5f2202043ac4017c1bc725d0cktqsd\" (UID: \"81541a17-1078-4ebe-b702-4d95a4ae8771\") " pod="openstack-operators/72df59a55f851b4970af82627fba49d1b5f2202043ac4017c1bc725d0cktqsd" Jan 26 11:31:54 crc kubenswrapper[4867]: I0126 11:31:54.933333 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/72df59a55f851b4970af82627fba49d1b5f2202043ac4017c1bc725d0cktqsd" Jan 26 11:31:55 crc kubenswrapper[4867]: I0126 11:31:55.148080 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/72df59a55f851b4970af82627fba49d1b5f2202043ac4017c1bc725d0cktqsd"] Jan 26 11:31:55 crc kubenswrapper[4867]: I0126 11:31:55.284643 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/72df59a55f851b4970af82627fba49d1b5f2202043ac4017c1bc725d0cktqsd" event={"ID":"81541a17-1078-4ebe-b702-4d95a4ae8771","Type":"ContainerStarted","Data":"043048062c735323f9a47a83eee2dacdcbd46a46f60837793e5af245156afd9a"} Jan 26 11:31:56 crc kubenswrapper[4867]: I0126 11:31:56.292570 4867 generic.go:334] "Generic (PLEG): container finished" podID="81541a17-1078-4ebe-b702-4d95a4ae8771" containerID="f66b3b8ab8b601180864a072eb2e4545a3f708c56adfdbb448759980a2a9294c" exitCode=0 Jan 26 11:31:56 crc kubenswrapper[4867]: I0126 11:31:56.292628 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/72df59a55f851b4970af82627fba49d1b5f2202043ac4017c1bc725d0cktqsd" event={"ID":"81541a17-1078-4ebe-b702-4d95a4ae8771","Type":"ContainerDied","Data":"f66b3b8ab8b601180864a072eb2e4545a3f708c56adfdbb448759980a2a9294c"} Jan 26 11:31:57 crc kubenswrapper[4867]: I0126 11:31:57.304729 4867 generic.go:334] "Generic (PLEG): container finished" podID="81541a17-1078-4ebe-b702-4d95a4ae8771" containerID="11dd0db432534dbcdc1c6f3f059480103fe6476bfa515b01bf1e2aa29bc8aaf2" exitCode=0 Jan 26 11:31:57 crc kubenswrapper[4867]: I0126 11:31:57.304794 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/72df59a55f851b4970af82627fba49d1b5f2202043ac4017c1bc725d0cktqsd" event={"ID":"81541a17-1078-4ebe-b702-4d95a4ae8771","Type":"ContainerDied","Data":"11dd0db432534dbcdc1c6f3f059480103fe6476bfa515b01bf1e2aa29bc8aaf2"} Jan 26 11:31:58 crc kubenswrapper[4867]: I0126 11:31:58.316130 4867 generic.go:334] "Generic (PLEG): container finished" podID="81541a17-1078-4ebe-b702-4d95a4ae8771" containerID="73025e556dc27e00af27d38ae94c99d4bddfa40c747229de1fe60bc3f3c2f4c2" exitCode=0 Jan 26 11:31:58 crc kubenswrapper[4867]: I0126 11:31:58.316188 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/72df59a55f851b4970af82627fba49d1b5f2202043ac4017c1bc725d0cktqsd" event={"ID":"81541a17-1078-4ebe-b702-4d95a4ae8771","Type":"ContainerDied","Data":"73025e556dc27e00af27d38ae94c99d4bddfa40c747229de1fe60bc3f3c2f4c2"} Jan 26 11:31:59 crc kubenswrapper[4867]: I0126 11:31:59.565781 4867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/72df59a55f851b4970af82627fba49d1b5f2202043ac4017c1bc725d0cktqsd" Jan 26 11:31:59 crc kubenswrapper[4867]: I0126 11:31:59.717558 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/81541a17-1078-4ebe-b702-4d95a4ae8771-bundle\") pod \"81541a17-1078-4ebe-b702-4d95a4ae8771\" (UID: \"81541a17-1078-4ebe-b702-4d95a4ae8771\") " Jan 26 11:31:59 crc kubenswrapper[4867]: I0126 11:31:59.717974 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/81541a17-1078-4ebe-b702-4d95a4ae8771-util\") pod \"81541a17-1078-4ebe-b702-4d95a4ae8771\" (UID: \"81541a17-1078-4ebe-b702-4d95a4ae8771\") " Jan 26 11:31:59 crc kubenswrapper[4867]: I0126 11:31:59.718055 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9t28c\" (UniqueName: \"kubernetes.io/projected/81541a17-1078-4ebe-b702-4d95a4ae8771-kube-api-access-9t28c\") pod \"81541a17-1078-4ebe-b702-4d95a4ae8771\" (UID: \"81541a17-1078-4ebe-b702-4d95a4ae8771\") " Jan 26 11:31:59 crc kubenswrapper[4867]: I0126 11:31:59.719585 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/81541a17-1078-4ebe-b702-4d95a4ae8771-bundle" (OuterVolumeSpecName: "bundle") pod "81541a17-1078-4ebe-b702-4d95a4ae8771" (UID: "81541a17-1078-4ebe-b702-4d95a4ae8771"). InnerVolumeSpecName "bundle". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 11:31:59 crc kubenswrapper[4867]: I0126 11:31:59.719857 4867 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/81541a17-1078-4ebe-b702-4d95a4ae8771-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 11:31:59 crc kubenswrapper[4867]: I0126 11:31:59.726462 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/81541a17-1078-4ebe-b702-4d95a4ae8771-kube-api-access-9t28c" (OuterVolumeSpecName: "kube-api-access-9t28c") pod "81541a17-1078-4ebe-b702-4d95a4ae8771" (UID: "81541a17-1078-4ebe-b702-4d95a4ae8771"). InnerVolumeSpecName "kube-api-access-9t28c". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 11:31:59 crc kubenswrapper[4867]: I0126 11:31:59.734143 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/81541a17-1078-4ebe-b702-4d95a4ae8771-util" (OuterVolumeSpecName: "util") pod "81541a17-1078-4ebe-b702-4d95a4ae8771" (UID: "81541a17-1078-4ebe-b702-4d95a4ae8771"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 11:31:59 crc kubenswrapper[4867]: I0126 11:31:59.821860 4867 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/81541a17-1078-4ebe-b702-4d95a4ae8771-util\") on node \"crc\" DevicePath \"\"" Jan 26 11:31:59 crc kubenswrapper[4867]: I0126 11:31:59.821905 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9t28c\" (UniqueName: \"kubernetes.io/projected/81541a17-1078-4ebe-b702-4d95a4ae8771-kube-api-access-9t28c\") on node \"crc\" DevicePath \"\"" Jan 26 11:32:00 crc kubenswrapper[4867]: I0126 11:32:00.335032 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/72df59a55f851b4970af82627fba49d1b5f2202043ac4017c1bc725d0cktqsd" event={"ID":"81541a17-1078-4ebe-b702-4d95a4ae8771","Type":"ContainerDied","Data":"043048062c735323f9a47a83eee2dacdcbd46a46f60837793e5af245156afd9a"} Jan 26 11:32:00 crc kubenswrapper[4867]: I0126 11:32:00.335091 4867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/72df59a55f851b4970af82627fba49d1b5f2202043ac4017c1bc725d0cktqsd" Jan 26 11:32:00 crc kubenswrapper[4867]: I0126 11:32:00.335119 4867 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="043048062c735323f9a47a83eee2dacdcbd46a46f60837793e5af245156afd9a" Jan 26 11:32:07 crc kubenswrapper[4867]: I0126 11:32:07.261403 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-init-74894dff96-wh5tx"] Jan 26 11:32:07 crc kubenswrapper[4867]: E0126 11:32:07.262658 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="81541a17-1078-4ebe-b702-4d95a4ae8771" containerName="pull" Jan 26 11:32:07 crc kubenswrapper[4867]: I0126 11:32:07.262678 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="81541a17-1078-4ebe-b702-4d95a4ae8771" containerName="pull" Jan 26 11:32:07 crc kubenswrapper[4867]: E0126 11:32:07.262709 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="81541a17-1078-4ebe-b702-4d95a4ae8771" containerName="util" Jan 26 11:32:07 crc kubenswrapper[4867]: I0126 11:32:07.262720 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="81541a17-1078-4ebe-b702-4d95a4ae8771" containerName="util" Jan 26 11:32:07 crc kubenswrapper[4867]: E0126 11:32:07.262731 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="81541a17-1078-4ebe-b702-4d95a4ae8771" containerName="extract" Jan 26 11:32:07 crc kubenswrapper[4867]: I0126 11:32:07.262738 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="81541a17-1078-4ebe-b702-4d95a4ae8771" containerName="extract" Jan 26 11:32:07 crc kubenswrapper[4867]: I0126 11:32:07.262861 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="81541a17-1078-4ebe-b702-4d95a4ae8771" containerName="extract" Jan 26 11:32:07 crc kubenswrapper[4867]: I0126 11:32:07.263374 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-init-74894dff96-wh5tx" Jan 26 11:32:07 crc kubenswrapper[4867]: I0126 11:32:07.266623 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-init-dockercfg-j2jh9" Jan 26 11:32:07 crc kubenswrapper[4867]: I0126 11:32:07.329248 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8jbxx\" (UniqueName: \"kubernetes.io/projected/3392fcb6-70d9-46f0-954b-81e2cee79a72-kube-api-access-8jbxx\") pod \"openstack-operator-controller-init-74894dff96-wh5tx\" (UID: \"3392fcb6-70d9-46f0-954b-81e2cee79a72\") " pod="openstack-operators/openstack-operator-controller-init-74894dff96-wh5tx" Jan 26 11:32:07 crc kubenswrapper[4867]: I0126 11:32:07.369211 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-init-74894dff96-wh5tx"] Jan 26 11:32:07 crc kubenswrapper[4867]: I0126 11:32:07.431770 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8jbxx\" (UniqueName: \"kubernetes.io/projected/3392fcb6-70d9-46f0-954b-81e2cee79a72-kube-api-access-8jbxx\") pod \"openstack-operator-controller-init-74894dff96-wh5tx\" (UID: \"3392fcb6-70d9-46f0-954b-81e2cee79a72\") " pod="openstack-operators/openstack-operator-controller-init-74894dff96-wh5tx" Jan 26 11:32:07 crc kubenswrapper[4867]: I0126 11:32:07.467169 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8jbxx\" (UniqueName: \"kubernetes.io/projected/3392fcb6-70d9-46f0-954b-81e2cee79a72-kube-api-access-8jbxx\") pod \"openstack-operator-controller-init-74894dff96-wh5tx\" (UID: \"3392fcb6-70d9-46f0-954b-81e2cee79a72\") " pod="openstack-operators/openstack-operator-controller-init-74894dff96-wh5tx" Jan 26 11:32:07 crc kubenswrapper[4867]: I0126 11:32:07.589536 4867 util.go:30] "No sandbox for pod can be 
found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-init-74894dff96-wh5tx" Jan 26 11:32:07 crc kubenswrapper[4867]: I0126 11:32:07.857906 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-init-74894dff96-wh5tx"] Jan 26 11:32:08 crc kubenswrapper[4867]: I0126 11:32:08.392250 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-init-74894dff96-wh5tx" event={"ID":"3392fcb6-70d9-46f0-954b-81e2cee79a72","Type":"ContainerStarted","Data":"d542fd98c166a36411001d9c4e4e03efd633370d5c8df9978415e3659e5a4d2e"} Jan 26 11:32:12 crc kubenswrapper[4867]: I0126 11:32:12.424287 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-init-74894dff96-wh5tx" event={"ID":"3392fcb6-70d9-46f0-954b-81e2cee79a72","Type":"ContainerStarted","Data":"8c5b9827fbe8d4d08a9879184c2613f0eaf29cee244d9c28ff1e44f84d8ffcd4"} Jan 26 11:32:12 crc kubenswrapper[4867]: I0126 11:32:12.425023 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-init-74894dff96-wh5tx" Jan 26 11:32:12 crc kubenswrapper[4867]: I0126 11:32:12.495078 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-init-74894dff96-wh5tx" podStartSLOduration=1.302420289 podStartE2EDuration="5.495043642s" podCreationTimestamp="2026-01-26 11:32:07 +0000 UTC" firstStartedPulling="2026-01-26 11:32:07.864916559 +0000 UTC m=+877.563491469" lastFinishedPulling="2026-01-26 11:32:12.057539912 +0000 UTC m=+881.756114822" observedRunningTime="2026-01-26 11:32:12.488768581 +0000 UTC m=+882.187343531" watchObservedRunningTime="2026-01-26 11:32:12.495043642 +0000 UTC m=+882.193618562" Jan 26 11:32:17 crc kubenswrapper[4867]: I0126 11:32:17.593896 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="ready" pod="openstack-operators/openstack-operator-controller-init-74894dff96-wh5tx" Jan 26 11:32:36 crc kubenswrapper[4867]: I0126 11:32:36.293981 4867 patch_prober.go:28] interesting pod/machine-config-daemon-g6cth container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 11:32:36 crc kubenswrapper[4867]: I0126 11:32:36.294761 4867 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-g6cth" podUID="115cad9f-057f-4e63-b408-8fa7a358a191" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 11:32:55 crc kubenswrapper[4867]: I0126 11:32:55.218115 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/barbican-operator-controller-manager-7f86f8796f-rgg4g"] Jan 26 11:32:55 crc kubenswrapper[4867]: I0126 11:32:55.219483 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-7f86f8796f-rgg4g" Jan 26 11:32:55 crc kubenswrapper[4867]: I0126 11:32:55.225100 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"barbican-operator-controller-manager-dockercfg-8np55" Jan 26 11:32:55 crc kubenswrapper[4867]: I0126 11:32:55.231236 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7zdhg\" (UniqueName: \"kubernetes.io/projected/34c3c36b-d905-4349-8909-bd15951aca68-kube-api-access-7zdhg\") pod \"barbican-operator-controller-manager-7f86f8796f-rgg4g\" (UID: \"34c3c36b-d905-4349-8909-bd15951aca68\") " pod="openstack-operators/barbican-operator-controller-manager-7f86f8796f-rgg4g" Jan 26 11:32:55 crc kubenswrapper[4867]: I0126 11:32:55.239201 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/cinder-operator-controller-manager-7478f7dbf9-ccp9p"] Jan 26 11:32:55 crc kubenswrapper[4867]: I0126 11:32:55.240276 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-7478f7dbf9-ccp9p" Jan 26 11:32:55 crc kubenswrapper[4867]: I0126 11:32:55.243941 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-7f86f8796f-rgg4g"] Jan 26 11:32:55 crc kubenswrapper[4867]: I0126 11:32:55.244754 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"cinder-operator-controller-manager-dockercfg-bg2bz" Jan 26 11:32:55 crc kubenswrapper[4867]: I0126 11:32:55.258089 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-7478f7dbf9-ccp9p"] Jan 26 11:32:55 crc kubenswrapper[4867]: I0126 11:32:55.277235 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/glance-operator-controller-manager-78fdd796fd-gthnl"] Jan 26 11:32:55 crc kubenswrapper[4867]: I0126 11:32:55.278129 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-gthnl" Jan 26 11:32:55 crc kubenswrapper[4867]: I0126 11:32:55.283016 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/designate-operator-controller-manager-b45d7bf98-8w8hc"] Jan 26 11:32:55 crc kubenswrapper[4867]: I0126 11:32:55.283878 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-8w8hc" Jan 26 11:32:55 crc kubenswrapper[4867]: I0126 11:32:55.285647 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"designate-operator-controller-manager-dockercfg-2wnm4" Jan 26 11:32:55 crc kubenswrapper[4867]: I0126 11:32:55.286324 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"glance-operator-controller-manager-dockercfg-njf7t" Jan 26 11:32:55 crc kubenswrapper[4867]: I0126 11:32:55.295433 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-78fdd796fd-gthnl"] Jan 26 11:32:55 crc kubenswrapper[4867]: I0126 11:32:55.328656 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-b45d7bf98-8w8hc"] Jan 26 11:32:55 crc kubenswrapper[4867]: I0126 11:32:55.336209 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7zdhg\" (UniqueName: \"kubernetes.io/projected/34c3c36b-d905-4349-8909-bd15951aca68-kube-api-access-7zdhg\") pod \"barbican-operator-controller-manager-7f86f8796f-rgg4g\" (UID: \"34c3c36b-d905-4349-8909-bd15951aca68\") " pod="openstack-operators/barbican-operator-controller-manager-7f86f8796f-rgg4g" Jan 26 11:32:55 crc kubenswrapper[4867]: I0126 11:32:55.365291 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/heat-operator-controller-manager-594c8c9d5d-gh4fm"] Jan 26 11:32:55 crc kubenswrapper[4867]: I0126 11:32:55.366337 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-gh4fm" Jan 26 11:32:55 crc kubenswrapper[4867]: I0126 11:32:55.368713 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"heat-operator-controller-manager-dockercfg-4wqxl" Jan 26 11:32:55 crc kubenswrapper[4867]: I0126 11:32:55.377302 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7zdhg\" (UniqueName: \"kubernetes.io/projected/34c3c36b-d905-4349-8909-bd15951aca68-kube-api-access-7zdhg\") pod \"barbican-operator-controller-manager-7f86f8796f-rgg4g\" (UID: \"34c3c36b-d905-4349-8909-bd15951aca68\") " pod="openstack-operators/barbican-operator-controller-manager-7f86f8796f-rgg4g" Jan 26 11:32:55 crc kubenswrapper[4867]: I0126 11:32:55.383621 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/horizon-operator-controller-manager-77d5c5b54f-pgqvv"] Jan 26 11:32:55 crc kubenswrapper[4867]: I0126 11:32:55.384851 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-pgqvv" Jan 26 11:32:55 crc kubenswrapper[4867]: I0126 11:32:55.388766 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"horizon-operator-controller-manager-dockercfg-dsrsh" Jan 26 11:32:55 crc kubenswrapper[4867]: I0126 11:32:55.396485 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-77d5c5b54f-pgqvv"] Jan 26 11:32:55 crc kubenswrapper[4867]: I0126 11:32:55.403721 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-594c8c9d5d-gh4fm"] Jan 26 11:32:55 crc kubenswrapper[4867]: I0126 11:32:55.407002 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/infra-operator-controller-manager-758868c854-chnbm"] Jan 26 11:32:55 crc kubenswrapper[4867]: I0126 11:32:55.408561 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-758868c854-chnbm" Jan 26 11:32:55 crc kubenswrapper[4867]: I0126 11:32:55.411394 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-webhook-server-cert" Jan 26 11:32:55 crc kubenswrapper[4867]: I0126 11:32:55.414855 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-controller-manager-dockercfg-tjfvv" Jan 26 11:32:55 crc kubenswrapper[4867]: I0126 11:32:55.418360 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ironic-operator-controller-manager-598d88d885-fjpln"] Jan 26 11:32:55 crc kubenswrapper[4867]: I0126 11:32:55.420388 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-598d88d885-fjpln" Jan 26 11:32:55 crc kubenswrapper[4867]: I0126 11:32:55.423944 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ironic-operator-controller-manager-dockercfg-ggldz" Jan 26 11:32:55 crc kubenswrapper[4867]: I0126 11:32:55.426390 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/keystone-operator-controller-manager-b8b6d4659-tzb4g"] Jan 26 11:32:55 crc kubenswrapper[4867]: I0126 11:32:55.427440 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-tzb4g" Jan 26 11:32:55 crc kubenswrapper[4867]: I0126 11:32:55.432475 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"keystone-operator-controller-manager-dockercfg-gb95f" Jan 26 11:32:55 crc kubenswrapper[4867]: I0126 11:32:55.437100 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bq6hp\" (UniqueName: \"kubernetes.io/projected/b1c6af74-51a5-45bb-afed-9b8b19a5c7df-kube-api-access-bq6hp\") pod \"designate-operator-controller-manager-b45d7bf98-8w8hc\" (UID: \"b1c6af74-51a5-45bb-afed-9b8b19a5c7df\") " pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-8w8hc" Jan 26 11:32:55 crc kubenswrapper[4867]: I0126 11:32:55.437164 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-98chb\" (UniqueName: \"kubernetes.io/projected/10ae2757-3e84-4ad1-8459-fca684db2964-kube-api-access-98chb\") pod \"cinder-operator-controller-manager-7478f7dbf9-ccp9p\" (UID: \"10ae2757-3e84-4ad1-8459-fca684db2964\") " pod="openstack-operators/cinder-operator-controller-manager-7478f7dbf9-ccp9p" Jan 26 11:32:55 crc kubenswrapper[4867]: I0126 11:32:55.437217 4867 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4qdjc\" (UniqueName: \"kubernetes.io/projected/4f33548d-3a14-41f4-8447-feb86b7cf366-kube-api-access-4qdjc\") pod \"glance-operator-controller-manager-78fdd796fd-gthnl\" (UID: \"4f33548d-3a14-41f4-8447-feb86b7cf366\") " pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-gthnl" Jan 26 11:32:55 crc kubenswrapper[4867]: I0126 11:32:55.452274 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/manila-operator-controller-manager-78c6999f6f-5s6fg"] Jan 26 11:32:55 crc kubenswrapper[4867]: I0126 11:32:55.453185 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-b8b6d4659-tzb4g"] Jan 26 11:32:55 crc kubenswrapper[4867]: I0126 11:32:55.453299 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-5s6fg" Jan 26 11:32:55 crc kubenswrapper[4867]: I0126 11:32:55.458304 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-598d88d885-fjpln"] Jan 26 11:32:55 crc kubenswrapper[4867]: I0126 11:32:55.463339 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-758868c854-chnbm"] Jan 26 11:32:55 crc kubenswrapper[4867]: I0126 11:32:55.467135 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"manila-operator-controller-manager-dockercfg-8ctfk" Jan 26 11:32:55 crc kubenswrapper[4867]: I0126 11:32:55.511805 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-78c6999f6f-5s6fg"] Jan 26 11:32:55 crc kubenswrapper[4867]: I0126 11:32:55.538353 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-khq8w"] Jan 26 11:32:55 
crc kubenswrapper[4867]: I0126 11:32:55.539906 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-khq8w" Jan 26 11:32:55 crc kubenswrapper[4867]: I0126 11:32:55.540669 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-7f86f8796f-rgg4g" Jan 26 11:32:55 crc kubenswrapper[4867]: I0126 11:32:55.541986 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tzw24\" (UniqueName: \"kubernetes.io/projected/3eb62ea0-8291-49ec-aa8d-cb40ba93ecc3-kube-api-access-tzw24\") pod \"manila-operator-controller-manager-78c6999f6f-5s6fg\" (UID: \"3eb62ea0-8291-49ec-aa8d-cb40ba93ecc3\") " pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-5s6fg" Jan 26 11:32:55 crc kubenswrapper[4867]: I0126 11:32:55.542029 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bq6hp\" (UniqueName: \"kubernetes.io/projected/b1c6af74-51a5-45bb-afed-9b8b19a5c7df-kube-api-access-bq6hp\") pod \"designate-operator-controller-manager-b45d7bf98-8w8hc\" (UID: \"b1c6af74-51a5-45bb-afed-9b8b19a5c7df\") " pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-8w8hc" Jan 26 11:32:55 crc kubenswrapper[4867]: I0126 11:32:55.542063 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rftvd\" (UniqueName: \"kubernetes.io/projected/073c6f18-4275-4233-8308-39307e2cc0c7-kube-api-access-rftvd\") pod \"heat-operator-controller-manager-594c8c9d5d-gh4fm\" (UID: \"073c6f18-4275-4233-8308-39307e2cc0c7\") " pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-gh4fm" Jan 26 11:32:55 crc kubenswrapper[4867]: I0126 11:32:55.542088 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-qgv4b\" (UniqueName: \"kubernetes.io/projected/5402225a-cbc7-4b7c-8036-9b8159baee31-kube-api-access-qgv4b\") pod \"horizon-operator-controller-manager-77d5c5b54f-pgqvv\" (UID: \"5402225a-cbc7-4b7c-8036-9b8159baee31\") " pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-pgqvv" Jan 26 11:32:55 crc kubenswrapper[4867]: I0126 11:32:55.542150 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-98chb\" (UniqueName: \"kubernetes.io/projected/10ae2757-3e84-4ad1-8459-fca684db2964-kube-api-access-98chb\") pod \"cinder-operator-controller-manager-7478f7dbf9-ccp9p\" (UID: \"10ae2757-3e84-4ad1-8459-fca684db2964\") " pod="openstack-operators/cinder-operator-controller-manager-7478f7dbf9-ccp9p" Jan 26 11:32:55 crc kubenswrapper[4867]: I0126 11:32:55.542186 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vmc4n\" (UniqueName: \"kubernetes.io/projected/9da13f82-2fca-4922-8b27-b11d702897ff-kube-api-access-vmc4n\") pod \"keystone-operator-controller-manager-b8b6d4659-tzb4g\" (UID: \"9da13f82-2fca-4922-8b27-b11d702897ff\") " pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-tzb4g" Jan 26 11:32:55 crc kubenswrapper[4867]: I0126 11:32:55.543466 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"mariadb-operator-controller-manager-dockercfg-7rhmz" Jan 26 11:32:55 crc kubenswrapper[4867]: I0126 11:32:55.550644 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4qdjc\" (UniqueName: \"kubernetes.io/projected/4f33548d-3a14-41f4-8447-feb86b7cf366-kube-api-access-4qdjc\") pod \"glance-operator-controller-manager-78fdd796fd-gthnl\" (UID: \"4f33548d-3a14-41f4-8447-feb86b7cf366\") " pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-gthnl" Jan 26 11:32:55 crc kubenswrapper[4867]: I0126 11:32:55.550704 4867 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8pjp5\" (UniqueName: \"kubernetes.io/projected/1dce245d-cfd7-440a-9797-2e8c05641673-kube-api-access-8pjp5\") pod \"infra-operator-controller-manager-758868c854-chnbm\" (UID: \"1dce245d-cfd7-440a-9797-2e8c05641673\") " pod="openstack-operators/infra-operator-controller-manager-758868c854-chnbm" Jan 26 11:32:55 crc kubenswrapper[4867]: I0126 11:32:55.550732 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/1dce245d-cfd7-440a-9797-2e8c05641673-cert\") pod \"infra-operator-controller-manager-758868c854-chnbm\" (UID: \"1dce245d-cfd7-440a-9797-2e8c05641673\") " pod="openstack-operators/infra-operator-controller-manager-758868c854-chnbm" Jan 26 11:32:55 crc kubenswrapper[4867]: I0126 11:32:55.550769 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tqldv\" (UniqueName: \"kubernetes.io/projected/242c7502-97f2-4ac9-96ba-17b04f96a5b5-kube-api-access-tqldv\") pod \"ironic-operator-controller-manager-598d88d885-fjpln\" (UID: \"242c7502-97f2-4ac9-96ba-17b04f96a5b5\") " pod="openstack-operators/ironic-operator-controller-manager-598d88d885-fjpln" Jan 26 11:32:55 crc kubenswrapper[4867]: I0126 11:32:55.624447 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/neutron-operator-controller-manager-78d58447c5-wz989"] Jan 26 11:32:55 crc kubenswrapper[4867]: I0126 11:32:55.626282 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-98chb\" (UniqueName: \"kubernetes.io/projected/10ae2757-3e84-4ad1-8459-fca684db2964-kube-api-access-98chb\") pod \"cinder-operator-controller-manager-7478f7dbf9-ccp9p\" (UID: \"10ae2757-3e84-4ad1-8459-fca684db2964\") " pod="openstack-operators/cinder-operator-controller-manager-7478f7dbf9-ccp9p" Jan 26 11:32:55 crc 
kubenswrapper[4867]: I0126 11:32:55.628005 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-78d58447c5-wz989" Jan 26 11:32:55 crc kubenswrapper[4867]: I0126 11:32:55.629922 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4qdjc\" (UniqueName: \"kubernetes.io/projected/4f33548d-3a14-41f4-8447-feb86b7cf366-kube-api-access-4qdjc\") pod \"glance-operator-controller-manager-78fdd796fd-gthnl\" (UID: \"4f33548d-3a14-41f4-8447-feb86b7cf366\") " pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-gthnl" Jan 26 11:32:55 crc kubenswrapper[4867]: I0126 11:32:55.631703 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bq6hp\" (UniqueName: \"kubernetes.io/projected/b1c6af74-51a5-45bb-afed-9b8b19a5c7df-kube-api-access-bq6hp\") pod \"designate-operator-controller-manager-b45d7bf98-8w8hc\" (UID: \"b1c6af74-51a5-45bb-afed-9b8b19a5c7df\") " pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-8w8hc" Jan 26 11:32:55 crc kubenswrapper[4867]: I0126 11:32:55.640173 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"neutron-operator-controller-manager-dockercfg-q8tk2" Jan 26 11:32:55 crc kubenswrapper[4867]: I0126 11:32:55.650680 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-khq8w"] Jan 26 11:32:55 crc kubenswrapper[4867]: I0126 11:32:55.652010 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8pjp5\" (UniqueName: \"kubernetes.io/projected/1dce245d-cfd7-440a-9797-2e8c05641673-kube-api-access-8pjp5\") pod \"infra-operator-controller-manager-758868c854-chnbm\" (UID: \"1dce245d-cfd7-440a-9797-2e8c05641673\") " pod="openstack-operators/infra-operator-controller-manager-758868c854-chnbm" Jan 26 11:32:55 crc 
kubenswrapper[4867]: I0126 11:32:55.652046 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/1dce245d-cfd7-440a-9797-2e8c05641673-cert\") pod \"infra-operator-controller-manager-758868c854-chnbm\" (UID: \"1dce245d-cfd7-440a-9797-2e8c05641673\") " pod="openstack-operators/infra-operator-controller-manager-758868c854-chnbm" Jan 26 11:32:55 crc kubenswrapper[4867]: I0126 11:32:55.652075 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tqldv\" (UniqueName: \"kubernetes.io/projected/242c7502-97f2-4ac9-96ba-17b04f96a5b5-kube-api-access-tqldv\") pod \"ironic-operator-controller-manager-598d88d885-fjpln\" (UID: \"242c7502-97f2-4ac9-96ba-17b04f96a5b5\") " pod="openstack-operators/ironic-operator-controller-manager-598d88d885-fjpln" Jan 26 11:32:55 crc kubenswrapper[4867]: I0126 11:32:55.652105 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tzw24\" (UniqueName: \"kubernetes.io/projected/3eb62ea0-8291-49ec-aa8d-cb40ba93ecc3-kube-api-access-tzw24\") pod \"manila-operator-controller-manager-78c6999f6f-5s6fg\" (UID: \"3eb62ea0-8291-49ec-aa8d-cb40ba93ecc3\") " pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-5s6fg" Jan 26 11:32:55 crc kubenswrapper[4867]: I0126 11:32:55.652152 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rftvd\" (UniqueName: \"kubernetes.io/projected/073c6f18-4275-4233-8308-39307e2cc0c7-kube-api-access-rftvd\") pod \"heat-operator-controller-manager-594c8c9d5d-gh4fm\" (UID: \"073c6f18-4275-4233-8308-39307e2cc0c7\") " pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-gh4fm" Jan 26 11:32:55 crc kubenswrapper[4867]: I0126 11:32:55.652178 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qgv4b\" (UniqueName: 
\"kubernetes.io/projected/5402225a-cbc7-4b7c-8036-9b8159baee31-kube-api-access-qgv4b\") pod \"horizon-operator-controller-manager-77d5c5b54f-pgqvv\" (UID: \"5402225a-cbc7-4b7c-8036-9b8159baee31\") " pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-pgqvv" Jan 26 11:32:55 crc kubenswrapper[4867]: I0126 11:32:55.652205 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vmc4n\" (UniqueName: \"kubernetes.io/projected/9da13f82-2fca-4922-8b27-b11d702897ff-kube-api-access-vmc4n\") pod \"keystone-operator-controller-manager-b8b6d4659-tzb4g\" (UID: \"9da13f82-2fca-4922-8b27-b11d702897ff\") " pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-tzb4g" Jan 26 11:32:55 crc kubenswrapper[4867]: I0126 11:32:55.652276 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-45rd9\" (UniqueName: \"kubernetes.io/projected/2034ae77-372d-473a-b038-83ee4c3720c0-kube-api-access-45rd9\") pod \"mariadb-operator-controller-manager-6b9fb5fdcb-khq8w\" (UID: \"2034ae77-372d-473a-b038-83ee4c3720c0\") " pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-khq8w" Jan 26 11:32:55 crc kubenswrapper[4867]: E0126 11:32:55.652667 4867 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 26 11:32:55 crc kubenswrapper[4867]: E0126 11:32:55.652714 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1dce245d-cfd7-440a-9797-2e8c05641673-cert podName:1dce245d-cfd7-440a-9797-2e8c05641673 nodeName:}" failed. No retries permitted until 2026-01-26 11:32:56.152693294 +0000 UTC m=+925.851268204 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/1dce245d-cfd7-440a-9797-2e8c05641673-cert") pod "infra-operator-controller-manager-758868c854-chnbm" (UID: "1dce245d-cfd7-440a-9797-2e8c05641673") : secret "infra-operator-webhook-server-cert" not found Jan 26 11:32:55 crc kubenswrapper[4867]: I0126 11:32:55.694851 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/nova-operator-controller-manager-7bdb645866-v4pfk"] Jan 26 11:32:55 crc kubenswrapper[4867]: I0126 11:32:55.696208 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-7bdb645866-v4pfk" Jan 26 11:32:55 crc kubenswrapper[4867]: I0126 11:32:55.710398 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"nova-operator-controller-manager-dockercfg-qz6cx" Jan 26 11:32:55 crc kubenswrapper[4867]: I0126 11:32:55.742888 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tzw24\" (UniqueName: \"kubernetes.io/projected/3eb62ea0-8291-49ec-aa8d-cb40ba93ecc3-kube-api-access-tzw24\") pod \"manila-operator-controller-manager-78c6999f6f-5s6fg\" (UID: \"3eb62ea0-8291-49ec-aa8d-cb40ba93ecc3\") " pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-5s6fg" Jan 26 11:32:55 crc kubenswrapper[4867]: I0126 11:32:55.757851 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kwkfq\" (UniqueName: \"kubernetes.io/projected/c9a978c7-9efb-43dc-830c-31020be6121a-kube-api-access-kwkfq\") pod \"neutron-operator-controller-manager-78d58447c5-wz989\" (UID: \"c9a978c7-9efb-43dc-830c-31020be6121a\") " pod="openstack-operators/neutron-operator-controller-manager-78d58447c5-wz989" Jan 26 11:32:55 crc kubenswrapper[4867]: I0126 11:32:55.757898 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-45rd9\" 
(UniqueName: \"kubernetes.io/projected/2034ae77-372d-473a-b038-83ee4c3720c0-kube-api-access-45rd9\") pod \"mariadb-operator-controller-manager-6b9fb5fdcb-khq8w\" (UID: \"2034ae77-372d-473a-b038-83ee4c3720c0\") " pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-khq8w" Jan 26 11:32:55 crc kubenswrapper[4867]: I0126 11:32:55.769095 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rftvd\" (UniqueName: \"kubernetes.io/projected/073c6f18-4275-4233-8308-39307e2cc0c7-kube-api-access-rftvd\") pod \"heat-operator-controller-manager-594c8c9d5d-gh4fm\" (UID: \"073c6f18-4275-4233-8308-39307e2cc0c7\") " pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-gh4fm" Jan 26 11:32:55 crc kubenswrapper[4867]: I0126 11:32:55.795725 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/octavia-operator-controller-manager-5f4cd88d46-z7djp"] Jan 26 11:32:55 crc kubenswrapper[4867]: I0126 11:32:55.837811 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-5s6fg" Jan 26 11:32:55 crc kubenswrapper[4867]: I0126 11:32:55.838247 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tqldv\" (UniqueName: \"kubernetes.io/projected/242c7502-97f2-4ac9-96ba-17b04f96a5b5-kube-api-access-tqldv\") pod \"ironic-operator-controller-manager-598d88d885-fjpln\" (UID: \"242c7502-97f2-4ac9-96ba-17b04f96a5b5\") " pod="openstack-operators/ironic-operator-controller-manager-598d88d885-fjpln" Jan 26 11:32:55 crc kubenswrapper[4867]: I0126 11:32:55.839635 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qgv4b\" (UniqueName: \"kubernetes.io/projected/5402225a-cbc7-4b7c-8036-9b8159baee31-kube-api-access-qgv4b\") pod \"horizon-operator-controller-manager-77d5c5b54f-pgqvv\" (UID: \"5402225a-cbc7-4b7c-8036-9b8159baee31\") " pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-pgqvv" Jan 26 11:32:55 crc kubenswrapper[4867]: I0126 11:32:55.846784 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8pjp5\" (UniqueName: \"kubernetes.io/projected/1dce245d-cfd7-440a-9797-2e8c05641673-kube-api-access-8pjp5\") pod \"infra-operator-controller-manager-758868c854-chnbm\" (UID: \"1dce245d-cfd7-440a-9797-2e8c05641673\") " pod="openstack-operators/infra-operator-controller-manager-758868c854-chnbm" Jan 26 11:32:55 crc kubenswrapper[4867]: I0126 11:32:55.859891 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-7478f7dbf9-ccp9p" Jan 26 11:32:55 crc kubenswrapper[4867]: I0126 11:32:55.862711 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kwkfq\" (UniqueName: \"kubernetes.io/projected/c9a978c7-9efb-43dc-830c-31020be6121a-kube-api-access-kwkfq\") pod \"neutron-operator-controller-manager-78d58447c5-wz989\" (UID: \"c9a978c7-9efb-43dc-830c-31020be6121a\") " pod="openstack-operators/neutron-operator-controller-manager-78d58447c5-wz989" Jan 26 11:32:55 crc kubenswrapper[4867]: I0126 11:32:55.862941 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zjzq7\" (UniqueName: \"kubernetes.io/projected/de2f9a68-7384-47b5-a16d-da28e04440de-kube-api-access-zjzq7\") pod \"nova-operator-controller-manager-7bdb645866-v4pfk\" (UID: \"de2f9a68-7384-47b5-a16d-da28e04440de\") " pod="openstack-operators/nova-operator-controller-manager-7bdb645866-v4pfk" Jan 26 11:32:55 crc kubenswrapper[4867]: I0126 11:32:55.847971 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-45rd9\" (UniqueName: \"kubernetes.io/projected/2034ae77-372d-473a-b038-83ee4c3720c0-kube-api-access-45rd9\") pod \"mariadb-operator-controller-manager-6b9fb5fdcb-khq8w\" (UID: \"2034ae77-372d-473a-b038-83ee4c3720c0\") " pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-khq8w" Jan 26 11:32:55 crc kubenswrapper[4867]: I0126 11:32:55.875131 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vmc4n\" (UniqueName: \"kubernetes.io/projected/9da13f82-2fca-4922-8b27-b11d702897ff-kube-api-access-vmc4n\") pod \"keystone-operator-controller-manager-b8b6d4659-tzb4g\" (UID: \"9da13f82-2fca-4922-8b27-b11d702897ff\") " pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-tzb4g" Jan 26 11:32:55 crc kubenswrapper[4867]: I0126 
11:32:55.848257 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-5f4cd88d46-z7djp" Jan 26 11:32:55 crc kubenswrapper[4867]: I0126 11:32:55.889568 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"octavia-operator-controller-manager-dockercfg-ptxnz" Jan 26 11:32:55 crc kubenswrapper[4867]: I0126 11:32:55.903604 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-7bdb645866-v4pfk"] Jan 26 11:32:55 crc kubenswrapper[4867]: I0126 11:32:55.964616 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-8w8hc" Jan 26 11:32:55 crc kubenswrapper[4867]: I0126 11:32:55.965914 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-gthnl" Jan 26 11:32:55 crc kubenswrapper[4867]: I0126 11:32:55.987786 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-grwfv\" (UniqueName: \"kubernetes.io/projected/99737677-080c-4f1a-aa91-e5162fe5f25d-kube-api-access-grwfv\") pod \"octavia-operator-controller-manager-5f4cd88d46-z7djp\" (UID: \"99737677-080c-4f1a-aa91-e5162fe5f25d\") " pod="openstack-operators/octavia-operator-controller-manager-5f4cd88d46-z7djp" Jan 26 11:32:55 crc kubenswrapper[4867]: I0126 11:32:55.987944 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zjzq7\" (UniqueName: \"kubernetes.io/projected/de2f9a68-7384-47b5-a16d-da28e04440de-kube-api-access-zjzq7\") pod \"nova-operator-controller-manager-7bdb645866-v4pfk\" (UID: \"de2f9a68-7384-47b5-a16d-da28e04440de\") " pod="openstack-operators/nova-operator-controller-manager-7bdb645866-v4pfk" Jan 26 11:32:55 crc kubenswrapper[4867]: I0126 
11:32:55.990136 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kwkfq\" (UniqueName: \"kubernetes.io/projected/c9a978c7-9efb-43dc-830c-31020be6121a-kube-api-access-kwkfq\") pod \"neutron-operator-controller-manager-78d58447c5-wz989\" (UID: \"c9a978c7-9efb-43dc-830c-31020be6121a\") " pod="openstack-operators/neutron-operator-controller-manager-78d58447c5-wz989" Jan 26 11:32:55 crc kubenswrapper[4867]: I0126 11:32:55.991766 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-5f4cd88d46-z7djp"] Jan 26 11:32:56 crc kubenswrapper[4867]: I0126 11:32:56.046946 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-gh4fm" Jan 26 11:32:56 crc kubenswrapper[4867]: I0126 11:32:56.048392 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-78d58447c5-wz989"] Jan 26 11:32:56 crc kubenswrapper[4867]: I0126 11:32:56.049032 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-pgqvv" Jan 26 11:32:56 crc kubenswrapper[4867]: I0126 11:32:56.071961 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zjzq7\" (UniqueName: \"kubernetes.io/projected/de2f9a68-7384-47b5-a16d-da28e04440de-kube-api-access-zjzq7\") pod \"nova-operator-controller-manager-7bdb645866-v4pfk\" (UID: \"de2f9a68-7384-47b5-a16d-da28e04440de\") " pod="openstack-operators/nova-operator-controller-manager-7bdb645866-v4pfk" Jan 26 11:32:56 crc kubenswrapper[4867]: I0126 11:32:56.103331 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b8549zcg2"] Jan 26 11:32:56 crc kubenswrapper[4867]: I0126 11:32:56.103796 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-598d88d885-fjpln" Jan 26 11:32:56 crc kubenswrapper[4867]: I0126 11:32:56.104879 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b8549zcg2" Jan 26 11:32:56 crc kubenswrapper[4867]: I0126 11:32:56.106865 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-grwfv\" (UniqueName: \"kubernetes.io/projected/99737677-080c-4f1a-aa91-e5162fe5f25d-kube-api-access-grwfv\") pod \"octavia-operator-controller-manager-5f4cd88d46-z7djp\" (UID: \"99737677-080c-4f1a-aa91-e5162fe5f25d\") " pod="openstack-operators/octavia-operator-controller-manager-5f4cd88d46-z7djp" Jan 26 11:32:56 crc kubenswrapper[4867]: I0126 11:32:56.117352 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-webhook-server-cert" Jan 26 11:32:56 crc kubenswrapper[4867]: I0126 11:32:56.117869 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-controller-manager-dockercfg-q5fp4" Jan 26 11:32:56 crc kubenswrapper[4867]: I0126 11:32:56.117982 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ovn-operator-controller-manager-6f75f45d54-rsv5q"] Jan 26 11:32:56 crc kubenswrapper[4867]: I0126 11:32:56.119234 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-6f75f45d54-rsv5q" Jan 26 11:32:56 crc kubenswrapper[4867]: I0126 11:32:56.121094 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ovn-operator-controller-manager-dockercfg-jq657" Jan 26 11:32:56 crc kubenswrapper[4867]: I0126 11:32:56.124268 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-tzb4g" Jan 26 11:32:56 crc kubenswrapper[4867]: I0126 11:32:56.133714 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/placement-operator-controller-manager-79d5ccc684-jjlnx"] Jan 26 11:32:56 crc kubenswrapper[4867]: I0126 11:32:56.134883 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b8549zcg2"] Jan 26 11:32:56 crc kubenswrapper[4867]: I0126 11:32:56.134982 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-79d5ccc684-jjlnx" Jan 26 11:32:56 crc kubenswrapper[4867]: I0126 11:32:56.138562 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-7bdb645866-v4pfk" Jan 26 11:32:56 crc kubenswrapper[4867]: I0126 11:32:56.139175 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"placement-operator-controller-manager-dockercfg-zzqdf" Jan 26 11:32:56 crc kubenswrapper[4867]: I0126 11:32:56.142507 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-6f75f45d54-rsv5q"] Jan 26 11:32:56 crc kubenswrapper[4867]: I0126 11:32:56.152811 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-79d5ccc684-jjlnx"] Jan 26 11:32:56 crc kubenswrapper[4867]: I0126 11:32:56.162517 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/swift-operator-controller-manager-547cbdb99f-r7pf7"] Jan 26 11:32:56 crc kubenswrapper[4867]: I0126 11:32:56.164050 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-grwfv\" (UniqueName: 
\"kubernetes.io/projected/99737677-080c-4f1a-aa91-e5162fe5f25d-kube-api-access-grwfv\") pod \"octavia-operator-controller-manager-5f4cd88d46-z7djp\" (UID: \"99737677-080c-4f1a-aa91-e5162fe5f25d\") " pod="openstack-operators/octavia-operator-controller-manager-5f4cd88d46-z7djp" Jan 26 11:32:56 crc kubenswrapper[4867]: I0126 11:32:56.167577 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-547cbdb99f-r7pf7"] Jan 26 11:32:56 crc kubenswrapper[4867]: I0126 11:32:56.167711 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-r7pf7" Jan 26 11:32:56 crc kubenswrapper[4867]: I0126 11:32:56.172577 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"swift-operator-controller-manager-dockercfg-jws9h" Jan 26 11:32:56 crc kubenswrapper[4867]: I0126 11:32:56.189535 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/test-operator-controller-manager-69797bbcbd-n6zwx"] Jan 26 11:32:56 crc kubenswrapper[4867]: I0126 11:32:56.200131 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-n6zwx" Jan 26 11:32:56 crc kubenswrapper[4867]: I0126 11:32:56.205492 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-85cd9769bb-c7klk"] Jan 26 11:32:56 crc kubenswrapper[4867]: I0126 11:32:56.208152 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-c7klk" Jan 26 11:32:56 crc kubenswrapper[4867]: I0126 11:32:56.208322 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2sg96\" (UniqueName: \"kubernetes.io/projected/829c6c7e-cc19-4f6d-a350-dea6f26f3436-kube-api-access-2sg96\") pod \"placement-operator-controller-manager-79d5ccc684-jjlnx\" (UID: \"829c6c7e-cc19-4f6d-a350-dea6f26f3436\") " pod="openstack-operators/placement-operator-controller-manager-79d5ccc684-jjlnx" Jan 26 11:32:56 crc kubenswrapper[4867]: I0126 11:32:56.208360 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/b2b3db26-bd1e-4178-ad15-3fb849d16a6c-cert\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b8549zcg2\" (UID: \"b2b3db26-bd1e-4178-ad15-3fb849d16a6c\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b8549zcg2" Jan 26 11:32:56 crc kubenswrapper[4867]: I0126 11:32:56.208445 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/1dce245d-cfd7-440a-9797-2e8c05641673-cert\") pod \"infra-operator-controller-manager-758868c854-chnbm\" (UID: \"1dce245d-cfd7-440a-9797-2e8c05641673\") " pod="openstack-operators/infra-operator-controller-manager-758868c854-chnbm" Jan 26 11:32:56 crc kubenswrapper[4867]: I0126 11:32:56.208486 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5s4xz\" (UniqueName: \"kubernetes.io/projected/ee79b4ff-ed5f-4660-9d36-2fd0c1840f84-kube-api-access-5s4xz\") pod \"swift-operator-controller-manager-547cbdb99f-r7pf7\" (UID: \"ee79b4ff-ed5f-4660-9d36-2fd0c1840f84\") " pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-r7pf7" Jan 26 11:32:56 crc kubenswrapper[4867]: I0126 
11:32:56.208526 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sqrj5\" (UniqueName: \"kubernetes.io/projected/bb8ed5d8-1a97-4cc9-bf29-99b29c6a1975-kube-api-access-sqrj5\") pod \"ovn-operator-controller-manager-6f75f45d54-rsv5q\" (UID: \"bb8ed5d8-1a97-4cc9-bf29-99b29c6a1975\") " pod="openstack-operators/ovn-operator-controller-manager-6f75f45d54-rsv5q" Jan 26 11:32:56 crc kubenswrapper[4867]: I0126 11:32:56.208588 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fmwvh\" (UniqueName: \"kubernetes.io/projected/b2b3db26-bd1e-4178-ad15-3fb849d16a6c-kube-api-access-fmwvh\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b8549zcg2\" (UID: \"b2b3db26-bd1e-4178-ad15-3fb849d16a6c\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b8549zcg2" Jan 26 11:32:56 crc kubenswrapper[4867]: E0126 11:32:56.208863 4867 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 26 11:32:56 crc kubenswrapper[4867]: E0126 11:32:56.208952 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1dce245d-cfd7-440a-9797-2e8c05641673-cert podName:1dce245d-cfd7-440a-9797-2e8c05641673 nodeName:}" failed. No retries permitted until 2026-01-26 11:32:57.208910552 +0000 UTC m=+926.907485462 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/1dce245d-cfd7-440a-9797-2e8c05641673-cert") pod "infra-operator-controller-manager-758868c854-chnbm" (UID: "1dce245d-cfd7-440a-9797-2e8c05641673") : secret "infra-operator-webhook-server-cert" not found Jan 26 11:32:56 crc kubenswrapper[4867]: I0126 11:32:56.222160 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-khq8w" Jan 26 11:32:56 crc kubenswrapper[4867]: I0126 11:32:56.229764 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-69797bbcbd-n6zwx"] Jan 26 11:32:56 crc kubenswrapper[4867]: I0126 11:32:56.249255 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-85cd9769bb-c7klk"] Jan 26 11:32:56 crc kubenswrapper[4867]: I0126 11:32:56.249824 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"test-operator-controller-manager-dockercfg-4kg4l" Jan 26 11:32:56 crc kubenswrapper[4867]: I0126 11:32:56.250431 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"telemetry-operator-controller-manager-dockercfg-4j8lp" Jan 26 11:32:56 crc kubenswrapper[4867]: I0126 11:32:56.263326 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/watcher-operator-controller-manager-564965969-df52v"] Jan 26 11:32:56 crc kubenswrapper[4867]: I0126 11:32:56.264556 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-564965969-df52v" Jan 26 11:32:56 crc kubenswrapper[4867]: I0126 11:32:56.275079 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"watcher-operator-controller-manager-dockercfg-x57nm" Jan 26 11:32:56 crc kubenswrapper[4867]: I0126 11:32:56.285414 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-78d58447c5-wz989" Jan 26 11:32:56 crc kubenswrapper[4867]: I0126 11:32:56.285453 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-5f4cd88d46-z7djp" Jan 26 11:32:56 crc kubenswrapper[4867]: I0126 11:32:56.300710 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-564965969-df52v"] Jan 26 11:32:56 crc kubenswrapper[4867]: I0126 11:32:56.329411 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2sg96\" (UniqueName: \"kubernetes.io/projected/829c6c7e-cc19-4f6d-a350-dea6f26f3436-kube-api-access-2sg96\") pod \"placement-operator-controller-manager-79d5ccc684-jjlnx\" (UID: \"829c6c7e-cc19-4f6d-a350-dea6f26f3436\") " pod="openstack-operators/placement-operator-controller-manager-79d5ccc684-jjlnx" Jan 26 11:32:56 crc kubenswrapper[4867]: I0126 11:32:56.329512 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/b2b3db26-bd1e-4178-ad15-3fb849d16a6c-cert\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b8549zcg2\" (UID: \"b2b3db26-bd1e-4178-ad15-3fb849d16a6c\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b8549zcg2" Jan 26 11:32:56 crc kubenswrapper[4867]: I0126 11:32:56.329699 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5s4xz\" (UniqueName: \"kubernetes.io/projected/ee79b4ff-ed5f-4660-9d36-2fd0c1840f84-kube-api-access-5s4xz\") pod \"swift-operator-controller-manager-547cbdb99f-r7pf7\" (UID: \"ee79b4ff-ed5f-4660-9d36-2fd0c1840f84\") " pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-r7pf7" Jan 26 11:32:56 crc kubenswrapper[4867]: I0126 11:32:56.329791 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sqrj5\" (UniqueName: \"kubernetes.io/projected/bb8ed5d8-1a97-4cc9-bf29-99b29c6a1975-kube-api-access-sqrj5\") pod \"ovn-operator-controller-manager-6f75f45d54-rsv5q\" 
(UID: \"bb8ed5d8-1a97-4cc9-bf29-99b29c6a1975\") " pod="openstack-operators/ovn-operator-controller-manager-6f75f45d54-rsv5q" Jan 26 11:32:56 crc kubenswrapper[4867]: I0126 11:32:56.329847 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fjddt\" (UniqueName: \"kubernetes.io/projected/4009a85d-3728-420e-b7db-70f8b41587ff-kube-api-access-fjddt\") pod \"test-operator-controller-manager-69797bbcbd-n6zwx\" (UID: \"4009a85d-3728-420e-b7db-70f8b41587ff\") " pod="openstack-operators/test-operator-controller-manager-69797bbcbd-n6zwx" Jan 26 11:32:56 crc kubenswrapper[4867]: E0126 11:32:56.329899 4867 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 26 11:32:56 crc kubenswrapper[4867]: I0126 11:32:56.329921 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k2jf2\" (UniqueName: \"kubernetes.io/projected/10f19670-4fbf-42ee-b54c-5317af0b0c00-kube-api-access-k2jf2\") pod \"telemetry-operator-controller-manager-85cd9769bb-c7klk\" (UID: \"10f19670-4fbf-42ee-b54c-5317af0b0c00\") " pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-c7klk" Jan 26 11:32:56 crc kubenswrapper[4867]: E0126 11:32:56.329973 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b2b3db26-bd1e-4178-ad15-3fb849d16a6c-cert podName:b2b3db26-bd1e-4178-ad15-3fb849d16a6c nodeName:}" failed. No retries permitted until 2026-01-26 11:32:56.82995297 +0000 UTC m=+926.528527880 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/b2b3db26-bd1e-4178-ad15-3fb849d16a6c-cert") pod "openstack-baremetal-operator-controller-manager-6b68b8b8549zcg2" (UID: "b2b3db26-bd1e-4178-ad15-3fb849d16a6c") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 26 11:32:56 crc kubenswrapper[4867]: I0126 11:32:56.330008 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gk5wc\" (UniqueName: \"kubernetes.io/projected/799c2d45-a054-4971-a87e-ad3b620cb2c5-kube-api-access-gk5wc\") pod \"watcher-operator-controller-manager-564965969-df52v\" (UID: \"799c2d45-a054-4971-a87e-ad3b620cb2c5\") " pod="openstack-operators/watcher-operator-controller-manager-564965969-df52v" Jan 26 11:32:56 crc kubenswrapper[4867]: I0126 11:32:56.330061 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fmwvh\" (UniqueName: \"kubernetes.io/projected/b2b3db26-bd1e-4178-ad15-3fb849d16a6c-kube-api-access-fmwvh\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b8549zcg2\" (UID: \"b2b3db26-bd1e-4178-ad15-3fb849d16a6c\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b8549zcg2" Jan 26 11:32:56 crc kubenswrapper[4867]: I0126 11:32:56.373393 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sqrj5\" (UniqueName: \"kubernetes.io/projected/bb8ed5d8-1a97-4cc9-bf29-99b29c6a1975-kube-api-access-sqrj5\") pod \"ovn-operator-controller-manager-6f75f45d54-rsv5q\" (UID: \"bb8ed5d8-1a97-4cc9-bf29-99b29c6a1975\") " pod="openstack-operators/ovn-operator-controller-manager-6f75f45d54-rsv5q" Jan 26 11:32:56 crc kubenswrapper[4867]: I0126 11:32:56.373712 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2sg96\" (UniqueName: \"kubernetes.io/projected/829c6c7e-cc19-4f6d-a350-dea6f26f3436-kube-api-access-2sg96\") pod 
\"placement-operator-controller-manager-79d5ccc684-jjlnx\" (UID: \"829c6c7e-cc19-4f6d-a350-dea6f26f3436\") " pod="openstack-operators/placement-operator-controller-manager-79d5ccc684-jjlnx" Jan 26 11:32:56 crc kubenswrapper[4867]: I0126 11:32:56.375663 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5s4xz\" (UniqueName: \"kubernetes.io/projected/ee79b4ff-ed5f-4660-9d36-2fd0c1840f84-kube-api-access-5s4xz\") pod \"swift-operator-controller-manager-547cbdb99f-r7pf7\" (UID: \"ee79b4ff-ed5f-4660-9d36-2fd0c1840f84\") " pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-r7pf7" Jan 26 11:32:56 crc kubenswrapper[4867]: I0126 11:32:56.380462 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fmwvh\" (UniqueName: \"kubernetes.io/projected/b2b3db26-bd1e-4178-ad15-3fb849d16a6c-kube-api-access-fmwvh\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b8549zcg2\" (UID: \"b2b3db26-bd1e-4178-ad15-3fb849d16a6c\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b8549zcg2" Jan 26 11:32:56 crc kubenswrapper[4867]: I0126 11:32:56.418831 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-manager-7d65646bb4-6hkx8"] Jan 26 11:32:56 crc kubenswrapper[4867]: I0126 11:32:56.419844 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-7d65646bb4-6hkx8" Jan 26 11:32:56 crc kubenswrapper[4867]: I0126 11:32:56.421912 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-manager-dockercfg-xkrkl" Jan 26 11:32:56 crc kubenswrapper[4867]: I0126 11:32:56.432048 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"metrics-server-cert" Jan 26 11:32:56 crc kubenswrapper[4867]: I0126 11:32:56.432346 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"webhook-server-cert" Jan 26 11:32:56 crc kubenswrapper[4867]: I0126 11:32:56.432470 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-7d65646bb4-6hkx8"] Jan 26 11:32:56 crc kubenswrapper[4867]: I0126 11:32:56.449629 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fjddt\" (UniqueName: \"kubernetes.io/projected/4009a85d-3728-420e-b7db-70f8b41587ff-kube-api-access-fjddt\") pod \"test-operator-controller-manager-69797bbcbd-n6zwx\" (UID: \"4009a85d-3728-420e-b7db-70f8b41587ff\") " pod="openstack-operators/test-operator-controller-manager-69797bbcbd-n6zwx" Jan 26 11:32:56 crc kubenswrapper[4867]: I0126 11:32:56.449723 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k2jf2\" (UniqueName: \"kubernetes.io/projected/10f19670-4fbf-42ee-b54c-5317af0b0c00-kube-api-access-k2jf2\") pod \"telemetry-operator-controller-manager-85cd9769bb-c7klk\" (UID: \"10f19670-4fbf-42ee-b54c-5317af0b0c00\") " pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-c7klk" Jan 26 11:32:56 crc kubenswrapper[4867]: I0126 11:32:56.449777 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gk5wc\" (UniqueName: 
\"kubernetes.io/projected/799c2d45-a054-4971-a87e-ad3b620cb2c5-kube-api-access-gk5wc\") pod \"watcher-operator-controller-manager-564965969-df52v\" (UID: \"799c2d45-a054-4971-a87e-ad3b620cb2c5\") " pod="openstack-operators/watcher-operator-controller-manager-564965969-df52v" Jan 26 11:32:56 crc kubenswrapper[4867]: I0126 11:32:56.468148 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-7f86f8796f-rgg4g"] Jan 26 11:32:56 crc kubenswrapper[4867]: I0126 11:32:56.474892 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-lcn9l"] Jan 26 11:32:56 crc kubenswrapper[4867]: I0126 11:32:56.483620 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-lcn9l"] Jan 26 11:32:56 crc kubenswrapper[4867]: I0126 11:32:56.483787 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-lcn9l" Jan 26 11:32:56 crc kubenswrapper[4867]: I0126 11:32:56.487028 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"rabbitmq-cluster-operator-controller-manager-dockercfg-t4xkr" Jan 26 11:32:56 crc kubenswrapper[4867]: I0126 11:32:56.496613 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fjddt\" (UniqueName: \"kubernetes.io/projected/4009a85d-3728-420e-b7db-70f8b41587ff-kube-api-access-fjddt\") pod \"test-operator-controller-manager-69797bbcbd-n6zwx\" (UID: \"4009a85d-3728-420e-b7db-70f8b41587ff\") " pod="openstack-operators/test-operator-controller-manager-69797bbcbd-n6zwx" Jan 26 11:32:56 crc kubenswrapper[4867]: I0126 11:32:56.537668 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k2jf2\" (UniqueName: 
\"kubernetes.io/projected/10f19670-4fbf-42ee-b54c-5317af0b0c00-kube-api-access-k2jf2\") pod \"telemetry-operator-controller-manager-85cd9769bb-c7klk\" (UID: \"10f19670-4fbf-42ee-b54c-5317af0b0c00\") " pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-c7klk" Jan 26 11:32:56 crc kubenswrapper[4867]: I0126 11:32:56.538960 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gk5wc\" (UniqueName: \"kubernetes.io/projected/799c2d45-a054-4971-a87e-ad3b620cb2c5-kube-api-access-gk5wc\") pod \"watcher-operator-controller-manager-564965969-df52v\" (UID: \"799c2d45-a054-4971-a87e-ad3b620cb2c5\") " pod="openstack-operators/watcher-operator-controller-manager-564965969-df52v" Jan 26 11:32:56 crc kubenswrapper[4867]: I0126 11:32:56.546561 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-6f75f45d54-rsv5q" Jan 26 11:32:56 crc kubenswrapper[4867]: I0126 11:32:56.551580 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/dc30069e-52ed-46a5-9dc9-4558c856149e-webhook-certs\") pod \"openstack-operator-controller-manager-7d65646bb4-6hkx8\" (UID: \"dc30069e-52ed-46a5-9dc9-4558c856149e\") " pod="openstack-operators/openstack-operator-controller-manager-7d65646bb4-6hkx8" Jan 26 11:32:56 crc kubenswrapper[4867]: I0126 11:32:56.551692 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x9sfk\" (UniqueName: \"kubernetes.io/projected/ccccb13a-d387-4515-83c6-ea24a070a12e-kube-api-access-x9sfk\") pod \"rabbitmq-cluster-operator-manager-668c99d594-lcn9l\" (UID: \"ccccb13a-d387-4515-83c6-ea24a070a12e\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-lcn9l" Jan 26 11:32:56 crc kubenswrapper[4867]: I0126 11:32:56.551804 4867 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8mqrt\" (UniqueName: \"kubernetes.io/projected/dc30069e-52ed-46a5-9dc9-4558c856149e-kube-api-access-8mqrt\") pod \"openstack-operator-controller-manager-7d65646bb4-6hkx8\" (UID: \"dc30069e-52ed-46a5-9dc9-4558c856149e\") " pod="openstack-operators/openstack-operator-controller-manager-7d65646bb4-6hkx8" Jan 26 11:32:56 crc kubenswrapper[4867]: I0126 11:32:56.551883 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/dc30069e-52ed-46a5-9dc9-4558c856149e-metrics-certs\") pod \"openstack-operator-controller-manager-7d65646bb4-6hkx8\" (UID: \"dc30069e-52ed-46a5-9dc9-4558c856149e\") " pod="openstack-operators/openstack-operator-controller-manager-7d65646bb4-6hkx8" Jan 26 11:32:56 crc kubenswrapper[4867]: I0126 11:32:56.569615 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-79d5ccc684-jjlnx" Jan 26 11:32:56 crc kubenswrapper[4867]: I0126 11:32:56.616430 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-r7pf7" Jan 26 11:32:56 crc kubenswrapper[4867]: I0126 11:32:56.624129 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-n6zwx" Jan 26 11:32:56 crc kubenswrapper[4867]: I0126 11:32:56.662102 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/dc30069e-52ed-46a5-9dc9-4558c856149e-webhook-certs\") pod \"openstack-operator-controller-manager-7d65646bb4-6hkx8\" (UID: \"dc30069e-52ed-46a5-9dc9-4558c856149e\") " pod="openstack-operators/openstack-operator-controller-manager-7d65646bb4-6hkx8" Jan 26 11:32:56 crc kubenswrapper[4867]: I0126 11:32:56.662341 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x9sfk\" (UniqueName: \"kubernetes.io/projected/ccccb13a-d387-4515-83c6-ea24a070a12e-kube-api-access-x9sfk\") pod \"rabbitmq-cluster-operator-manager-668c99d594-lcn9l\" (UID: \"ccccb13a-d387-4515-83c6-ea24a070a12e\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-lcn9l" Jan 26 11:32:56 crc kubenswrapper[4867]: I0126 11:32:56.662587 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8mqrt\" (UniqueName: \"kubernetes.io/projected/dc30069e-52ed-46a5-9dc9-4558c856149e-kube-api-access-8mqrt\") pod \"openstack-operator-controller-manager-7d65646bb4-6hkx8\" (UID: \"dc30069e-52ed-46a5-9dc9-4558c856149e\") " pod="openstack-operators/openstack-operator-controller-manager-7d65646bb4-6hkx8" Jan 26 11:32:56 crc kubenswrapper[4867]: I0126 11:32:56.662679 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/dc30069e-52ed-46a5-9dc9-4558c856149e-metrics-certs\") pod \"openstack-operator-controller-manager-7d65646bb4-6hkx8\" (UID: \"dc30069e-52ed-46a5-9dc9-4558c856149e\") " pod="openstack-operators/openstack-operator-controller-manager-7d65646bb4-6hkx8" Jan 26 11:32:56 crc kubenswrapper[4867]: E0126 11:32:56.664199 4867 secret.go:188] 
Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 26 11:32:56 crc kubenswrapper[4867]: E0126 11:32:56.664298 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/dc30069e-52ed-46a5-9dc9-4558c856149e-webhook-certs podName:dc30069e-52ed-46a5-9dc9-4558c856149e nodeName:}" failed. No retries permitted until 2026-01-26 11:32:57.164277383 +0000 UTC m=+926.862852293 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/dc30069e-52ed-46a5-9dc9-4558c856149e-webhook-certs") pod "openstack-operator-controller-manager-7d65646bb4-6hkx8" (UID: "dc30069e-52ed-46a5-9dc9-4558c856149e") : secret "webhook-server-cert" not found Jan 26 11:32:56 crc kubenswrapper[4867]: E0126 11:32:56.665116 4867 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 26 11:32:56 crc kubenswrapper[4867]: E0126 11:32:56.666052 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/dc30069e-52ed-46a5-9dc9-4558c856149e-metrics-certs podName:dc30069e-52ed-46a5-9dc9-4558c856149e nodeName:}" failed. No retries permitted until 2026-01-26 11:32:57.165953761 +0000 UTC m=+926.864528671 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/dc30069e-52ed-46a5-9dc9-4558c856149e-metrics-certs") pod "openstack-operator-controller-manager-7d65646bb4-6hkx8" (UID: "dc30069e-52ed-46a5-9dc9-4558c856149e") : secret "metrics-server-cert" not found Jan 26 11:32:56 crc kubenswrapper[4867]: I0126 11:32:56.666328 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-c7klk" Jan 26 11:32:56 crc kubenswrapper[4867]: I0126 11:32:56.679683 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-78c6999f6f-5s6fg"] Jan 26 11:32:56 crc kubenswrapper[4867]: I0126 11:32:56.680073 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-7478f7dbf9-ccp9p"] Jan 26 11:32:56 crc kubenswrapper[4867]: I0126 11:32:56.689583 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x9sfk\" (UniqueName: \"kubernetes.io/projected/ccccb13a-d387-4515-83c6-ea24a070a12e-kube-api-access-x9sfk\") pod \"rabbitmq-cluster-operator-manager-668c99d594-lcn9l\" (UID: \"ccccb13a-d387-4515-83c6-ea24a070a12e\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-lcn9l" Jan 26 11:32:56 crc kubenswrapper[4867]: I0126 11:32:56.700651 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8mqrt\" (UniqueName: \"kubernetes.io/projected/dc30069e-52ed-46a5-9dc9-4558c856149e-kube-api-access-8mqrt\") pod \"openstack-operator-controller-manager-7d65646bb4-6hkx8\" (UID: \"dc30069e-52ed-46a5-9dc9-4558c856149e\") " pod="openstack-operators/openstack-operator-controller-manager-7d65646bb4-6hkx8" Jan 26 11:32:56 crc kubenswrapper[4867]: W0126 11:32:56.714404 4867 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3eb62ea0_8291_49ec_aa8d_cb40ba93ecc3.slice/crio-9b225b72f4aab1f1d4fab01e896208f0ad94ad986395691b8d55eae1320eaa96 WatchSource:0}: Error finding container 9b225b72f4aab1f1d4fab01e896208f0ad94ad986395691b8d55eae1320eaa96: Status 404 returned error can't find the container with id 9b225b72f4aab1f1d4fab01e896208f0ad94ad986395691b8d55eae1320eaa96 Jan 26 11:32:56 crc kubenswrapper[4867]: I0126 
11:32:56.825046 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-564965969-df52v" Jan 26 11:32:56 crc kubenswrapper[4867]: I0126 11:32:56.867635 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/b2b3db26-bd1e-4178-ad15-3fb849d16a6c-cert\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b8549zcg2\" (UID: \"b2b3db26-bd1e-4178-ad15-3fb849d16a6c\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b8549zcg2" Jan 26 11:32:56 crc kubenswrapper[4867]: E0126 11:32:56.867861 4867 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 26 11:32:56 crc kubenswrapper[4867]: E0126 11:32:56.868956 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b2b3db26-bd1e-4178-ad15-3fb849d16a6c-cert podName:b2b3db26-bd1e-4178-ad15-3fb849d16a6c nodeName:}" failed. No retries permitted until 2026-01-26 11:32:57.868927817 +0000 UTC m=+927.567502727 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/b2b3db26-bd1e-4178-ad15-3fb849d16a6c-cert") pod "openstack-baremetal-operator-controller-manager-6b68b8b8549zcg2" (UID: "b2b3db26-bd1e-4178-ad15-3fb849d16a6c") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 26 11:32:56 crc kubenswrapper[4867]: I0126 11:32:56.894793 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-5s6fg" event={"ID":"3eb62ea0-8291-49ec-aa8d-cb40ba93ecc3","Type":"ContainerStarted","Data":"9b225b72f4aab1f1d4fab01e896208f0ad94ad986395691b8d55eae1320eaa96"} Jan 26 11:32:56 crc kubenswrapper[4867]: I0126 11:32:56.900282 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-7f86f8796f-rgg4g" event={"ID":"34c3c36b-d905-4349-8909-bd15951aca68","Type":"ContainerStarted","Data":"9b1a731a437d8740e351dc3d911d20b95133a9fc3d60daa4dcfb2ebb0f971191"} Jan 26 11:32:56 crc kubenswrapper[4867]: I0126 11:32:56.906912 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-7478f7dbf9-ccp9p" event={"ID":"10ae2757-3e84-4ad1-8459-fca684db2964","Type":"ContainerStarted","Data":"aa194a14d03892214eb96abd26e02d445572f357a85646ecf76e67bb69ac77e1"} Jan 26 11:32:56 crc kubenswrapper[4867]: I0126 11:32:56.945074 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-lcn9l" Jan 26 11:32:57 crc kubenswrapper[4867]: I0126 11:32:57.055787 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-78fdd796fd-gthnl"] Jan 26 11:32:57 crc kubenswrapper[4867]: I0126 11:32:57.137801 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-594c8c9d5d-gh4fm"] Jan 26 11:32:57 crc kubenswrapper[4867]: I0126 11:32:57.172857 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/dc30069e-52ed-46a5-9dc9-4558c856149e-webhook-certs\") pod \"openstack-operator-controller-manager-7d65646bb4-6hkx8\" (UID: \"dc30069e-52ed-46a5-9dc9-4558c856149e\") " pod="openstack-operators/openstack-operator-controller-manager-7d65646bb4-6hkx8" Jan 26 11:32:57 crc kubenswrapper[4867]: I0126 11:32:57.172954 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/dc30069e-52ed-46a5-9dc9-4558c856149e-metrics-certs\") pod \"openstack-operator-controller-manager-7d65646bb4-6hkx8\" (UID: \"dc30069e-52ed-46a5-9dc9-4558c856149e\") " pod="openstack-operators/openstack-operator-controller-manager-7d65646bb4-6hkx8" Jan 26 11:32:57 crc kubenswrapper[4867]: E0126 11:32:57.173128 4867 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 26 11:32:57 crc kubenswrapper[4867]: E0126 11:32:57.173233 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/dc30069e-52ed-46a5-9dc9-4558c856149e-webhook-certs podName:dc30069e-52ed-46a5-9dc9-4558c856149e nodeName:}" failed. No retries permitted until 2026-01-26 11:32:58.173196864 +0000 UTC m=+927.871771774 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/dc30069e-52ed-46a5-9dc9-4558c856149e-webhook-certs") pod "openstack-operator-controller-manager-7d65646bb4-6hkx8" (UID: "dc30069e-52ed-46a5-9dc9-4558c856149e") : secret "webhook-server-cert" not found Jan 26 11:32:57 crc kubenswrapper[4867]: E0126 11:32:57.173138 4867 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 26 11:32:57 crc kubenswrapper[4867]: E0126 11:32:57.173279 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/dc30069e-52ed-46a5-9dc9-4558c856149e-metrics-certs podName:dc30069e-52ed-46a5-9dc9-4558c856149e nodeName:}" failed. No retries permitted until 2026-01-26 11:32:58.173271056 +0000 UTC m=+927.871845966 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/dc30069e-52ed-46a5-9dc9-4558c856149e-metrics-certs") pod "openstack-operator-controller-manager-7d65646bb4-6hkx8" (UID: "dc30069e-52ed-46a5-9dc9-4558c856149e") : secret "metrics-server-cert" not found Jan 26 11:32:57 crc kubenswrapper[4867]: I0126 11:32:57.274826 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/1dce245d-cfd7-440a-9797-2e8c05641673-cert\") pod \"infra-operator-controller-manager-758868c854-chnbm\" (UID: \"1dce245d-cfd7-440a-9797-2e8c05641673\") " pod="openstack-operators/infra-operator-controller-manager-758868c854-chnbm" Jan 26 11:32:57 crc kubenswrapper[4867]: E0126 11:32:57.275194 4867 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 26 11:32:57 crc kubenswrapper[4867]: E0126 11:32:57.275275 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1dce245d-cfd7-440a-9797-2e8c05641673-cert 
podName:1dce245d-cfd7-440a-9797-2e8c05641673 nodeName:}" failed. No retries permitted until 2026-01-26 11:32:59.275256828 +0000 UTC m=+928.973831738 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/1dce245d-cfd7-440a-9797-2e8c05641673-cert") pod "infra-operator-controller-manager-758868c854-chnbm" (UID: "1dce245d-cfd7-440a-9797-2e8c05641673") : secret "infra-operator-webhook-server-cert" not found Jan 26 11:32:57 crc kubenswrapper[4867]: I0126 11:32:57.526260 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-77d5c5b54f-pgqvv"] Jan 26 11:32:57 crc kubenswrapper[4867]: W0126 11:32:57.528821 4867 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5402225a_cbc7_4b7c_8036_9b8159baee31.slice/crio-104703e0a69ea883057fc25c6842a3204f9ef75575118ea5a17f75581722ff9b WatchSource:0}: Error finding container 104703e0a69ea883057fc25c6842a3204f9ef75575118ea5a17f75581722ff9b: Status 404 returned error can't find the container with id 104703e0a69ea883057fc25c6842a3204f9ef75575118ea5a17f75581722ff9b Jan 26 11:32:57 crc kubenswrapper[4867]: I0126 11:32:57.564141 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-b45d7bf98-8w8hc"] Jan 26 11:32:57 crc kubenswrapper[4867]: W0126 11:32:57.569615 4867 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb1c6af74_51a5_45bb_afed_9b8b19a5c7df.slice/crio-446cb65239b177e63ef6595bea07b1904ddbd4cc636e886635226fe7e5c53ba0 WatchSource:0}: Error finding container 446cb65239b177e63ef6595bea07b1904ddbd4cc636e886635226fe7e5c53ba0: Status 404 returned error can't find the container with id 446cb65239b177e63ef6595bea07b1904ddbd4cc636e886635226fe7e5c53ba0 Jan 26 11:32:57 crc kubenswrapper[4867]: I0126 
11:32:57.758924 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-5f4cd88d46-z7djp"] Jan 26 11:32:57 crc kubenswrapper[4867]: I0126 11:32:57.772828 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-547cbdb99f-r7pf7"] Jan 26 11:32:57 crc kubenswrapper[4867]: I0126 11:32:57.789723 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-khq8w"] Jan 26 11:32:57 crc kubenswrapper[4867]: W0126 11:32:57.808439 4867 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9da13f82_2fca_4922_8b27_b11d702897ff.slice/crio-13e1ea0b764f8bd2cfac58deed51d741442599aa16bd5ccff7d6167ae90f8c39 WatchSource:0}: Error finding container 13e1ea0b764f8bd2cfac58deed51d741442599aa16bd5ccff7d6167ae90f8c39: Status 404 returned error can't find the container with id 13e1ea0b764f8bd2cfac58deed51d741442599aa16bd5ccff7d6167ae90f8c39 Jan 26 11:32:57 crc kubenswrapper[4867]: I0126 11:32:57.818068 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-78d58447c5-wz989"] Jan 26 11:32:57 crc kubenswrapper[4867]: I0126 11:32:57.829178 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-lcn9l"] Jan 26 11:32:57 crc kubenswrapper[4867]: W0126 11:32:57.830110 4867 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podde2f9a68_7384_47b5_a16d_da28e04440de.slice/crio-4030d855245eb1568513a4f073f4c7937758036397de6e423db66410fac5892f WatchSource:0}: Error finding container 4030d855245eb1568513a4f073f4c7937758036397de6e423db66410fac5892f: Status 404 returned error can't find the container with id 
4030d855245eb1568513a4f073f4c7937758036397de6e423db66410fac5892f Jan 26 11:32:57 crc kubenswrapper[4867]: I0126 11:32:57.837244 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-b8b6d4659-tzb4g"] Jan 26 11:32:57 crc kubenswrapper[4867]: I0126 11:32:57.843688 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-598d88d885-fjpln"] Jan 26 11:32:57 crc kubenswrapper[4867]: I0126 11:32:57.846716 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-7bdb645866-v4pfk"] Jan 26 11:32:57 crc kubenswrapper[4867]: I0126 11:32:57.884148 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/b2b3db26-bd1e-4178-ad15-3fb849d16a6c-cert\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b8549zcg2\" (UID: \"b2b3db26-bd1e-4178-ad15-3fb849d16a6c\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b8549zcg2" Jan 26 11:32:57 crc kubenswrapper[4867]: E0126 11:32:57.884379 4867 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 26 11:32:57 crc kubenswrapper[4867]: E0126 11:32:57.884444 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b2b3db26-bd1e-4178-ad15-3fb849d16a6c-cert podName:b2b3db26-bd1e-4178-ad15-3fb849d16a6c nodeName:}" failed. No retries permitted until 2026-01-26 11:32:59.884424669 +0000 UTC m=+929.582999579 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/b2b3db26-bd1e-4178-ad15-3fb849d16a6c-cert") pod "openstack-baremetal-operator-controller-manager-6b68b8b8549zcg2" (UID: "b2b3db26-bd1e-4178-ad15-3fb849d16a6c") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 26 11:32:57 crc kubenswrapper[4867]: I0126 11:32:57.928840 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-tzb4g" event={"ID":"9da13f82-2fca-4922-8b27-b11d702897ff","Type":"ContainerStarted","Data":"13e1ea0b764f8bd2cfac58deed51d741442599aa16bd5ccff7d6167ae90f8c39"} Jan 26 11:32:57 crc kubenswrapper[4867]: I0126 11:32:57.930034 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-r7pf7" event={"ID":"ee79b4ff-ed5f-4660-9d36-2fd0c1840f84","Type":"ContainerStarted","Data":"917b94175af55a74ef344f2d0863185b8e26bcbf53df5f3fd7884a3dcc66c677"} Jan 26 11:32:57 crc kubenswrapper[4867]: I0126 11:32:57.931776 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-khq8w" event={"ID":"2034ae77-372d-473a-b038-83ee4c3720c0","Type":"ContainerStarted","Data":"e5b02f0d15c1afdd4b028d1f92a79052fef1e228c11f47ebf5abaf00149aa072"} Jan 26 11:32:57 crc kubenswrapper[4867]: I0126 11:32:57.933070 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-8w8hc" event={"ID":"b1c6af74-51a5-45bb-afed-9b8b19a5c7df","Type":"ContainerStarted","Data":"446cb65239b177e63ef6595bea07b1904ddbd4cc636e886635226fe7e5c53ba0"} Jan 26 11:32:57 crc kubenswrapper[4867]: I0126 11:32:57.933985 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-gh4fm" 
event={"ID":"073c6f18-4275-4233-8308-39307e2cc0c7","Type":"ContainerStarted","Data":"5d1ab3c72e05c604d3911ec6490335de47ab6ae1173bf074c73209d2bcf9d8af"} Jan 26 11:32:57 crc kubenswrapper[4867]: I0126 11:32:57.934975 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-lcn9l" event={"ID":"ccccb13a-d387-4515-83c6-ea24a070a12e","Type":"ContainerStarted","Data":"f067b696e5d50c3940afe77bfccdea596eee4358dea38191cd254fd99e5196e2"} Jan 26 11:32:57 crc kubenswrapper[4867]: I0126 11:32:57.936027 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-5f4cd88d46-z7djp" event={"ID":"99737677-080c-4f1a-aa91-e5162fe5f25d","Type":"ContainerStarted","Data":"da8371d2b81d2fd39d1763f259aae098e44ab1847d38a02448a9659e0fec16bf"} Jan 26 11:32:57 crc kubenswrapper[4867]: I0126 11:32:57.936873 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-pgqvv" event={"ID":"5402225a-cbc7-4b7c-8036-9b8159baee31","Type":"ContainerStarted","Data":"104703e0a69ea883057fc25c6842a3204f9ef75575118ea5a17f75581722ff9b"} Jan 26 11:32:57 crc kubenswrapper[4867]: I0126 11:32:57.938560 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-7bdb645866-v4pfk" event={"ID":"de2f9a68-7384-47b5-a16d-da28e04440de","Type":"ContainerStarted","Data":"4030d855245eb1568513a4f073f4c7937758036397de6e423db66410fac5892f"} Jan 26 11:32:57 crc kubenswrapper[4867]: I0126 11:32:57.939751 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-gthnl" event={"ID":"4f33548d-3a14-41f4-8447-feb86b7cf366","Type":"ContainerStarted","Data":"936f0167f94f09efe3aa9dd9252479dbbf767a578f8a62aeedbe89ccb86f30b9"} Jan 26 11:32:57 crc kubenswrapper[4867]: I0126 11:32:57.941620 4867 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openstack-operators/ironic-operator-controller-manager-598d88d885-fjpln" event={"ID":"242c7502-97f2-4ac9-96ba-17b04f96a5b5","Type":"ContainerStarted","Data":"6526af1abeb3f94121a0090ff9b689f86780bf71eea5bfb911b2b22b94c944a8"} Jan 26 11:32:57 crc kubenswrapper[4867]: I0126 11:32:57.942613 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-78d58447c5-wz989" event={"ID":"c9a978c7-9efb-43dc-830c-31020be6121a","Type":"ContainerStarted","Data":"5a629aa55e7085dc290fbdc6a3c4efbc1ce025b5e726ddde9e538b1acecd2091"} Jan 26 11:32:58 crc kubenswrapper[4867]: I0126 11:32:58.012366 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-564965969-df52v"] Jan 26 11:32:58 crc kubenswrapper[4867]: I0126 11:32:58.043101 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-69797bbcbd-n6zwx"] Jan 26 11:32:58 crc kubenswrapper[4867]: I0126 11:32:58.055885 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-79d5ccc684-jjlnx"] Jan 26 11:32:58 crc kubenswrapper[4867]: I0126 11:32:58.066816 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-6f75f45d54-rsv5q"] Jan 26 11:32:58 crc kubenswrapper[4867]: E0126 11:32:58.071587 4867 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/ovn-operator@sha256:fa46fc14710961e6b4a76a3522dca3aa3cfa71436c7cf7ade533d3712822f327,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-sqrj5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovn-operator-controller-manager-6f75f45d54-rsv5q_openstack-operators(bb8ed5d8-1a97-4cc9-bf29-99b29c6a1975): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 26 11:32:58 crc kubenswrapper[4867]: E0126 11:32:58.072776 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/ovn-operator-controller-manager-6f75f45d54-rsv5q" podUID="bb8ed5d8-1a97-4cc9-bf29-99b29c6a1975" Jan 26 11:32:58 crc kubenswrapper[4867]: E0126 11:32:58.076764 4867 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/test-operator@sha256:c8dde42dafd41026ed2e4cfc26efc0fff63c4ba9d31326ae7dc644ccceaafa9d,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-fjddt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod test-operator-controller-manager-69797bbcbd-n6zwx_openstack-operators(4009a85d-3728-420e-b7db-70f8b41587ff): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 26 11:32:58 crc kubenswrapper[4867]: E0126 11:32:58.079328 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-n6zwx" podUID="4009a85d-3728-420e-b7db-70f8b41587ff" Jan 26 11:32:58 crc kubenswrapper[4867]: I0126 11:32:58.099582 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-85cd9769bb-c7klk"] Jan 26 11:32:58 crc kubenswrapper[4867]: E0126 11:32:58.105725 4867 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/telemetry-operator@sha256:e02722d7581bfe1c5fc13e2fa6811d8665102ba86635c77547abf6b933cde127,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-k2jf2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod telemetry-operator-controller-manager-85cd9769bb-c7klk_openstack-operators(10f19670-4fbf-42ee-b54c-5317af0b0c00): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 26 11:32:58 crc kubenswrapper[4867]: E0126 11:32:58.106932 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-c7klk" podUID="10f19670-4fbf-42ee-b54c-5317af0b0c00" Jan 26 11:32:58 crc kubenswrapper[4867]: E0126 11:32:58.190818 4867 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 26 11:32:58 crc kubenswrapper[4867]: I0126 11:32:58.191705 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/dc30069e-52ed-46a5-9dc9-4558c856149e-webhook-certs\") pod \"openstack-operator-controller-manager-7d65646bb4-6hkx8\" (UID: \"dc30069e-52ed-46a5-9dc9-4558c856149e\") " pod="openstack-operators/openstack-operator-controller-manager-7d65646bb4-6hkx8" Jan 26 11:32:58 crc kubenswrapper[4867]: I0126 
11:32:58.191812 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/dc30069e-52ed-46a5-9dc9-4558c856149e-metrics-certs\") pod \"openstack-operator-controller-manager-7d65646bb4-6hkx8\" (UID: \"dc30069e-52ed-46a5-9dc9-4558c856149e\") " pod="openstack-operators/openstack-operator-controller-manager-7d65646bb4-6hkx8" Jan 26 11:32:58 crc kubenswrapper[4867]: E0126 11:32:58.191904 4867 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 26 11:32:58 crc kubenswrapper[4867]: E0126 11:32:58.191950 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/dc30069e-52ed-46a5-9dc9-4558c856149e-metrics-certs podName:dc30069e-52ed-46a5-9dc9-4558c856149e nodeName:}" failed. No retries permitted until 2026-01-26 11:33:00.191929301 +0000 UTC m=+929.890504211 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/dc30069e-52ed-46a5-9dc9-4558c856149e-metrics-certs") pod "openstack-operator-controller-manager-7d65646bb4-6hkx8" (UID: "dc30069e-52ed-46a5-9dc9-4558c856149e") : secret "metrics-server-cert" not found Jan 26 11:32:58 crc kubenswrapper[4867]: E0126 11:32:58.191971 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/dc30069e-52ed-46a5-9dc9-4558c856149e-webhook-certs podName:dc30069e-52ed-46a5-9dc9-4558c856149e nodeName:}" failed. No retries permitted until 2026-01-26 11:33:00.191963032 +0000 UTC m=+929.890537942 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/dc30069e-52ed-46a5-9dc9-4558c856149e-webhook-certs") pod "openstack-operator-controller-manager-7d65646bb4-6hkx8" (UID: "dc30069e-52ed-46a5-9dc9-4558c856149e") : secret "webhook-server-cert" not found Jan 26 11:32:59 crc kubenswrapper[4867]: I0126 11:32:59.034982 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-79d5ccc684-jjlnx" event={"ID":"829c6c7e-cc19-4f6d-a350-dea6f26f3436","Type":"ContainerStarted","Data":"77fd7a594089ba965a241a24d874909501897c2bcb949293c377ad5d88adf269"} Jan 26 11:32:59 crc kubenswrapper[4867]: I0126 11:32:59.038182 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-6f75f45d54-rsv5q" event={"ID":"bb8ed5d8-1a97-4cc9-bf29-99b29c6a1975","Type":"ContainerStarted","Data":"faf5e0367f57549fc50eb9c18a7bd2f472d08d164ca5c45b0a674342ac728479"} Jan 26 11:32:59 crc kubenswrapper[4867]: I0126 11:32:59.042213 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-c7klk" event={"ID":"10f19670-4fbf-42ee-b54c-5317af0b0c00","Type":"ContainerStarted","Data":"e262a797df6f2aa9d95026bf99f96d1a3f9bda6c81e3ce6049fc206246bcea47"} Jan 26 11:32:59 crc kubenswrapper[4867]: E0126 11:32:59.044043 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/telemetry-operator@sha256:e02722d7581bfe1c5fc13e2fa6811d8665102ba86635c77547abf6b933cde127\\\"\"" pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-c7klk" podUID="10f19670-4fbf-42ee-b54c-5317af0b0c00" Jan 26 11:32:59 crc kubenswrapper[4867]: E0126 11:32:59.044183 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with 
ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/ovn-operator@sha256:fa46fc14710961e6b4a76a3522dca3aa3cfa71436c7cf7ade533d3712822f327\\\"\"" pod="openstack-operators/ovn-operator-controller-manager-6f75f45d54-rsv5q" podUID="bb8ed5d8-1a97-4cc9-bf29-99b29c6a1975" Jan 26 11:32:59 crc kubenswrapper[4867]: I0126 11:32:59.045499 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-n6zwx" event={"ID":"4009a85d-3728-420e-b7db-70f8b41587ff","Type":"ContainerStarted","Data":"0232f9cf38e38aa3f3fcf92b1605de525e00e98ab802415a9d001209c44393c7"} Jan 26 11:32:59 crc kubenswrapper[4867]: E0126 11:32:59.047193 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/test-operator@sha256:c8dde42dafd41026ed2e4cfc26efc0fff63c4ba9d31326ae7dc644ccceaafa9d\\\"\"" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-n6zwx" podUID="4009a85d-3728-420e-b7db-70f8b41587ff" Jan 26 11:32:59 crc kubenswrapper[4867]: I0126 11:32:59.048817 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-564965969-df52v" event={"ID":"799c2d45-a054-4971-a87e-ad3b620cb2c5","Type":"ContainerStarted","Data":"5d98cb613ea0f3a5ffa17ba12f8c97713dd646e160404ae685ec81c8b5ff6316"} Jan 26 11:32:59 crc kubenswrapper[4867]: I0126 11:32:59.333714 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/1dce245d-cfd7-440a-9797-2e8c05641673-cert\") pod \"infra-operator-controller-manager-758868c854-chnbm\" (UID: \"1dce245d-cfd7-440a-9797-2e8c05641673\") " pod="openstack-operators/infra-operator-controller-manager-758868c854-chnbm" Jan 26 11:32:59 crc kubenswrapper[4867]: E0126 11:32:59.334127 4867 secret.go:188] Couldn't get secret 
openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 26 11:32:59 crc kubenswrapper[4867]: E0126 11:32:59.334236 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1dce245d-cfd7-440a-9797-2e8c05641673-cert podName:1dce245d-cfd7-440a-9797-2e8c05641673 nodeName:}" failed. No retries permitted until 2026-01-26 11:33:03.334192279 +0000 UTC m=+933.032767189 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/1dce245d-cfd7-440a-9797-2e8c05641673-cert") pod "infra-operator-controller-manager-758868c854-chnbm" (UID: "1dce245d-cfd7-440a-9797-2e8c05641673") : secret "infra-operator-webhook-server-cert" not found Jan 26 11:32:59 crc kubenswrapper[4867]: E0126 11:32:59.955914 4867 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 26 11:32:59 crc kubenswrapper[4867]: E0126 11:32:59.956345 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b2b3db26-bd1e-4178-ad15-3fb849d16a6c-cert podName:b2b3db26-bd1e-4178-ad15-3fb849d16a6c nodeName:}" failed. No retries permitted until 2026-01-26 11:33:03.956319138 +0000 UTC m=+933.654894048 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/b2b3db26-bd1e-4178-ad15-3fb849d16a6c-cert") pod "openstack-baremetal-operator-controller-manager-6b68b8b8549zcg2" (UID: "b2b3db26-bd1e-4178-ad15-3fb849d16a6c") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 26 11:32:59 crc kubenswrapper[4867]: I0126 11:32:59.956036 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/b2b3db26-bd1e-4178-ad15-3fb849d16a6c-cert\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b8549zcg2\" (UID: \"b2b3db26-bd1e-4178-ad15-3fb849d16a6c\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b8549zcg2" Jan 26 11:33:00 crc kubenswrapper[4867]: E0126 11:33:00.080423 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/telemetry-operator@sha256:e02722d7581bfe1c5fc13e2fa6811d8665102ba86635c77547abf6b933cde127\\\"\"" pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-c7klk" podUID="10f19670-4fbf-42ee-b54c-5317af0b0c00" Jan 26 11:33:00 crc kubenswrapper[4867]: E0126 11:33:00.082019 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/ovn-operator@sha256:fa46fc14710961e6b4a76a3522dca3aa3cfa71436c7cf7ade533d3712822f327\\\"\"" pod="openstack-operators/ovn-operator-controller-manager-6f75f45d54-rsv5q" podUID="bb8ed5d8-1a97-4cc9-bf29-99b29c6a1975" Jan 26 11:33:00 crc kubenswrapper[4867]: E0126 11:33:00.082159 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.io/openstack-k8s-operators/test-operator@sha256:c8dde42dafd41026ed2e4cfc26efc0fff63c4ba9d31326ae7dc644ccceaafa9d\\\"\"" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-n6zwx" podUID="4009a85d-3728-420e-b7db-70f8b41587ff" Jan 26 11:33:00 crc kubenswrapper[4867]: I0126 11:33:00.261900 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/dc30069e-52ed-46a5-9dc9-4558c856149e-metrics-certs\") pod \"openstack-operator-controller-manager-7d65646bb4-6hkx8\" (UID: \"dc30069e-52ed-46a5-9dc9-4558c856149e\") " pod="openstack-operators/openstack-operator-controller-manager-7d65646bb4-6hkx8" Jan 26 11:33:00 crc kubenswrapper[4867]: I0126 11:33:00.262045 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/dc30069e-52ed-46a5-9dc9-4558c856149e-webhook-certs\") pod \"openstack-operator-controller-manager-7d65646bb4-6hkx8\" (UID: \"dc30069e-52ed-46a5-9dc9-4558c856149e\") " pod="openstack-operators/openstack-operator-controller-manager-7d65646bb4-6hkx8" Jan 26 11:33:00 crc kubenswrapper[4867]: E0126 11:33:00.262249 4867 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 26 11:33:00 crc kubenswrapper[4867]: E0126 11:33:00.262286 4867 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 26 11:33:00 crc kubenswrapper[4867]: E0126 11:33:00.262317 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/dc30069e-52ed-46a5-9dc9-4558c856149e-webhook-certs podName:dc30069e-52ed-46a5-9dc9-4558c856149e nodeName:}" failed. No retries permitted until 2026-01-26 11:33:04.262298335 +0000 UTC m=+933.960873245 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/dc30069e-52ed-46a5-9dc9-4558c856149e-webhook-certs") pod "openstack-operator-controller-manager-7d65646bb4-6hkx8" (UID: "dc30069e-52ed-46a5-9dc9-4558c856149e") : secret "webhook-server-cert" not found Jan 26 11:33:00 crc kubenswrapper[4867]: E0126 11:33:00.262420 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/dc30069e-52ed-46a5-9dc9-4558c856149e-metrics-certs podName:dc30069e-52ed-46a5-9dc9-4558c856149e nodeName:}" failed. No retries permitted until 2026-01-26 11:33:04.262395778 +0000 UTC m=+933.960970688 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/dc30069e-52ed-46a5-9dc9-4558c856149e-metrics-certs") pod "openstack-operator-controller-manager-7d65646bb4-6hkx8" (UID: "dc30069e-52ed-46a5-9dc9-4558c856149e") : secret "metrics-server-cert" not found Jan 26 11:33:00 crc kubenswrapper[4867]: I0126 11:33:00.880205 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-cgkn2"] Jan 26 11:33:00 crc kubenswrapper[4867]: I0126 11:33:00.894254 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-cgkn2" Jan 26 11:33:00 crc kubenswrapper[4867]: I0126 11:33:00.917263 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-cgkn2"] Jan 26 11:33:00 crc kubenswrapper[4867]: I0126 11:33:00.978338 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rntn6\" (UniqueName: \"kubernetes.io/projected/de1a4ed9-07d2-4d13-9680-23d361cfff3f-kube-api-access-rntn6\") pod \"community-operators-cgkn2\" (UID: \"de1a4ed9-07d2-4d13-9680-23d361cfff3f\") " pod="openshift-marketplace/community-operators-cgkn2" Jan 26 11:33:00 crc kubenswrapper[4867]: I0126 11:33:00.978494 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/de1a4ed9-07d2-4d13-9680-23d361cfff3f-catalog-content\") pod \"community-operators-cgkn2\" (UID: \"de1a4ed9-07d2-4d13-9680-23d361cfff3f\") " pod="openshift-marketplace/community-operators-cgkn2" Jan 26 11:33:00 crc kubenswrapper[4867]: I0126 11:33:00.978525 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/de1a4ed9-07d2-4d13-9680-23d361cfff3f-utilities\") pod \"community-operators-cgkn2\" (UID: \"de1a4ed9-07d2-4d13-9680-23d361cfff3f\") " pod="openshift-marketplace/community-operators-cgkn2" Jan 26 11:33:01 crc kubenswrapper[4867]: I0126 11:33:01.087452 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rntn6\" (UniqueName: \"kubernetes.io/projected/de1a4ed9-07d2-4d13-9680-23d361cfff3f-kube-api-access-rntn6\") pod \"community-operators-cgkn2\" (UID: \"de1a4ed9-07d2-4d13-9680-23d361cfff3f\") " pod="openshift-marketplace/community-operators-cgkn2" Jan 26 11:33:01 crc kubenswrapper[4867]: I0126 11:33:01.089174 4867 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/de1a4ed9-07d2-4d13-9680-23d361cfff3f-catalog-content\") pod \"community-operators-cgkn2\" (UID: \"de1a4ed9-07d2-4d13-9680-23d361cfff3f\") " pod="openshift-marketplace/community-operators-cgkn2" Jan 26 11:33:01 crc kubenswrapper[4867]: I0126 11:33:01.089245 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/de1a4ed9-07d2-4d13-9680-23d361cfff3f-utilities\") pod \"community-operators-cgkn2\" (UID: \"de1a4ed9-07d2-4d13-9680-23d361cfff3f\") " pod="openshift-marketplace/community-operators-cgkn2" Jan 26 11:33:01 crc kubenswrapper[4867]: I0126 11:33:01.089967 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/de1a4ed9-07d2-4d13-9680-23d361cfff3f-utilities\") pod \"community-operators-cgkn2\" (UID: \"de1a4ed9-07d2-4d13-9680-23d361cfff3f\") " pod="openshift-marketplace/community-operators-cgkn2" Jan 26 11:33:01 crc kubenswrapper[4867]: I0126 11:33:01.090328 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/de1a4ed9-07d2-4d13-9680-23d361cfff3f-catalog-content\") pod \"community-operators-cgkn2\" (UID: \"de1a4ed9-07d2-4d13-9680-23d361cfff3f\") " pod="openshift-marketplace/community-operators-cgkn2" Jan 26 11:33:01 crc kubenswrapper[4867]: I0126 11:33:01.120595 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rntn6\" (UniqueName: \"kubernetes.io/projected/de1a4ed9-07d2-4d13-9680-23d361cfff3f-kube-api-access-rntn6\") pod \"community-operators-cgkn2\" (UID: \"de1a4ed9-07d2-4d13-9680-23d361cfff3f\") " pod="openshift-marketplace/community-operators-cgkn2" Jan 26 11:33:01 crc kubenswrapper[4867]: I0126 11:33:01.234752 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-cgkn2" Jan 26 11:33:03 crc kubenswrapper[4867]: I0126 11:33:03.338676 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/1dce245d-cfd7-440a-9797-2e8c05641673-cert\") pod \"infra-operator-controller-manager-758868c854-chnbm\" (UID: \"1dce245d-cfd7-440a-9797-2e8c05641673\") " pod="openstack-operators/infra-operator-controller-manager-758868c854-chnbm" Jan 26 11:33:03 crc kubenswrapper[4867]: E0126 11:33:03.338886 4867 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 26 11:33:03 crc kubenswrapper[4867]: E0126 11:33:03.339247 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1dce245d-cfd7-440a-9797-2e8c05641673-cert podName:1dce245d-cfd7-440a-9797-2e8c05641673 nodeName:}" failed. No retries permitted until 2026-01-26 11:33:11.339203622 +0000 UTC m=+941.037778532 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/1dce245d-cfd7-440a-9797-2e8c05641673-cert") pod "infra-operator-controller-manager-758868c854-chnbm" (UID: "1dce245d-cfd7-440a-9797-2e8c05641673") : secret "infra-operator-webhook-server-cert" not found Jan 26 11:33:04 crc kubenswrapper[4867]: I0126 11:33:04.056712 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/b2b3db26-bd1e-4178-ad15-3fb849d16a6c-cert\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b8549zcg2\" (UID: \"b2b3db26-bd1e-4178-ad15-3fb849d16a6c\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b8549zcg2" Jan 26 11:33:04 crc kubenswrapper[4867]: E0126 11:33:04.056964 4867 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 26 11:33:04 crc kubenswrapper[4867]: E0126 11:33:04.057356 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b2b3db26-bd1e-4178-ad15-3fb849d16a6c-cert podName:b2b3db26-bd1e-4178-ad15-3fb849d16a6c nodeName:}" failed. No retries permitted until 2026-01-26 11:33:12.057332249 +0000 UTC m=+941.755907159 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/b2b3db26-bd1e-4178-ad15-3fb849d16a6c-cert") pod "openstack-baremetal-operator-controller-manager-6b68b8b8549zcg2" (UID: "b2b3db26-bd1e-4178-ad15-3fb849d16a6c") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 26 11:33:04 crc kubenswrapper[4867]: I0126 11:33:04.264384 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/dc30069e-52ed-46a5-9dc9-4558c856149e-metrics-certs\") pod \"openstack-operator-controller-manager-7d65646bb4-6hkx8\" (UID: \"dc30069e-52ed-46a5-9dc9-4558c856149e\") " pod="openstack-operators/openstack-operator-controller-manager-7d65646bb4-6hkx8" Jan 26 11:33:04 crc kubenswrapper[4867]: I0126 11:33:04.264509 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/dc30069e-52ed-46a5-9dc9-4558c856149e-webhook-certs\") pod \"openstack-operator-controller-manager-7d65646bb4-6hkx8\" (UID: \"dc30069e-52ed-46a5-9dc9-4558c856149e\") " pod="openstack-operators/openstack-operator-controller-manager-7d65646bb4-6hkx8" Jan 26 11:33:04 crc kubenswrapper[4867]: E0126 11:33:04.264585 4867 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 26 11:33:04 crc kubenswrapper[4867]: E0126 11:33:04.264714 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/dc30069e-52ed-46a5-9dc9-4558c856149e-metrics-certs podName:dc30069e-52ed-46a5-9dc9-4558c856149e nodeName:}" failed. No retries permitted until 2026-01-26 11:33:12.264692212 +0000 UTC m=+941.963267122 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/dc30069e-52ed-46a5-9dc9-4558c856149e-metrics-certs") pod "openstack-operator-controller-manager-7d65646bb4-6hkx8" (UID: "dc30069e-52ed-46a5-9dc9-4558c856149e") : secret "metrics-server-cert" not found Jan 26 11:33:04 crc kubenswrapper[4867]: E0126 11:33:04.264741 4867 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 26 11:33:04 crc kubenswrapper[4867]: E0126 11:33:04.264881 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/dc30069e-52ed-46a5-9dc9-4558c856149e-webhook-certs podName:dc30069e-52ed-46a5-9dc9-4558c856149e nodeName:}" failed. No retries permitted until 2026-01-26 11:33:12.264852687 +0000 UTC m=+941.963427597 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/dc30069e-52ed-46a5-9dc9-4558c856149e-webhook-certs") pod "openstack-operator-controller-manager-7d65646bb4-6hkx8" (UID: "dc30069e-52ed-46a5-9dc9-4558c856149e") : secret "webhook-server-cert" not found Jan 26 11:33:06 crc kubenswrapper[4867]: I0126 11:33:06.293848 4867 patch_prober.go:28] interesting pod/machine-config-daemon-g6cth container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 11:33:06 crc kubenswrapper[4867]: I0126 11:33:06.294433 4867 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-g6cth" podUID="115cad9f-057f-4e63-b408-8fa7a358a191" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 11:33:11 crc kubenswrapper[4867]: I0126 11:33:11.421015 4867 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/1dce245d-cfd7-440a-9797-2e8c05641673-cert\") pod \"infra-operator-controller-manager-758868c854-chnbm\" (UID: \"1dce245d-cfd7-440a-9797-2e8c05641673\") " pod="openstack-operators/infra-operator-controller-manager-758868c854-chnbm" Jan 26 11:33:11 crc kubenswrapper[4867]: E0126 11:33:11.421603 4867 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 26 11:33:11 crc kubenswrapper[4867]: E0126 11:33:11.421664 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1dce245d-cfd7-440a-9797-2e8c05641673-cert podName:1dce245d-cfd7-440a-9797-2e8c05641673 nodeName:}" failed. No retries permitted until 2026-01-26 11:33:27.421646618 +0000 UTC m=+957.120221528 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/1dce245d-cfd7-440a-9797-2e8c05641673-cert") pod "infra-operator-controller-manager-758868c854-chnbm" (UID: "1dce245d-cfd7-440a-9797-2e8c05641673") : secret "infra-operator-webhook-server-cert" not found Jan 26 11:33:12 crc kubenswrapper[4867]: I0126 11:33:12.132052 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/b2b3db26-bd1e-4178-ad15-3fb849d16a6c-cert\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b8549zcg2\" (UID: \"b2b3db26-bd1e-4178-ad15-3fb849d16a6c\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b8549zcg2" Jan 26 11:33:12 crc kubenswrapper[4867]: E0126 11:33:12.132277 4867 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 26 11:33:12 crc kubenswrapper[4867]: E0126 11:33:12.132693 4867 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/secret/b2b3db26-bd1e-4178-ad15-3fb849d16a6c-cert podName:b2b3db26-bd1e-4178-ad15-3fb849d16a6c nodeName:}" failed. No retries permitted until 2026-01-26 11:33:28.132669219 +0000 UTC m=+957.831244119 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/b2b3db26-bd1e-4178-ad15-3fb849d16a6c-cert") pod "openstack-baremetal-operator-controller-manager-6b68b8b8549zcg2" (UID: "b2b3db26-bd1e-4178-ad15-3fb849d16a6c") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 26 11:33:12 crc kubenswrapper[4867]: I0126 11:33:12.335925 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/dc30069e-52ed-46a5-9dc9-4558c856149e-webhook-certs\") pod \"openstack-operator-controller-manager-7d65646bb4-6hkx8\" (UID: \"dc30069e-52ed-46a5-9dc9-4558c856149e\") " pod="openstack-operators/openstack-operator-controller-manager-7d65646bb4-6hkx8" Jan 26 11:33:12 crc kubenswrapper[4867]: I0126 11:33:12.336038 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/dc30069e-52ed-46a5-9dc9-4558c856149e-metrics-certs\") pod \"openstack-operator-controller-manager-7d65646bb4-6hkx8\" (UID: \"dc30069e-52ed-46a5-9dc9-4558c856149e\") " pod="openstack-operators/openstack-operator-controller-manager-7d65646bb4-6hkx8" Jan 26 11:33:12 crc kubenswrapper[4867]: E0126 11:33:12.336252 4867 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 26 11:33:12 crc kubenswrapper[4867]: E0126 11:33:12.336324 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/dc30069e-52ed-46a5-9dc9-4558c856149e-metrics-certs podName:dc30069e-52ed-46a5-9dc9-4558c856149e nodeName:}" failed. 
No retries permitted until 2026-01-26 11:33:28.336303982 +0000 UTC m=+958.034878892 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/dc30069e-52ed-46a5-9dc9-4558c856149e-metrics-certs") pod "openstack-operator-controller-manager-7d65646bb4-6hkx8" (UID: "dc30069e-52ed-46a5-9dc9-4558c856149e") : secret "metrics-server-cert" not found Jan 26 11:33:12 crc kubenswrapper[4867]: E0126 11:33:12.336428 4867 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 26 11:33:12 crc kubenswrapper[4867]: E0126 11:33:12.336489 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/dc30069e-52ed-46a5-9dc9-4558c856149e-webhook-certs podName:dc30069e-52ed-46a5-9dc9-4558c856149e nodeName:}" failed. No retries permitted until 2026-01-26 11:33:28.336475057 +0000 UTC m=+958.035049967 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/dc30069e-52ed-46a5-9dc9-4558c856149e-webhook-certs") pod "openstack-operator-controller-manager-7d65646bb4-6hkx8" (UID: "dc30069e-52ed-46a5-9dc9-4558c856149e") : secret "webhook-server-cert" not found Jan 26 11:33:17 crc kubenswrapper[4867]: E0126 11:33:17.798613 4867 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/horizon-operator@sha256:3311e627bcb860d9443592a2c67078417318c9eb77d8ef4d07f9aa7027d46822" Jan 26 11:33:17 crc kubenswrapper[4867]: E0126 11:33:17.799959 4867 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/horizon-operator@sha256:3311e627bcb860d9443592a2c67078417318c9eb77d8ef4d07f9aa7027d46822,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-qgv4b,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod horizon-operator-controller-manager-77d5c5b54f-pgqvv_openstack-operators(5402225a-cbc7-4b7c-8036-9b8159baee31): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 26 11:33:17 crc kubenswrapper[4867]: E0126 11:33:17.801257 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-pgqvv" podUID="5402225a-cbc7-4b7c-8036-9b8159baee31" Jan 26 11:33:18 crc kubenswrapper[4867]: E0126 11:33:18.277087 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/horizon-operator@sha256:3311e627bcb860d9443592a2c67078417318c9eb77d8ef4d07f9aa7027d46822\\\"\"" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-pgqvv" podUID="5402225a-cbc7-4b7c-8036-9b8159baee31" Jan 26 11:33:18 crc kubenswrapper[4867]: E0126 11:33:18.679669 4867 log.go:32] "PullImage from image service 
failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/swift-operator@sha256:445e951df2f21df6d33a466f75917e0f6103052ae751ae11887136e8ab165922" Jan 26 11:33:18 crc kubenswrapper[4867]: E0126 11:33:18.680040 4867 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/swift-operator@sha256:445e951df2f21df6d33a466f75917e0f6103052ae751ae11887136e8ab165922,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-5s4xz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod swift-operator-controller-manager-547cbdb99f-r7pf7_openstack-operators(ee79b4ff-ed5f-4660-9d36-2fd0c1840f84): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 26 11:33:18 crc kubenswrapper[4867]: E0126 11:33:18.681300 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-r7pf7" podUID="ee79b4ff-ed5f-4660-9d36-2fd0c1840f84" Jan 26 11:33:19 crc kubenswrapper[4867]: E0126 11:33:19.283458 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/swift-operator@sha256:445e951df2f21df6d33a466f75917e0f6103052ae751ae11887136e8ab165922\\\"\"" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-r7pf7" podUID="ee79b4ff-ed5f-4660-9d36-2fd0c1840f84" Jan 26 11:33:20 crc kubenswrapper[4867]: E0126 11:33:20.133295 4867 log.go:32] "PullImage from image service failed" 
err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/neutron-operator@sha256:816d474f502d730d6a2522a272b0e09a2d579ac63617817655d60c54bda4191e" Jan 26 11:33:20 crc kubenswrapper[4867]: E0126 11:33:20.133581 4867 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/neutron-operator@sha256:816d474f502d730d6a2522a272b0e09a2d579ac63617817655d60c54bda4191e,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-kwkfq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod neutron-operator-controller-manager-78d58447c5-wz989_openstack-operators(c9a978c7-9efb-43dc-830c-31020be6121a): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 26 11:33:20 crc kubenswrapper[4867]: E0126 11:33:20.135347 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/neutron-operator-controller-manager-78d58447c5-wz989" podUID="c9a978c7-9efb-43dc-830c-31020be6121a" Jan 26 11:33:20 crc kubenswrapper[4867]: E0126 11:33:20.289971 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/neutron-operator@sha256:816d474f502d730d6a2522a272b0e09a2d579ac63617817655d60c54bda4191e\\\"\"" pod="openstack-operators/neutron-operator-controller-manager-78d58447c5-wz989" podUID="c9a978c7-9efb-43dc-830c-31020be6121a" Jan 26 11:33:20 crc kubenswrapper[4867]: I0126 11:33:20.395229 4867 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-marketplace/certified-operators-8z5vj"] Jan 26 11:33:20 crc kubenswrapper[4867]: I0126 11:33:20.397366 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-8z5vj" Jan 26 11:33:20 crc kubenswrapper[4867]: I0126 11:33:20.415512 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-8z5vj"] Jan 26 11:33:20 crc kubenswrapper[4867]: I0126 11:33:20.472123 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/41669cf5-c7b5-4c4f-a6a7-e9e3b4322331-utilities\") pod \"certified-operators-8z5vj\" (UID: \"41669cf5-c7b5-4c4f-a6a7-e9e3b4322331\") " pod="openshift-marketplace/certified-operators-8z5vj" Jan 26 11:33:20 crc kubenswrapper[4867]: I0126 11:33:20.472190 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fwrrf\" (UniqueName: \"kubernetes.io/projected/41669cf5-c7b5-4c4f-a6a7-e9e3b4322331-kube-api-access-fwrrf\") pod \"certified-operators-8z5vj\" (UID: \"41669cf5-c7b5-4c4f-a6a7-e9e3b4322331\") " pod="openshift-marketplace/certified-operators-8z5vj" Jan 26 11:33:20 crc kubenswrapper[4867]: I0126 11:33:20.472254 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/41669cf5-c7b5-4c4f-a6a7-e9e3b4322331-catalog-content\") pod \"certified-operators-8z5vj\" (UID: \"41669cf5-c7b5-4c4f-a6a7-e9e3b4322331\") " pod="openshift-marketplace/certified-operators-8z5vj" Jan 26 11:33:20 crc kubenswrapper[4867]: I0126 11:33:20.578308 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/41669cf5-c7b5-4c4f-a6a7-e9e3b4322331-utilities\") pod \"certified-operators-8z5vj\" (UID: \"41669cf5-c7b5-4c4f-a6a7-e9e3b4322331\") 
" pod="openshift-marketplace/certified-operators-8z5vj" Jan 26 11:33:20 crc kubenswrapper[4867]: I0126 11:33:20.578373 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fwrrf\" (UniqueName: \"kubernetes.io/projected/41669cf5-c7b5-4c4f-a6a7-e9e3b4322331-kube-api-access-fwrrf\") pod \"certified-operators-8z5vj\" (UID: \"41669cf5-c7b5-4c4f-a6a7-e9e3b4322331\") " pod="openshift-marketplace/certified-operators-8z5vj" Jan 26 11:33:20 crc kubenswrapper[4867]: I0126 11:33:20.578449 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/41669cf5-c7b5-4c4f-a6a7-e9e3b4322331-catalog-content\") pod \"certified-operators-8z5vj\" (UID: \"41669cf5-c7b5-4c4f-a6a7-e9e3b4322331\") " pod="openshift-marketplace/certified-operators-8z5vj" Jan 26 11:33:20 crc kubenswrapper[4867]: I0126 11:33:20.579286 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/41669cf5-c7b5-4c4f-a6a7-e9e3b4322331-utilities\") pod \"certified-operators-8z5vj\" (UID: \"41669cf5-c7b5-4c4f-a6a7-e9e3b4322331\") " pod="openshift-marketplace/certified-operators-8z5vj" Jan 26 11:33:20 crc kubenswrapper[4867]: I0126 11:33:20.579620 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/41669cf5-c7b5-4c4f-a6a7-e9e3b4322331-catalog-content\") pod \"certified-operators-8z5vj\" (UID: \"41669cf5-c7b5-4c4f-a6a7-e9e3b4322331\") " pod="openshift-marketplace/certified-operators-8z5vj" Jan 26 11:33:20 crc kubenswrapper[4867]: I0126 11:33:20.608711 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fwrrf\" (UniqueName: \"kubernetes.io/projected/41669cf5-c7b5-4c4f-a6a7-e9e3b4322331-kube-api-access-fwrrf\") pod \"certified-operators-8z5vj\" (UID: \"41669cf5-c7b5-4c4f-a6a7-e9e3b4322331\") " 
pod="openshift-marketplace/certified-operators-8z5vj" Jan 26 11:33:20 crc kubenswrapper[4867]: I0126 11:33:20.729489 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-8z5vj" Jan 26 11:33:21 crc kubenswrapper[4867]: E0126 11:33:21.053777 4867 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/watcher-operator@sha256:7869203f6f97de780368d507636031090fed3b658d2f7771acbd4481bdfc870b" Jan 26 11:33:21 crc kubenswrapper[4867]: E0126 11:33:21.054020 4867 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/watcher-operator@sha256:7869203f6f97de780368d507636031090fed3b658d2f7771acbd4481bdfc870b,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-gk5wc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod watcher-operator-controller-manager-564965969-df52v_openstack-operators(799c2d45-a054-4971-a87e-ad3b620cb2c5): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 26 11:33:21 crc kubenswrapper[4867]: E0126 11:33:21.055392 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/watcher-operator-controller-manager-564965969-df52v" podUID="799c2d45-a054-4971-a87e-ad3b620cb2c5" Jan 26 11:33:21 crc kubenswrapper[4867]: E0126 11:33:21.298573 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.io/openstack-k8s-operators/watcher-operator@sha256:7869203f6f97de780368d507636031090fed3b658d2f7771acbd4481bdfc870b\\\"\"" pod="openstack-operators/watcher-operator-controller-manager-564965969-df52v" podUID="799c2d45-a054-4971-a87e-ad3b620cb2c5" Jan 26 11:33:21 crc kubenswrapper[4867]: E0126 11:33:21.973513 4867 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/heat-operator@sha256:2f9a2f064448faebbae58f52d564dc0e8e39bed0fc12bd6b9fe925e42f1b5492" Jan 26 11:33:21 crc kubenswrapper[4867]: E0126 11:33:21.973842 4867 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/heat-operator@sha256:2f9a2f064448faebbae58f52d564dc0e8e39bed0fc12bd6b9fe925e42f1b5492,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-rftvd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod heat-operator-controller-manager-594c8c9d5d-gh4fm_openstack-operators(073c6f18-4275-4233-8308-39307e2cc0c7): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 26 11:33:21 crc kubenswrapper[4867]: E0126 11:33:21.975421 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-gh4fm" podUID="073c6f18-4275-4233-8308-39307e2cc0c7" Jan 26 11:33:22 crc kubenswrapper[4867]: E0126 11:33:22.307514 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.io/openstack-k8s-operators/heat-operator@sha256:2f9a2f064448faebbae58f52d564dc0e8e39bed0fc12bd6b9fe925e42f1b5492\\\"\"" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-gh4fm" podUID="073c6f18-4275-4233-8308-39307e2cc0c7" Jan 26 11:33:25 crc kubenswrapper[4867]: E0126 11:33:25.311790 4867 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/manila-operator@sha256:8bee4480babd6fd8f686e0ba52a304acb6ffb90f09c7c57e7f5df5f7658836d8" Jan 26 11:33:25 crc kubenswrapper[4867]: E0126 11:33:25.312562 4867 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/manila-operator@sha256:8bee4480babd6fd8f686e0ba52a304acb6ffb90f09c7c57e7f5df5f7658836d8,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-tzw24,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod manila-operator-controller-manager-78c6999f6f-5s6fg_openstack-operators(3eb62ea0-8291-49ec-aa8d-cb40ba93ecc3): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 26 11:33:25 crc kubenswrapper[4867]: E0126 11:33:25.313839 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-5s6fg" podUID="3eb62ea0-8291-49ec-aa8d-cb40ba93ecc3" Jan 26 11:33:25 crc kubenswrapper[4867]: E0126 11:33:25.332007 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.io/openstack-k8s-operators/manila-operator@sha256:8bee4480babd6fd8f686e0ba52a304acb6ffb90f09c7c57e7f5df5f7658836d8\\\"\"" pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-5s6fg" podUID="3eb62ea0-8291-49ec-aa8d-cb40ba93ecc3" Jan 26 11:33:27 crc kubenswrapper[4867]: I0126 11:33:27.499822 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/1dce245d-cfd7-440a-9797-2e8c05641673-cert\") pod \"infra-operator-controller-manager-758868c854-chnbm\" (UID: \"1dce245d-cfd7-440a-9797-2e8c05641673\") " pod="openstack-operators/infra-operator-controller-manager-758868c854-chnbm" Jan 26 11:33:27 crc kubenswrapper[4867]: I0126 11:33:27.515204 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/1dce245d-cfd7-440a-9797-2e8c05641673-cert\") pod \"infra-operator-controller-manager-758868c854-chnbm\" (UID: \"1dce245d-cfd7-440a-9797-2e8c05641673\") " pod="openstack-operators/infra-operator-controller-manager-758868c854-chnbm" Jan 26 11:33:27 crc kubenswrapper[4867]: I0126 11:33:27.577521 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-758868c854-chnbm" Jan 26 11:33:27 crc kubenswrapper[4867]: I0126 11:33:27.582657 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-cgkn2"] Jan 26 11:33:28 crc kubenswrapper[4867]: I0126 11:33:28.213879 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/b2b3db26-bd1e-4178-ad15-3fb849d16a6c-cert\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b8549zcg2\" (UID: \"b2b3db26-bd1e-4178-ad15-3fb849d16a6c\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b8549zcg2" Jan 26 11:33:28 crc kubenswrapper[4867]: I0126 11:33:28.226254 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/b2b3db26-bd1e-4178-ad15-3fb849d16a6c-cert\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b8549zcg2\" (UID: \"b2b3db26-bd1e-4178-ad15-3fb849d16a6c\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b8549zcg2" Jan 26 11:33:28 crc kubenswrapper[4867]: I0126 11:33:28.264144 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b8549zcg2" Jan 26 11:33:28 crc kubenswrapper[4867]: I0126 11:33:28.416699 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/dc30069e-52ed-46a5-9dc9-4558c856149e-webhook-certs\") pod \"openstack-operator-controller-manager-7d65646bb4-6hkx8\" (UID: \"dc30069e-52ed-46a5-9dc9-4558c856149e\") " pod="openstack-operators/openstack-operator-controller-manager-7d65646bb4-6hkx8" Jan 26 11:33:28 crc kubenswrapper[4867]: I0126 11:33:28.417817 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/dc30069e-52ed-46a5-9dc9-4558c856149e-metrics-certs\") pod \"openstack-operator-controller-manager-7d65646bb4-6hkx8\" (UID: \"dc30069e-52ed-46a5-9dc9-4558c856149e\") " pod="openstack-operators/openstack-operator-controller-manager-7d65646bb4-6hkx8" Jan 26 11:33:28 crc kubenswrapper[4867]: I0126 11:33:28.426713 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/dc30069e-52ed-46a5-9dc9-4558c856149e-webhook-certs\") pod \"openstack-operator-controller-manager-7d65646bb4-6hkx8\" (UID: \"dc30069e-52ed-46a5-9dc9-4558c856149e\") " pod="openstack-operators/openstack-operator-controller-manager-7d65646bb4-6hkx8" Jan 26 11:33:28 crc kubenswrapper[4867]: I0126 11:33:28.431122 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/dc30069e-52ed-46a5-9dc9-4558c856149e-metrics-certs\") pod \"openstack-operator-controller-manager-7d65646bb4-6hkx8\" (UID: \"dc30069e-52ed-46a5-9dc9-4558c856149e\") " pod="openstack-operators/openstack-operator-controller-manager-7d65646bb4-6hkx8" Jan 26 11:33:28 crc kubenswrapper[4867]: I0126 11:33:28.696877 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-7d65646bb4-6hkx8" Jan 26 11:33:28 crc kubenswrapper[4867]: E0126 11:33:28.708722 4867 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.27:5001/openstack-k8s-operators/ironic-operator:74553cfd3006c1a6d2d0b45bc4080575f1104155" Jan 26 11:33:28 crc kubenswrapper[4867]: E0126 11:33:28.708831 4867 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.27:5001/openstack-k8s-operators/ironic-operator:74553cfd3006c1a6d2d0b45bc4080575f1104155" Jan 26 11:33:28 crc kubenswrapper[4867]: E0126 11:33:28.709066 4867 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:38.102.83.27:5001/openstack-k8s-operators/ironic-operator:74553cfd3006c1a6d2d0b45bc4080575f1104155,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-tqldv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ironic-operator-controller-manager-598d88d885-fjpln_openstack-operators(242c7502-97f2-4ac9-96ba-17b04f96a5b5): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 26 11:33:28 crc kubenswrapper[4867]: E0126 11:33:28.710356 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/ironic-operator-controller-manager-598d88d885-fjpln" podUID="242c7502-97f2-4ac9-96ba-17b04f96a5b5" Jan 26 11:33:29 crc kubenswrapper[4867]: E0126 11:33:29.290349 4867 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" 
image="quay.io/openstack-k8s-operators/placement-operator@sha256:013c0ad82d21a21c7eece5cd4b5d5c4b8eb410b6671ac33a6f3fb78c8510811d" Jan 26 11:33:29 crc kubenswrapper[4867]: E0126 11:33:29.290655 4867 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/placement-operator@sha256:013c0ad82d21a21c7eece5cd4b5d5c4b8eb410b6671ac33a6f3fb78c8510811d,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-2sg96,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod placement-operator-controller-manager-79d5ccc684-jjlnx_openstack-operators(829c6c7e-cc19-4f6d-a350-dea6f26f3436): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 26 11:33:29 crc kubenswrapper[4867]: E0126 11:33:29.291904 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/placement-operator-controller-manager-79d5ccc684-jjlnx" podUID="829c6c7e-cc19-4f6d-a350-dea6f26f3436" Jan 26 11:33:29 crc kubenswrapper[4867]: E0126 11:33:29.390437 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"38.102.83.27:5001/openstack-k8s-operators/ironic-operator:74553cfd3006c1a6d2d0b45bc4080575f1104155\\\"\"" pod="openstack-operators/ironic-operator-controller-manager-598d88d885-fjpln" podUID="242c7502-97f2-4ac9-96ba-17b04f96a5b5" Jan 26 11:33:29 crc kubenswrapper[4867]: E0126 11:33:29.390508 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/placement-operator@sha256:013c0ad82d21a21c7eece5cd4b5d5c4b8eb410b6671ac33a6f3fb78c8510811d\\\"\"" pod="openstack-operators/placement-operator-controller-manager-79d5ccc684-jjlnx" podUID="829c6c7e-cc19-4f6d-a350-dea6f26f3436" Jan 26 11:33:30 crc kubenswrapper[4867]: E0126 11:33:30.317025 4867 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/glance-operator@sha256:9caae9b3ee328df678baa26454e45e47693acdadb27f9c635680597aaec43337" Jan 26 11:33:30 crc kubenswrapper[4867]: E0126 11:33:30.317320 4867 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/glance-operator@sha256:9caae9b3ee328df678baa26454e45e47693acdadb27f9c635680597aaec43337,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-4qdjc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod glance-operator-controller-manager-78fdd796fd-gthnl_openstack-operators(4f33548d-3a14-41f4-8447-feb86b7cf366): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 26 11:33:30 crc kubenswrapper[4867]: E0126 11:33:30.318495 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-gthnl" podUID="4f33548d-3a14-41f4-8447-feb86b7cf366" Jan 26 11:33:30 crc kubenswrapper[4867]: E0126 11:33:30.396159 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.io/openstack-k8s-operators/glance-operator@sha256:9caae9b3ee328df678baa26454e45e47693acdadb27f9c635680597aaec43337\\\"\"" pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-gthnl" podUID="4f33548d-3a14-41f4-8447-feb86b7cf366" Jan 26 11:33:31 crc kubenswrapper[4867]: E0126 11:33:31.306909 4867 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/octavia-operator@sha256:ed489f21a0c72557d2da5a271808f19b7c7b85ef32fd9f4aa91bdbfc5bca3bdd" Jan 26 11:33:31 crc kubenswrapper[4867]: E0126 11:33:31.308178 4867 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/octavia-operator@sha256:ed489f21a0c72557d2da5a271808f19b7c7b85ef32fd9f4aa91bdbfc5bca3bdd,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-grwfv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod octavia-operator-controller-manager-5f4cd88d46-z7djp_openstack-operators(99737677-080c-4f1a-aa91-e5162fe5f25d): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 26 11:33:31 crc kubenswrapper[4867]: E0126 11:33:31.309462 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/octavia-operator-controller-manager-5f4cd88d46-z7djp" podUID="99737677-080c-4f1a-aa91-e5162fe5f25d" Jan 26 11:33:31 crc kubenswrapper[4867]: E0126 11:33:31.406427 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.io/openstack-k8s-operators/octavia-operator@sha256:ed489f21a0c72557d2da5a271808f19b7c7b85ef32fd9f4aa91bdbfc5bca3bdd\\\"\"" pod="openstack-operators/octavia-operator-controller-manager-5f4cd88d46-z7djp" podUID="99737677-080c-4f1a-aa91-e5162fe5f25d" Jan 26 11:33:32 crc kubenswrapper[4867]: E0126 11:33:32.030675 4867 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/keystone-operator@sha256:8e340ff11922b38e811261de96982e1aff5f4eb8f225d1d9f5973025a4fe8349" Jan 26 11:33:32 crc kubenswrapper[4867]: E0126 11:33:32.030895 4867 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/keystone-operator@sha256:8e340ff11922b38e811261de96982e1aff5f4eb8f225d1d9f5973025a4fe8349,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-vmc4n,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod keystone-operator-controller-manager-b8b6d4659-tzb4g_openstack-operators(9da13f82-2fca-4922-8b27-b11d702897ff): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 26 11:33:32 crc kubenswrapper[4867]: E0126 11:33:32.032329 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-tzb4g" podUID="9da13f82-2fca-4922-8b27-b11d702897ff" Jan 26 11:33:32 crc kubenswrapper[4867]: E0126 11:33:32.414760 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.io/openstack-k8s-operators/keystone-operator@sha256:8e340ff11922b38e811261de96982e1aff5f4eb8f225d1d9f5973025a4fe8349\\\"\"" pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-tzb4g" podUID="9da13f82-2fca-4922-8b27-b11d702897ff" Jan 26 11:33:32 crc kubenswrapper[4867]: E0126 11:33:32.755635 4867 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/nova-operator@sha256:8abfbec47f0119a6c22c61a0ff80a4b1c6c14439a327bc75d4c529c5d8f59658" Jan 26 11:33:32 crc kubenswrapper[4867]: E0126 11:33:32.755944 4867 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/nova-operator@sha256:8abfbec47f0119a6c22c61a0ff80a4b1c6c14439a327bc75d4c529c5d8f59658,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-zjzq7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod nova-operator-controller-manager-7bdb645866-v4pfk_openstack-operators(de2f9a68-7384-47b5-a16d-da28e04440de): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 26 11:33:32 crc kubenswrapper[4867]: E0126 11:33:32.757244 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/nova-operator-controller-manager-7bdb645866-v4pfk" podUID="de2f9a68-7384-47b5-a16d-da28e04440de" Jan 26 11:33:33 crc kubenswrapper[4867]: E0126 11:33:33.383258 4867 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" 
image="quay.io/openstack-k8s-operators/ovn-operator@sha256:fa46fc14710961e6b4a76a3522dca3aa3cfa71436c7cf7ade533d3712822f327" Jan 26 11:33:33 crc kubenswrapper[4867]: E0126 11:33:33.383496 4867 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/ovn-operator@sha256:fa46fc14710961e6b4a76a3522dca3aa3cfa71436c7cf7ade533d3712822f327,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-sqrj5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovn-operator-controller-manager-6f75f45d54-rsv5q_openstack-operators(bb8ed5d8-1a97-4cc9-bf29-99b29c6a1975): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 26 11:33:33 crc kubenswrapper[4867]: E0126 11:33:33.384970 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/ovn-operator-controller-manager-6f75f45d54-rsv5q" podUID="bb8ed5d8-1a97-4cc9-bf29-99b29c6a1975" Jan 26 11:33:33 crc kubenswrapper[4867]: E0126 11:33:33.421409 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/nova-operator@sha256:8abfbec47f0119a6c22c61a0ff80a4b1c6c14439a327bc75d4c529c5d8f59658\\\"\"" pod="openstack-operators/nova-operator-controller-manager-7bdb645866-v4pfk" podUID="de2f9a68-7384-47b5-a16d-da28e04440de" Jan 26 11:33:33 crc kubenswrapper[4867]: E0126 11:33:33.911578 4867 log.go:32] "PullImage from image service failed" err="rpc 
error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2" Jan 26 11:33:33 crc kubenswrapper[4867]: E0126 11:33:33.912184 4867 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:operator,Image:quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2,Command:[/manager],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:0,ContainerPort:9782,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:OPERATOR_NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{200 -3} {} 200m DecimalSI},memory: {{524288000 0} {} 500Mi BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-x9sfk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-cluster-operator-manager-668c99d594-lcn9l_openstack-operators(ccccb13a-d387-4515-83c6-ea24a070a12e): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 26 11:33:33 crc kubenswrapper[4867]: E0126 11:33:33.913368 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-lcn9l" podUID="ccccb13a-d387-4515-83c6-ea24a070a12e" Jan 26 11:33:34 crc kubenswrapper[4867]: E0126 11:33:34.429489 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-lcn9l" podUID="ccccb13a-d387-4515-83c6-ea24a070a12e" Jan 26 11:33:34 crc 
kubenswrapper[4867]: E0126 11:33:34.478342 4867 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/telemetry-operator@sha256:e02722d7581bfe1c5fc13e2fa6811d8665102ba86635c77547abf6b933cde127" Jan 26 11:33:34 crc kubenswrapper[4867]: E0126 11:33:34.478627 4867 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/telemetry-operator@sha256:e02722d7581bfe1c5fc13e2fa6811d8665102ba86635c77547abf6b933cde127,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-k2jf2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod telemetry-operator-controller-manager-85cd9769bb-c7klk_openstack-operators(10f19670-4fbf-42ee-b54c-5317af0b0c00): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 26 11:33:34 crc kubenswrapper[4867]: E0126 11:33:34.479832 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-c7klk" podUID="10f19670-4fbf-42ee-b54c-5317af0b0c00" Jan 26 11:33:35 crc kubenswrapper[4867]: E0126 11:33:35.144611 4867 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/test-operator@sha256:c8dde42dafd41026ed2e4cfc26efc0fff63c4ba9d31326ae7dc644ccceaafa9d" Jan 26 11:33:35 crc kubenswrapper[4867]: E0126 11:33:35.144829 4867 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:manager,Image:quay.io/openstack-k8s-operators/test-operator@sha256:c8dde42dafd41026ed2e4cfc26efc0fff63c4ba9d31326ae7dc644ccceaafa9d,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-fjddt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod test-operator-controller-manager-69797bbcbd-n6zwx_openstack-operators(4009a85d-3728-420e-b7db-70f8b41587ff): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 26 11:33:35 crc kubenswrapper[4867]: E0126 11:33:35.146333 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-n6zwx" podUID="4009a85d-3728-420e-b7db-70f8b41587ff" Jan 26 11:33:35 crc kubenswrapper[4867]: I0126 11:33:35.439551 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-cgkn2" event={"ID":"de1a4ed9-07d2-4d13-9680-23d361cfff3f","Type":"ContainerStarted","Data":"2f7899b9fad31deff243600eff391256dce0b53b223c90c7f57ed41177ee1041"} Jan 26 11:33:35 crc kubenswrapper[4867]: I0126 11:33:35.761108 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b8549zcg2"] Jan 26 11:33:35 crc kubenswrapper[4867]: I0126 11:33:35.843713 
4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-758868c854-chnbm"] Jan 26 11:33:35 crc kubenswrapper[4867]: I0126 11:33:35.864633 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-7d65646bb4-6hkx8"] Jan 26 11:33:35 crc kubenswrapper[4867]: W0126 11:33:35.877094 4867 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poddc30069e_52ed_46a5_9dc9_4558c856149e.slice/crio-54946bcc3029c0f18fbbaef53a2ad8d2adf70448d411d5250bc6d7b456dd2b0b WatchSource:0}: Error finding container 54946bcc3029c0f18fbbaef53a2ad8d2adf70448d411d5250bc6d7b456dd2b0b: Status 404 returned error can't find the container with id 54946bcc3029c0f18fbbaef53a2ad8d2adf70448d411d5250bc6d7b456dd2b0b Jan 26 11:33:35 crc kubenswrapper[4867]: I0126 11:33:35.889515 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-8z5vj"] Jan 26 11:33:36 crc kubenswrapper[4867]: I0126 11:33:36.293633 4867 patch_prober.go:28] interesting pod/machine-config-daemon-g6cth container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 11:33:36 crc kubenswrapper[4867]: I0126 11:33:36.293695 4867 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-g6cth" podUID="115cad9f-057f-4e63-b408-8fa7a358a191" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 11:33:36 crc kubenswrapper[4867]: I0126 11:33:36.293738 4867 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-g6cth" 
Jan 26 11:33:36 crc kubenswrapper[4867]: I0126 11:33:36.294303 4867 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"f4568ef927141a7a2944fe130fff11fd99ada292de5ff857f1ccce612a5d941d"} pod="openshift-machine-config-operator/machine-config-daemon-g6cth" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Jan 26 11:33:36 crc kubenswrapper[4867]: I0126 11:33:36.294375 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-g6cth" podUID="115cad9f-057f-4e63-b408-8fa7a358a191" containerName="machine-config-daemon" containerID="cri-o://f4568ef927141a7a2944fe130fff11fd99ada292de5ff857f1ccce612a5d941d" gracePeriod=600
Jan 26 11:33:36 crc kubenswrapper[4867]: I0126 11:33:36.455274 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-7478f7dbf9-ccp9p" event={"ID":"10ae2757-3e84-4ad1-8459-fca684db2964","Type":"ContainerStarted","Data":"88a98af38b23610c6083656d068a4e4f6c6cb52dcf05e2b7d39096e3b9b094d2"}
Jan 26 11:33:36 crc kubenswrapper[4867]: I0126 11:33:36.456104 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/cinder-operator-controller-manager-7478f7dbf9-ccp9p"
Jan 26 11:33:36 crc kubenswrapper[4867]: I0126 11:33:36.465257 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-7f86f8796f-rgg4g" event={"ID":"34c3c36b-d905-4349-8909-bd15951aca68","Type":"ContainerStarted","Data":"c1dcdd9841c9fe5f0ff3638d2502d5f3346b41848f93c53e591b38c2c8d8ecb3"}
Jan 26 11:33:36 crc kubenswrapper[4867]: I0126 11:33:36.465512 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/barbican-operator-controller-manager-7f86f8796f-rgg4g"
Jan 26 11:33:36 crc kubenswrapper[4867]: I0126 11:33:36.466951 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-758868c854-chnbm" event={"ID":"1dce245d-cfd7-440a-9797-2e8c05641673","Type":"ContainerStarted","Data":"231da9669b68382a9ddde585dfbf14e652ea6528f571d490feb536e867907783"}
Jan 26 11:33:36 crc kubenswrapper[4867]: I0126 11:33:36.469665 4867 generic.go:334] "Generic (PLEG): container finished" podID="41669cf5-c7b5-4c4f-a6a7-e9e3b4322331" containerID="e508bdf6afaf27b048ea3afe7df0c754a885231a1f39c9c9ac29e52aaec7ca8d" exitCode=0
Jan 26 11:33:36 crc kubenswrapper[4867]: I0126 11:33:36.469770 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8z5vj" event={"ID":"41669cf5-c7b5-4c4f-a6a7-e9e3b4322331","Type":"ContainerDied","Data":"e508bdf6afaf27b048ea3afe7df0c754a885231a1f39c9c9ac29e52aaec7ca8d"}
Jan 26 11:33:36 crc kubenswrapper[4867]: I0126 11:33:36.469832 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8z5vj" event={"ID":"41669cf5-c7b5-4c4f-a6a7-e9e3b4322331","Type":"ContainerStarted","Data":"475321f77b21183a3b9970077207ff9c1e0871cf5c015e678c6a6a24877b508b"}
Jan 26 11:33:36 crc kubenswrapper[4867]: I0126 11:33:36.488083 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/cinder-operator-controller-manager-7478f7dbf9-ccp9p" podStartSLOduration=3.073173151 podStartE2EDuration="41.488058178s" podCreationTimestamp="2026-01-26 11:32:55 +0000 UTC" firstStartedPulling="2026-01-26 11:32:56.744012966 +0000 UTC m=+926.442587876" lastFinishedPulling="2026-01-26 11:33:35.158897993 +0000 UTC m=+964.857472903" observedRunningTime="2026-01-26 11:33:36.48263157 +0000 UTC m=+966.181206480" watchObservedRunningTime="2026-01-26 11:33:36.488058178 +0000 UTC m=+966.186633088"
Jan 26 11:33:36 crc kubenswrapper[4867]: I0126 11:33:36.520373 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b8549zcg2" event={"ID":"b2b3db26-bd1e-4178-ad15-3fb849d16a6c","Type":"ContainerStarted","Data":"6af30a92b2838c84a7997a3fc44b59aabaf8963b79d920d9f1fed8a95a6272b9"}
Jan 26 11:33:36 crc kubenswrapper[4867]: I0126 11:33:36.536024 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-7d65646bb4-6hkx8" event={"ID":"dc30069e-52ed-46a5-9dc9-4558c856149e","Type":"ContainerStarted","Data":"ca84cd092bce0a89b8751c7abf39fd24ddf2a5d7902d0e22f9ab252c5eb144cf"}
Jan 26 11:33:36 crc kubenswrapper[4867]: I0126 11:33:36.536116 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-7d65646bb4-6hkx8" event={"ID":"dc30069e-52ed-46a5-9dc9-4558c856149e","Type":"ContainerStarted","Data":"54946bcc3029c0f18fbbaef53a2ad8d2adf70448d411d5250bc6d7b456dd2b0b"}
Jan 26 11:33:36 crc kubenswrapper[4867]: I0126 11:33:36.536284 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-manager-7d65646bb4-6hkx8"
Jan 26 11:33:36 crc kubenswrapper[4867]: I0126 11:33:36.559757 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-r7pf7" event={"ID":"ee79b4ff-ed5f-4660-9d36-2fd0c1840f84","Type":"ContainerStarted","Data":"320cd4214230ea3142c2f553e029dbbc679aef335f655189c502b7ff9d91f6e5"}
Jan 26 11:33:36 crc kubenswrapper[4867]: I0126 11:33:36.560045 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-r7pf7"
Jan 26 11:33:36 crc kubenswrapper[4867]: I0126 11:33:36.606102 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/barbican-operator-controller-manager-7f86f8796f-rgg4g" podStartSLOduration=3.054852346 podStartE2EDuration="41.606082687s" podCreationTimestamp="2026-01-26 11:32:55 +0000 UTC" firstStartedPulling="2026-01-26 11:32:56.626584674 +0000 UTC m=+926.325159584" lastFinishedPulling="2026-01-26 11:33:35.177815015 +0000 UTC m=+964.876389925" observedRunningTime="2026-01-26 11:33:36.565877295 +0000 UTC m=+966.264452205" watchObservedRunningTime="2026-01-26 11:33:36.606082687 +0000 UTC m=+966.304657597"
Jan 26 11:33:36 crc kubenswrapper[4867]: I0126 11:33:36.610972 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-khq8w"
Jan 26 11:33:36 crc kubenswrapper[4867]: I0126 11:33:36.611010 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-khq8w" event={"ID":"2034ae77-372d-473a-b038-83ee4c3720c0","Type":"ContainerStarted","Data":"543f84dceeaf19da3bc3b84340a4337022fdfb9d0f8198b4137cf03e2b3f1803"}
Jan 26 11:33:36 crc kubenswrapper[4867]: I0126 11:33:36.620585 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-gh4fm" event={"ID":"073c6f18-4275-4233-8308-39307e2cc0c7","Type":"ContainerStarted","Data":"6ce1fd98f3b58c68e050f71dd4faa2f0aa644da8c04d27603678c9d389d9902d"}
Jan 26 11:33:36 crc kubenswrapper[4867]: I0126 11:33:36.621410 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-gh4fm"
Jan 26 11:33:36 crc kubenswrapper[4867]: I0126 11:33:36.624131 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-manager-7d65646bb4-6hkx8" podStartSLOduration=40.624116963 podStartE2EDuration="40.624116963s" podCreationTimestamp="2026-01-26 11:32:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 11:33:36.623808024 +0000 UTC m=+966.322382934" watchObservedRunningTime="2026-01-26 11:33:36.624116963 +0000 UTC m=+966.322691873"
Jan 26 11:33:36 crc kubenswrapper[4867]: I0126 11:33:36.633159 4867 generic.go:334] "Generic (PLEG): container finished" podID="115cad9f-057f-4e63-b408-8fa7a358a191" containerID="f4568ef927141a7a2944fe130fff11fd99ada292de5ff857f1ccce612a5d941d" exitCode=0
Jan 26 11:33:36 crc kubenswrapper[4867]: I0126 11:33:36.633250 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-g6cth" event={"ID":"115cad9f-057f-4e63-b408-8fa7a358a191","Type":"ContainerDied","Data":"f4568ef927141a7a2944fe130fff11fd99ada292de5ff857f1ccce612a5d941d"}
Jan 26 11:33:36 crc kubenswrapper[4867]: I0126 11:33:36.633292 4867 scope.go:117] "RemoveContainer" containerID="3d80268128b8588b5243ae8da874837feaca71a462cb1a50fe2432786b4b83de"
Jan 26 11:33:36 crc kubenswrapper[4867]: I0126 11:33:36.641254 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-8w8hc" event={"ID":"b1c6af74-51a5-45bb-afed-9b8b19a5c7df","Type":"ContainerStarted","Data":"ecd46abb841792dded287d29513b72670f4a9082c73313928a82d5b43c62dde8"}
Jan 26 11:33:36 crc kubenswrapper[4867]: I0126 11:33:36.641985 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-8w8hc"
Jan 26 11:33:36 crc kubenswrapper[4867]: I0126 11:33:36.643605 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-pgqvv" event={"ID":"5402225a-cbc7-4b7c-8036-9b8159baee31","Type":"ContainerStarted","Data":"3dfc90ab4e6cc53f2c10a4e9444543c77e374a805d8d73c9e615e2715b0e3af0"}
Jan 26 11:33:36 crc kubenswrapper[4867]: I0126 11:33:36.644056 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-pgqvv"
Jan 26 11:33:36 crc kubenswrapper[4867]: I0126 11:33:36.646249 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-78d58447c5-wz989" event={"ID":"c9a978c7-9efb-43dc-830c-31020be6121a","Type":"ContainerStarted","Data":"fd7d1d9999a169fa6d4461a7ca22a7cd1b78c99e2bade2d5571f6652be2234e3"}
Jan 26 11:33:36 crc kubenswrapper[4867]: I0126 11:33:36.646509 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/neutron-operator-controller-manager-78d58447c5-wz989"
Jan 26 11:33:36 crc kubenswrapper[4867]: I0126 11:33:36.647768 4867 generic.go:334] "Generic (PLEG): container finished" podID="de1a4ed9-07d2-4d13-9680-23d361cfff3f" containerID="e74513fd968875347bd298b3be56a8467e27f80b879aae028cc361e8c8ad0001" exitCode=0
Jan 26 11:33:36 crc kubenswrapper[4867]: I0126 11:33:36.647808 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-cgkn2" event={"ID":"de1a4ed9-07d2-4d13-9680-23d361cfff3f","Type":"ContainerDied","Data":"e74513fd968875347bd298b3be56a8467e27f80b879aae028cc361e8c8ad0001"}
Jan 26 11:33:36 crc kubenswrapper[4867]: I0126 11:33:36.669845 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-r7pf7" podStartSLOduration=3.939003181 podStartE2EDuration="41.669816544s" podCreationTimestamp="2026-01-26 11:32:55 +0000 UTC" firstStartedPulling="2026-01-26 11:32:57.81409169 +0000 UTC m=+927.512666600" lastFinishedPulling="2026-01-26 11:33:35.544905053 +0000 UTC m=+965.243479963" observedRunningTime="2026-01-26 11:33:36.667012812 +0000 UTC m=+966.365587722" watchObservedRunningTime="2026-01-26 11:33:36.669816544 +0000 UTC m=+966.368391454"
Jan 26 11:33:36 crc kubenswrapper[4867]: I0126 11:33:36.711091 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/neutron-operator-controller-manager-78d58447c5-wz989" podStartSLOduration=3.980635405 podStartE2EDuration="41.711068097s" podCreationTimestamp="2026-01-26 11:32:55 +0000 UTC" firstStartedPulling="2026-01-26 11:32:57.81443889 +0000 UTC m=+927.513013800" lastFinishedPulling="2026-01-26 11:33:35.544871582 +0000 UTC m=+965.243446492" observedRunningTime="2026-01-26 11:33:36.698736178 +0000 UTC m=+966.397311088" watchObservedRunningTime="2026-01-26 11:33:36.711068097 +0000 UTC m=+966.409643007"
Jan 26 11:33:36 crc kubenswrapper[4867]: I0126 11:33:36.729259 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-8w8hc" podStartSLOduration=4.129069231 podStartE2EDuration="41.729231156s" podCreationTimestamp="2026-01-26 11:32:55 +0000 UTC" firstStartedPulling="2026-01-26 11:32:57.573235371 +0000 UTC m=+927.271810281" lastFinishedPulling="2026-01-26 11:33:35.173397296 +0000 UTC m=+964.871972206" observedRunningTime="2026-01-26 11:33:36.725854767 +0000 UTC m=+966.424429677" watchObservedRunningTime="2026-01-26 11:33:36.729231156 +0000 UTC m=+966.427806076"
Jan 26 11:33:36 crc kubenswrapper[4867]: I0126 11:33:36.751212 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-gh4fm" podStartSLOduration=3.439639259 podStartE2EDuration="41.751190486s" podCreationTimestamp="2026-01-26 11:32:55 +0000 UTC" firstStartedPulling="2026-01-26 11:32:57.229498774 +0000 UTC m=+926.928073684" lastFinishedPulling="2026-01-26 11:33:35.541050001 +0000 UTC m=+965.239624911" observedRunningTime="2026-01-26 11:33:36.745304284 +0000 UTC m=+966.443879204" watchObservedRunningTime="2026-01-26 11:33:36.751190486 +0000 UTC m=+966.449765396"
Jan 26 11:33:36 crc kubenswrapper[4867]: I0126 11:33:36.813714 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-khq8w" podStartSLOduration=4.46256101 podStartE2EDuration="41.813684517s" podCreationTimestamp="2026-01-26 11:32:55 +0000 UTC" firstStartedPulling="2026-01-26 11:32:57.812973568 +0000 UTC m=+927.511548478" lastFinishedPulling="2026-01-26 11:33:35.164097075 +0000 UTC m=+964.862671985" observedRunningTime="2026-01-26 11:33:36.809821094 +0000 UTC m=+966.508396014" watchObservedRunningTime="2026-01-26 11:33:36.813684517 +0000 UTC m=+966.512259437"
Jan 26 11:33:36 crc kubenswrapper[4867]: I0126 11:33:36.845764 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-pgqvv" podStartSLOduration=3.835642939 podStartE2EDuration="41.845734541s" podCreationTimestamp="2026-01-26 11:32:55 +0000 UTC" firstStartedPulling="2026-01-26 11:32:57.531463683 +0000 UTC m=+927.230038593" lastFinishedPulling="2026-01-26 11:33:35.541555285 +0000 UTC m=+965.240130195" observedRunningTime="2026-01-26 11:33:36.845199916 +0000 UTC m=+966.543774826" watchObservedRunningTime="2026-01-26 11:33:36.845734541 +0000 UTC m=+966.544309451"
Jan 26 11:33:38 crc kubenswrapper[4867]: I0126 11:33:38.989918 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-g6cth" event={"ID":"115cad9f-057f-4e63-b408-8fa7a358a191","Type":"ContainerStarted","Data":"510e7b8815f2e10ccb07bd14d3cace2ddac464c7ed9719497ae9e906b65ef061"}
Jan 26 11:33:38 crc kubenswrapper[4867]: I0126 11:33:38.990868 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-564965969-df52v" event={"ID":"799c2d45-a054-4971-a87e-ad3b620cb2c5","Type":"ContainerStarted","Data":"9ce17aa688b5ba7a44004fa99b7915993adef5ff3003fcefc66f69e2e64b2bb4"}
Jan 26 11:33:39 crc kubenswrapper[4867]: I0126 11:33:39.708333 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-5s6fg" event={"ID":"3eb62ea0-8291-49ec-aa8d-cb40ba93ecc3","Type":"ContainerStarted","Data":"3f1baa7e16ffc8885cfdd5af76075e1b880283edd46a0e24ff2c1b99a43e3e53"}
Jan 26 11:33:39 crc kubenswrapper[4867]: I0126 11:33:39.709799 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/watcher-operator-controller-manager-564965969-df52v"
Jan 26 11:33:39 crc kubenswrapper[4867]: I0126 11:33:39.734054 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-lmvxw"]
Jan 26 11:33:39 crc kubenswrapper[4867]: I0126 11:33:39.736955 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-lmvxw"
Jan 26 11:33:39 crc kubenswrapper[4867]: I0126 11:33:39.750721 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/watcher-operator-controller-manager-564965969-df52v" podStartSLOduration=6.479384593 podStartE2EDuration="44.750693526s" podCreationTimestamp="2026-01-26 11:32:55 +0000 UTC" firstStartedPulling="2026-01-26 11:32:58.022276337 +0000 UTC m=+927.720851247" lastFinishedPulling="2026-01-26 11:33:36.29358527 +0000 UTC m=+965.992160180" observedRunningTime="2026-01-26 11:33:39.74358827 +0000 UTC m=+969.442163200" watchObservedRunningTime="2026-01-26 11:33:39.750693526 +0000 UTC m=+969.449268436"
Jan 26 11:33:39 crc kubenswrapper[4867]: I0126 11:33:39.798491 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-lmvxw"]
Jan 26 11:33:39 crc kubenswrapper[4867]: I0126 11:33:39.863151 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-5s6fg" podStartSLOduration=2.308337982 podStartE2EDuration="44.863120743s" podCreationTimestamp="2026-01-26 11:32:55 +0000 UTC" firstStartedPulling="2026-01-26 11:32:56.743624295 +0000 UTC m=+926.442199195" lastFinishedPulling="2026-01-26 11:33:39.298407046 +0000 UTC m=+968.996981956" observedRunningTime="2026-01-26 11:33:39.848740854 +0000 UTC m=+969.547315764" watchObservedRunningTime="2026-01-26 11:33:39.863120743 +0000 UTC m=+969.561695653"
Jan 26 11:33:39 crc kubenswrapper[4867]: I0126 11:33:39.955567 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4fa58c85-36dd-48d6-afb9-665f70796e4c-utilities\") pod \"redhat-marketplace-lmvxw\" (UID: \"4fa58c85-36dd-48d6-afb9-665f70796e4c\") " pod="openshift-marketplace/redhat-marketplace-lmvxw"
Jan 26 11:33:39 crc kubenswrapper[4867]: I0126 11:33:39.955746 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4fa58c85-36dd-48d6-afb9-665f70796e4c-catalog-content\") pod \"redhat-marketplace-lmvxw\" (UID: \"4fa58c85-36dd-48d6-afb9-665f70796e4c\") " pod="openshift-marketplace/redhat-marketplace-lmvxw"
Jan 26 11:33:39 crc kubenswrapper[4867]: I0126 11:33:39.955826 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-djw78\" (UniqueName: \"kubernetes.io/projected/4fa58c85-36dd-48d6-afb9-665f70796e4c-kube-api-access-djw78\") pod \"redhat-marketplace-lmvxw\" (UID: \"4fa58c85-36dd-48d6-afb9-665f70796e4c\") " pod="openshift-marketplace/redhat-marketplace-lmvxw"
Jan 26 11:33:40 crc kubenswrapper[4867]: I0126 11:33:40.057445 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4fa58c85-36dd-48d6-afb9-665f70796e4c-utilities\") pod \"redhat-marketplace-lmvxw\" (UID: \"4fa58c85-36dd-48d6-afb9-665f70796e4c\") " pod="openshift-marketplace/redhat-marketplace-lmvxw"
Jan 26 11:33:40 crc kubenswrapper[4867]: I0126 11:33:40.057546 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4fa58c85-36dd-48d6-afb9-665f70796e4c-catalog-content\") pod \"redhat-marketplace-lmvxw\" (UID: \"4fa58c85-36dd-48d6-afb9-665f70796e4c\") " pod="openshift-marketplace/redhat-marketplace-lmvxw"
Jan 26 11:33:40 crc kubenswrapper[4867]: I0126 11:33:40.057571 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-djw78\" (UniqueName: \"kubernetes.io/projected/4fa58c85-36dd-48d6-afb9-665f70796e4c-kube-api-access-djw78\") pod \"redhat-marketplace-lmvxw\" (UID: \"4fa58c85-36dd-48d6-afb9-665f70796e4c\") " pod="openshift-marketplace/redhat-marketplace-lmvxw"
Jan 26 11:33:40 crc kubenswrapper[4867]: I0126 11:33:40.058477 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4fa58c85-36dd-48d6-afb9-665f70796e4c-utilities\") pod \"redhat-marketplace-lmvxw\" (UID: \"4fa58c85-36dd-48d6-afb9-665f70796e4c\") " pod="openshift-marketplace/redhat-marketplace-lmvxw"
Jan 26 11:33:40 crc kubenswrapper[4867]: I0126 11:33:40.058714 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4fa58c85-36dd-48d6-afb9-665f70796e4c-catalog-content\") pod \"redhat-marketplace-lmvxw\" (UID: \"4fa58c85-36dd-48d6-afb9-665f70796e4c\") " pod="openshift-marketplace/redhat-marketplace-lmvxw"
Jan 26 11:33:40 crc kubenswrapper[4867]: I0126 11:33:40.093166 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-djw78\" (UniqueName: \"kubernetes.io/projected/4fa58c85-36dd-48d6-afb9-665f70796e4c-kube-api-access-djw78\") pod \"redhat-marketplace-lmvxw\" (UID: \"4fa58c85-36dd-48d6-afb9-665f70796e4c\") " pod="openshift-marketplace/redhat-marketplace-lmvxw"
Jan 26 11:33:40 crc kubenswrapper[4867]: I0126 11:33:40.103754 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-lmvxw"
Jan 26 11:33:40 crc kubenswrapper[4867]: I0126 11:33:40.639783 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-lmvxw"]
Jan 26 11:33:40 crc kubenswrapper[4867]: W0126 11:33:40.678676 4867 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4fa58c85_36dd_48d6_afb9_665f70796e4c.slice/crio-64ef4d0226835139546000cc46b02c7ef284f7b3cfe1618d5f21374c9a00bab7 WatchSource:0}: Error finding container 64ef4d0226835139546000cc46b02c7ef284f7b3cfe1618d5f21374c9a00bab7: Status 404 returned error can't find the container with id 64ef4d0226835139546000cc46b02c7ef284f7b3cfe1618d5f21374c9a00bab7
Jan 26 11:33:40 crc kubenswrapper[4867]: I0126 11:33:40.724009 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lmvxw" event={"ID":"4fa58c85-36dd-48d6-afb9-665f70796e4c","Type":"ContainerStarted","Data":"64ef4d0226835139546000cc46b02c7ef284f7b3cfe1618d5f21374c9a00bab7"}
Jan 26 11:33:40 crc kubenswrapper[4867]: I0126 11:33:40.727648 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8z5vj" event={"ID":"41669cf5-c7b5-4c4f-a6a7-e9e3b4322331","Type":"ContainerStarted","Data":"f358ddb26cc3fc434a3670b35bddf4bd9a57bc5316363ebf0db4cb3923092a67"}
Jan 26 11:33:40 crc kubenswrapper[4867]: I0126 11:33:40.730949 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-cgkn2" event={"ID":"de1a4ed9-07d2-4d13-9680-23d361cfff3f","Type":"ContainerStarted","Data":"ed8d18f7bf66ce8251e198bcf79b70408738896a7ce348ffc605eae41e53bfbd"}
Jan 26 11:33:40 crc kubenswrapper[4867]: I0126 11:33:40.744929 4867 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Jan 26 11:33:41 crc kubenswrapper[4867]: I0126 11:33:41.742399 4867 generic.go:334] "Generic (PLEG): container finished" podID="4fa58c85-36dd-48d6-afb9-665f70796e4c" containerID="d9548e3182cf9305915ebdbf7c4979441702311a226311d8780e3ebe3fa7a25f" exitCode=0
Jan 26 11:33:41 crc kubenswrapper[4867]: I0126 11:33:41.742507 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lmvxw" event={"ID":"4fa58c85-36dd-48d6-afb9-665f70796e4c","Type":"ContainerDied","Data":"d9548e3182cf9305915ebdbf7c4979441702311a226311d8780e3ebe3fa7a25f"}
Jan 26 11:33:41 crc kubenswrapper[4867]: I0126 11:33:41.745853 4867 generic.go:334] "Generic (PLEG): container finished" podID="41669cf5-c7b5-4c4f-a6a7-e9e3b4322331" containerID="f358ddb26cc3fc434a3670b35bddf4bd9a57bc5316363ebf0db4cb3923092a67" exitCode=0
Jan 26 11:33:41 crc kubenswrapper[4867]: I0126 11:33:41.745926 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8z5vj" event={"ID":"41669cf5-c7b5-4c4f-a6a7-e9e3b4322331","Type":"ContainerDied","Data":"f358ddb26cc3fc434a3670b35bddf4bd9a57bc5316363ebf0db4cb3923092a67"}
Jan 26 11:33:41 crc kubenswrapper[4867]: I0126 11:33:41.748268 4867 generic.go:334] "Generic (PLEG): container finished" podID="de1a4ed9-07d2-4d13-9680-23d361cfff3f" containerID="ed8d18f7bf66ce8251e198bcf79b70408738896a7ce348ffc605eae41e53bfbd" exitCode=0
Jan 26 11:33:41 crc kubenswrapper[4867]: I0126 11:33:41.748323 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-cgkn2" event={"ID":"de1a4ed9-07d2-4d13-9680-23d361cfff3f","Type":"ContainerDied","Data":"ed8d18f7bf66ce8251e198bcf79b70408738896a7ce348ffc605eae41e53bfbd"}
Jan 26 11:33:42 crc kubenswrapper[4867]: I0126 11:33:42.767356 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-cgkn2" event={"ID":"de1a4ed9-07d2-4d13-9680-23d361cfff3f","Type":"ContainerStarted","Data":"2df405f5503775b40979dc0b4c1d9f2d7d1481c8dca9f8d0385cb067fea2e6bb"}
Jan 26 11:33:44 crc kubenswrapper[4867]: I0126 11:33:44.787091 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8z5vj" event={"ID":"41669cf5-c7b5-4c4f-a6a7-e9e3b4322331","Type":"ContainerStarted","Data":"5104925c464c900f0a61c07dbae86814021cd266d041a72b5ebe48e27cc79358"}
Jan 26 11:33:44 crc kubenswrapper[4867]: I0126 11:33:44.791651 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b8549zcg2" event={"ID":"b2b3db26-bd1e-4178-ad15-3fb849d16a6c","Type":"ContainerStarted","Data":"e368acb73c2261d18943ce37bbec3cbe483e144051db265275eea2c8ebb93078"}
Jan 26 11:33:44 crc kubenswrapper[4867]: I0126 11:33:44.792378 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b8549zcg2"
Jan 26 11:33:44 crc kubenswrapper[4867]: I0126 11:33:44.795239 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-758868c854-chnbm" event={"ID":"1dce245d-cfd7-440a-9797-2e8c05641673","Type":"ContainerStarted","Data":"7244e4d9431d290f22a5254b746ed9dc491b533df74ac2087c369bb201269773"}
Jan 26 11:33:44 crc kubenswrapper[4867]: I0126 11:33:44.795509 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/infra-operator-controller-manager-758868c854-chnbm"
Jan 26 11:33:44 crc kubenswrapper[4867]: I0126 11:33:44.796910 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-gthnl" event={"ID":"4f33548d-3a14-41f4-8447-feb86b7cf366","Type":"ContainerStarted","Data":"37790c3933d4779347d3fa176bccc479d284daf18875a93c9155546091240b10"}
Jan 26 11:33:44 crc kubenswrapper[4867]: I0126 11:33:44.797310 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-gthnl"
Jan 26 11:33:44 crc kubenswrapper[4867]: I0126 11:33:44.799055 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-598d88d885-fjpln" event={"ID":"242c7502-97f2-4ac9-96ba-17b04f96a5b5","Type":"ContainerStarted","Data":"8da824e6ba67bd485263bed791b7b1c824ae9b967a5f2ddf26abeb37d000571b"}
Jan 26 11:33:44 crc kubenswrapper[4867]: I0126 11:33:44.799246 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ironic-operator-controller-manager-598d88d885-fjpln"
Jan 26 11:33:44 crc kubenswrapper[4867]: I0126 11:33:44.800648 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-5f4cd88d46-z7djp" event={"ID":"99737677-080c-4f1a-aa91-e5162fe5f25d","Type":"ContainerStarted","Data":"3b3681c0ceb47b4872475dab88786e18a0aa2e7c3ebb95c16cdc5402a6456e1b"}
Jan 26 11:33:44 crc kubenswrapper[4867]: I0126 11:33:44.800872 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/octavia-operator-controller-manager-5f4cd88d46-z7djp"
Jan 26 11:33:44 crc kubenswrapper[4867]: I0126 11:33:44.802759 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lmvxw" event={"ID":"4fa58c85-36dd-48d6-afb9-665f70796e4c","Type":"ContainerStarted","Data":"9f023e4d3282de69bd9544f8931cb0ccce455b490370fc297e85b326eb483e90"}
Jan 26 11:33:44 crc kubenswrapper[4867]: I0126 11:33:44.815969 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-8z5vj" podStartSLOduration=19.275348504 podStartE2EDuration="24.815945242s" podCreationTimestamp="2026-01-26 11:33:20 +0000 UTC" firstStartedPulling="2026-01-26 11:33:36.471690271 +0000 UTC m=+966.170265181" lastFinishedPulling="2026-01-26 11:33:42.012287009 +0000 UTC m=+971.710861919" observedRunningTime="2026-01-26 11:33:44.813522773 +0000 UTC m=+974.512097693" watchObservedRunningTime="2026-01-26 11:33:44.815945242 +0000 UTC m=+974.514520152"
Jan 26 11:33:44 crc kubenswrapper[4867]: I0126 11:33:44.840397 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/infra-operator-controller-manager-758868c854-chnbm" podStartSLOduration=41.431647268 podStartE2EDuration="49.840368961s" podCreationTimestamp="2026-01-26 11:32:55 +0000 UTC" firstStartedPulling="2026-01-26 11:33:35.870264794 +0000 UTC m=+965.568839704" lastFinishedPulling="2026-01-26 11:33:44.278986477 +0000 UTC m=+973.977561397" observedRunningTime="2026-01-26 11:33:44.839146871 +0000 UTC m=+974.537721781" watchObservedRunningTime="2026-01-26 11:33:44.840368961 +0000 UTC m=+974.538943871"
Jan 26 11:33:44 crc kubenswrapper[4867]: I0126 11:33:44.877116 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-cgkn2" podStartSLOduration=40.388087212 podStartE2EDuration="44.877098111s" podCreationTimestamp="2026-01-26 11:33:00 +0000 UTC" firstStartedPulling="2026-01-26 11:33:36.649255215 +0000 UTC m=+966.347830125" lastFinishedPulling="2026-01-26 11:33:41.138266114 +0000 UTC m=+970.836841024" observedRunningTime="2026-01-26 11:33:44.876854723 +0000 UTC m=+974.575429633" watchObservedRunningTime="2026-01-26 11:33:44.877098111 +0000 UTC m=+974.575673021"
Jan 26 11:33:44 crc kubenswrapper[4867]: I0126 11:33:44.909676 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ironic-operator-controller-manager-598d88d885-fjpln" podStartSLOduration=3.454203319 podStartE2EDuration="49.909645364s" podCreationTimestamp="2026-01-26 11:32:55 +0000 UTC" firstStartedPulling="2026-01-26 11:32:57.818964362 +0000 UTC m=+927.517539272" lastFinishedPulling="2026-01-26 11:33:44.274406407 +0000 UTC m=+973.972981317" observedRunningTime="2026-01-26 11:33:44.901912351 +0000 UTC m=+974.600487261" watchObservedRunningTime="2026-01-26 11:33:44.909645364 +0000 UTC m=+974.608220274"
Jan 26 11:33:44 crc kubenswrapper[4867]: I0126 11:33:44.948895 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-gthnl" podStartSLOduration=2.74280919 podStartE2EDuration="49.948866086s" podCreationTimestamp="2026-01-26 11:32:55 +0000 UTC" firstStartedPulling="2026-01-26 11:32:57.073282112 +0000 UTC m=+926.771857012" lastFinishedPulling="2026-01-26 11:33:44.279338988 +0000 UTC m=+973.977913908" observedRunningTime="2026-01-26 11:33:44.943119168 +0000 UTC m=+974.641694088" watchObservedRunningTime="2026-01-26 11:33:44.948866086 +0000 UTC m=+974.647441016"
Jan 26 11:33:44 crc kubenswrapper[4867]: I0126 11:33:44.984146 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b8549zcg2" podStartSLOduration=41.492225299 podStartE2EDuration="49.984124008s" podCreationTimestamp="2026-01-26 11:32:55 +0000 UTC" firstStartedPulling="2026-01-26 11:33:35.782529448 +0000 UTC m=+965.481104358" lastFinishedPulling="2026-01-26 11:33:44.274428157 +0000 UTC m=+973.973003067" observedRunningTime="2026-01-26 11:33:44.980371705 +0000 UTC m=+974.678946635" watchObservedRunningTime="2026-01-26 11:33:44.984124008 +0000 UTC m=+974.682698918"
Jan 26 11:33:45 crc kubenswrapper[4867]: I0126 11:33:45.058325 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/octavia-operator-controller-manager-5f4cd88d46-z7djp" podStartSLOduration=3.55817096 podStartE2EDuration="50.058294152s" podCreationTimestamp="2026-01-26 11:32:55 +0000 UTC" firstStartedPulling="2026-01-26 11:32:57.77872768 +0000 UTC m=+927.477302600" lastFinishedPulling="2026-01-26 11:33:44.278850882 +0000 UTC m=+973.977425792" observedRunningTime="2026-01-26 11:33:45.045416411 +0000 UTC m=+974.743991321" watchObservedRunningTime="2026-01-26 11:33:45.058294152 +0000 UTC m=+974.756869072"
Jan 26 11:33:45 crc kubenswrapper[4867]: I0126 11:33:45.546425 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/barbican-operator-controller-manager-7f86f8796f-rgg4g"
Jan 26 11:33:45 crc kubenswrapper[4867]: I0126 11:33:45.820136 4867 generic.go:334] "Generic (PLEG): container finished" podID="4fa58c85-36dd-48d6-afb9-665f70796e4c" containerID="9f023e4d3282de69bd9544f8931cb0ccce455b490370fc297e85b326eb483e90" exitCode=0
Jan 26 11:33:45 crc kubenswrapper[4867]: I0126 11:33:45.820236 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lmvxw" event={"ID":"4fa58c85-36dd-48d6-afb9-665f70796e4c","Type":"ContainerDied","Data":"9f023e4d3282de69bd9544f8931cb0ccce455b490370fc297e85b326eb483e90"}
Jan 26 11:33:45 crc kubenswrapper[4867]: I0126 11:33:45.831294 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-79d5ccc684-jjlnx" event={"ID":"829c6c7e-cc19-4f6d-a350-dea6f26f3436","Type":"ContainerStarted","Data":"ad2513f810223dbacb4314c4157e616a2934ad3d4c5eb98ecda3fa836ed72c5d"}
Jan 26 11:33:45 crc kubenswrapper[4867]: I0126 11:33:45.839950 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-5s6fg"
Jan 26 11:33:45 crc kubenswrapper[4867]: I0126 11:33:45.858118 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-5s6fg"
Jan 26 11:33:45 crc kubenswrapper[4867]: I0126 11:33:45.883273 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/placement-operator-controller-manager-79d5ccc684-jjlnx" podStartSLOduration=3.9572820269999998 podStartE2EDuration="50.883148665s" podCreationTimestamp="2026-01-26 11:32:55 +0000 UTC" firstStartedPulling="2026-01-26 11:32:58.065423514 +0000 UTC m=+927.763998424" lastFinishedPulling="2026-01-26 11:33:44.991290152 +0000 UTC m=+974.689865062" observedRunningTime="2026-01-26 11:33:45.878182933 +0000 UTC m=+975.576757843" watchObservedRunningTime="2026-01-26 11:33:45.883148665 +0000 UTC m=+975.581723575"
Jan 26 11:33:45 crc kubenswrapper[4867]: I0126 11:33:45.884431 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/cinder-operator-controller-manager-7478f7dbf9-ccp9p"
Jan 26 11:33:45 crc kubenswrapper[4867]: I0126 11:33:45.968473 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-8w8hc"
Jan 26 11:33:46 crc kubenswrapper[4867]: I0126 11:33:46.052391 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-gh4fm"
Jan 26 11:33:46 crc kubenswrapper[4867]: I0126 11:33:46.057902 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-pgqvv"
Jan 26 11:33:46 crc kubenswrapper[4867]: I0126 11:33:46.225406 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-khq8w"
Jan 26 11:33:46 crc kubenswrapper[4867]: I0126 11:33:46.292364 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/neutron-operator-controller-manager-78d58447c5-wz989"
Jan 26 11:33:46 crc kubenswrapper[4867]: E0126 11:33:46.566808 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/ovn-operator@sha256:fa46fc14710961e6b4a76a3522dca3aa3cfa71436c7cf7ade533d3712822f327\\\"\"" pod="openstack-operators/ovn-operator-controller-manager-6f75f45d54-rsv5q" podUID="bb8ed5d8-1a97-4cc9-bf29-99b29c6a1975"
Jan 26 11:33:46 crc kubenswrapper[4867]: I0126 11:33:46.576327 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/placement-operator-controller-manager-79d5ccc684-jjlnx"
Jan 26 11:33:46 crc kubenswrapper[4867]: I0126 11:33:46.628278 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-r7pf7"
Jan 26 11:33:46 crc kubenswrapper[4867]: I0126 11:33:46.831048 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/watcher-operator-controller-manager-564965969-df52v"
Jan 26 11:33:46 crc kubenswrapper[4867]: I0126 11:33:46.841406 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-7bdb645866-v4pfk" event={"ID":"de2f9a68-7384-47b5-a16d-da28e04440de","Type":"ContainerStarted","Data":"35a22b6a7940525c8b3b798eb398f0756cda1c187a3a27ef19d6195b0f5003eb"}
Jan 26 11:33:46 crc kubenswrapper[4867]: I0126 11:33:46.899162 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/nova-operator-controller-manager-7bdb645866-v4pfk" podStartSLOduration=4.173991846 podStartE2EDuration="51.899135664s" podCreationTimestamp="2026-01-26 11:32:55 +0000 UTC" firstStartedPulling="2026-01-26 11:32:57.836781181 +0000 UTC m=+927.535356091" lastFinishedPulling="2026-01-26 11:33:45.561924999 +0000 UTC m=+975.260499909" observedRunningTime="2026-01-26 11:33:46.871487141 +0000 UTC m=+976.570062051" watchObservedRunningTime="2026-01-26 11:33:46.899135664 +0000 UTC m=+976.597710584"
Jan 26 11:33:47 crc kubenswrapper[4867]: I0126 11:33:47.852138 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lmvxw" event={"ID":"4fa58c85-36dd-48d6-afb9-665f70796e4c","Type":"ContainerStarted","Data":"d54bc2f8b29b6f21e63569634083f508f71ea3b79fd412d53e8904bbd0cb0134"}
Jan 26 11:33:47 crc kubenswrapper[4867]: I0126 11:33:47.882285 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-lmvxw" podStartSLOduration=4.155740393 podStartE2EDuration="8.882204118s" podCreationTimestamp="2026-01-26 11:33:39 +0000 UTC" firstStartedPulling="2026-01-26 11:33:42.012311589 +0000 UTC m=+971.710886499" lastFinishedPulling="2026-01-26 11:33:46.738775314 +0000 UTC m=+976.437350224" observedRunningTime="2026-01-26 11:33:47.872725708 +0000 UTC m=+977.571300638" watchObservedRunningTime="2026-01-26 11:33:47.882204118 +0000 UTC m=+977.580779038"
Jan 26 11:33:48 crc kubenswrapper[4867]: E0126 11:33:48.577595 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/test-operator@sha256:c8dde42dafd41026ed2e4cfc26efc0fff63c4ba9d31326ae7dc644ccceaafa9d\\\"\"" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-n6zwx" podUID="4009a85d-3728-420e-b7db-70f8b41587ff"
Jan 26 11:33:48 crc kubenswrapper[4867]: I0126 11:33:48.706811 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-manager-7d65646bb4-6hkx8"
Jan 26 11:33:49 crc kubenswrapper[4867]: E0126 11:33:49.566558 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/telemetry-operator@sha256:e02722d7581bfe1c5fc13e2fa6811d8665102ba86635c77547abf6b933cde127\\\"\"" pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-c7klk" podUID="10f19670-4fbf-42ee-b54c-5317af0b0c00"
Jan 26 11:33:50 crc kubenswrapper[4867]: I0126
11:33:50.103966 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-lmvxw" Jan 26 11:33:50 crc kubenswrapper[4867]: I0126 11:33:50.104051 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-lmvxw" Jan 26 11:33:50 crc kubenswrapper[4867]: I0126 11:33:50.164706 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-lmvxw" Jan 26 11:33:50 crc kubenswrapper[4867]: I0126 11:33:50.731699 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-8z5vj" Jan 26 11:33:50 crc kubenswrapper[4867]: I0126 11:33:50.731759 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-8z5vj" Jan 26 11:33:50 crc kubenswrapper[4867]: I0126 11:33:50.776419 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-8z5vj" Jan 26 11:33:50 crc kubenswrapper[4867]: I0126 11:33:50.914889 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-8z5vj" Jan 26 11:33:51 crc kubenswrapper[4867]: I0126 11:33:51.235355 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-cgkn2" Jan 26 11:33:51 crc kubenswrapper[4867]: I0126 11:33:51.236555 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-cgkn2" Jan 26 11:33:51 crc kubenswrapper[4867]: I0126 11:33:51.275427 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-cgkn2" Jan 26 11:33:51 crc kubenswrapper[4867]: I0126 11:33:51.933870 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-marketplace/community-operators-cgkn2" Jan 26 11:33:52 crc kubenswrapper[4867]: I0126 11:33:52.640772 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-8z5vj"] Jan 26 11:33:52 crc kubenswrapper[4867]: I0126 11:33:52.893508 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-8z5vj" podUID="41669cf5-c7b5-4c4f-a6a7-e9e3b4322331" containerName="registry-server" containerID="cri-o://5104925c464c900f0a61c07dbae86814021cd266d041a72b5ebe48e27cc79358" gracePeriod=2 Jan 26 11:33:53 crc kubenswrapper[4867]: I0126 11:33:53.644438 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-cgkn2"] Jan 26 11:33:53 crc kubenswrapper[4867]: I0126 11:33:53.904736 4867 generic.go:334] "Generic (PLEG): container finished" podID="41669cf5-c7b5-4c4f-a6a7-e9e3b4322331" containerID="5104925c464c900f0a61c07dbae86814021cd266d041a72b5ebe48e27cc79358" exitCode=0 Jan 26 11:33:53 crc kubenswrapper[4867]: I0126 11:33:53.904803 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8z5vj" event={"ID":"41669cf5-c7b5-4c4f-a6a7-e9e3b4322331","Type":"ContainerDied","Data":"5104925c464c900f0a61c07dbae86814021cd266d041a72b5ebe48e27cc79358"} Jan 26 11:33:54 crc kubenswrapper[4867]: I0126 11:33:54.913128 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-cgkn2" podUID="de1a4ed9-07d2-4d13-9680-23d361cfff3f" containerName="registry-server" containerID="cri-o://2df405f5503775b40979dc0b4c1d9f2d7d1481c8dca9f8d0385cb067fea2e6bb" gracePeriod=2 Jan 26 11:33:55 crc kubenswrapper[4867]: I0126 11:33:55.925384 4867 generic.go:334] "Generic (PLEG): container finished" podID="de1a4ed9-07d2-4d13-9680-23d361cfff3f" containerID="2df405f5503775b40979dc0b4c1d9f2d7d1481c8dca9f8d0385cb067fea2e6bb" exitCode=0 Jan 26 11:33:55 crc 
kubenswrapper[4867]: I0126 11:33:55.925442 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-cgkn2" event={"ID":"de1a4ed9-07d2-4d13-9680-23d361cfff3f","Type":"ContainerDied","Data":"2df405f5503775b40979dc0b4c1d9f2d7d1481c8dca9f8d0385cb067fea2e6bb"} Jan 26 11:33:55 crc kubenswrapper[4867]: I0126 11:33:55.969305 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-gthnl" Jan 26 11:33:56 crc kubenswrapper[4867]: I0126 11:33:56.107517 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ironic-operator-controller-manager-598d88d885-fjpln" Jan 26 11:33:56 crc kubenswrapper[4867]: I0126 11:33:56.140694 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/nova-operator-controller-manager-7bdb645866-v4pfk" Jan 26 11:33:56 crc kubenswrapper[4867]: I0126 11:33:56.143794 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/nova-operator-controller-manager-7bdb645866-v4pfk" Jan 26 11:33:56 crc kubenswrapper[4867]: I0126 11:33:56.291638 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/octavia-operator-controller-manager-5f4cd88d46-z7djp" Jan 26 11:33:56 crc kubenswrapper[4867]: I0126 11:33:56.572529 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/placement-operator-controller-manager-79d5ccc684-jjlnx" Jan 26 11:33:57 crc kubenswrapper[4867]: I0126 11:33:57.584208 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/infra-operator-controller-manager-758868c854-chnbm" Jan 26 11:33:58 crc kubenswrapper[4867]: I0126 11:33:58.271157 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b8549zcg2" Jan 26 11:34:00 crc kubenswrapper[4867]: I0126 11:34:00.173832 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-lmvxw" Jan 26 11:34:00 crc kubenswrapper[4867]: I0126 11:34:00.222968 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-lmvxw"] Jan 26 11:34:00 crc kubenswrapper[4867]: E0126 11:34:00.730945 4867 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 5104925c464c900f0a61c07dbae86814021cd266d041a72b5ebe48e27cc79358 is running failed: container process not found" containerID="5104925c464c900f0a61c07dbae86814021cd266d041a72b5ebe48e27cc79358" cmd=["grpc_health_probe","-addr=:50051"] Jan 26 11:34:00 crc kubenswrapper[4867]: E0126 11:34:00.731815 4867 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 5104925c464c900f0a61c07dbae86814021cd266d041a72b5ebe48e27cc79358 is running failed: container process not found" containerID="5104925c464c900f0a61c07dbae86814021cd266d041a72b5ebe48e27cc79358" cmd=["grpc_health_probe","-addr=:50051"] Jan 26 11:34:00 crc kubenswrapper[4867]: E0126 11:34:00.732245 4867 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 5104925c464c900f0a61c07dbae86814021cd266d041a72b5ebe48e27cc79358 is running failed: container process not found" containerID="5104925c464c900f0a61c07dbae86814021cd266d041a72b5ebe48e27cc79358" cmd=["grpc_health_probe","-addr=:50051"] Jan 26 11:34:00 crc kubenswrapper[4867]: E0126 11:34:00.732337 4867 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 
5104925c464c900f0a61c07dbae86814021cd266d041a72b5ebe48e27cc79358 is running failed: container process not found" probeType="Readiness" pod="openshift-marketplace/certified-operators-8z5vj" podUID="41669cf5-c7b5-4c4f-a6a7-e9e3b4322331" containerName="registry-server" Jan 26 11:34:00 crc kubenswrapper[4867]: I0126 11:34:00.961270 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-lmvxw" podUID="4fa58c85-36dd-48d6-afb9-665f70796e4c" containerName="registry-server" containerID="cri-o://d54bc2f8b29b6f21e63569634083f508f71ea3b79fd412d53e8904bbd0cb0134" gracePeriod=2 Jan 26 11:34:01 crc kubenswrapper[4867]: E0126 11:34:01.236185 4867 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 2df405f5503775b40979dc0b4c1d9f2d7d1481c8dca9f8d0385cb067fea2e6bb is running failed: container process not found" containerID="2df405f5503775b40979dc0b4c1d9f2d7d1481c8dca9f8d0385cb067fea2e6bb" cmd=["grpc_health_probe","-addr=:50051"] Jan 26 11:34:01 crc kubenswrapper[4867]: E0126 11:34:01.236837 4867 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 2df405f5503775b40979dc0b4c1d9f2d7d1481c8dca9f8d0385cb067fea2e6bb is running failed: container process not found" containerID="2df405f5503775b40979dc0b4c1d9f2d7d1481c8dca9f8d0385cb067fea2e6bb" cmd=["grpc_health_probe","-addr=:50051"] Jan 26 11:34:01 crc kubenswrapper[4867]: E0126 11:34:01.237490 4867 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 2df405f5503775b40979dc0b4c1d9f2d7d1481c8dca9f8d0385cb067fea2e6bb is running failed: container process not found" containerID="2df405f5503775b40979dc0b4c1d9f2d7d1481c8dca9f8d0385cb067fea2e6bb" cmd=["grpc_health_probe","-addr=:50051"] Jan 26 
11:34:01 crc kubenswrapper[4867]: E0126 11:34:01.237649 4867 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 2df405f5503775b40979dc0b4c1d9f2d7d1481c8dca9f8d0385cb067fea2e6bb is running failed: container process not found" probeType="Readiness" pod="openshift-marketplace/community-operators-cgkn2" podUID="de1a4ed9-07d2-4d13-9680-23d361cfff3f" containerName="registry-server" Jan 26 11:34:02 crc kubenswrapper[4867]: I0126 11:34:02.983212 4867 generic.go:334] "Generic (PLEG): container finished" podID="4fa58c85-36dd-48d6-afb9-665f70796e4c" containerID="d54bc2f8b29b6f21e63569634083f508f71ea3b79fd412d53e8904bbd0cb0134" exitCode=0 Jan 26 11:34:02 crc kubenswrapper[4867]: I0126 11:34:02.983727 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lmvxw" event={"ID":"4fa58c85-36dd-48d6-afb9-665f70796e4c","Type":"ContainerDied","Data":"d54bc2f8b29b6f21e63569634083f508f71ea3b79fd412d53e8904bbd0cb0134"} Jan 26 11:34:02 crc kubenswrapper[4867]: I0126 11:34:02.987749 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8z5vj" event={"ID":"41669cf5-c7b5-4c4f-a6a7-e9e3b4322331","Type":"ContainerDied","Data":"475321f77b21183a3b9970077207ff9c1e0871cf5c015e678c6a6a24877b508b"} Jan 26 11:34:02 crc kubenswrapper[4867]: I0126 11:34:02.987785 4867 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="475321f77b21183a3b9970077207ff9c1e0871cf5c015e678c6a6a24877b508b" Jan 26 11:34:03 crc kubenswrapper[4867]: I0126 11:34:03.002369 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-8z5vj" Jan 26 11:34:03 crc kubenswrapper[4867]: I0126 11:34:03.065055 4867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-cgkn2" Jan 26 11:34:03 crc kubenswrapper[4867]: I0126 11:34:03.122096 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-lmvxw" Jan 26 11:34:03 crc kubenswrapper[4867]: I0126 11:34:03.161869 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rntn6\" (UniqueName: \"kubernetes.io/projected/de1a4ed9-07d2-4d13-9680-23d361cfff3f-kube-api-access-rntn6\") pod \"de1a4ed9-07d2-4d13-9680-23d361cfff3f\" (UID: \"de1a4ed9-07d2-4d13-9680-23d361cfff3f\") " Jan 26 11:34:03 crc kubenswrapper[4867]: I0126 11:34:03.161918 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/de1a4ed9-07d2-4d13-9680-23d361cfff3f-utilities\") pod \"de1a4ed9-07d2-4d13-9680-23d361cfff3f\" (UID: \"de1a4ed9-07d2-4d13-9680-23d361cfff3f\") " Jan 26 11:34:03 crc kubenswrapper[4867]: I0126 11:34:03.162060 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/41669cf5-c7b5-4c4f-a6a7-e9e3b4322331-utilities\") pod \"41669cf5-c7b5-4c4f-a6a7-e9e3b4322331\" (UID: \"41669cf5-c7b5-4c4f-a6a7-e9e3b4322331\") " Jan 26 11:34:03 crc kubenswrapper[4867]: I0126 11:34:03.162096 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/de1a4ed9-07d2-4d13-9680-23d361cfff3f-catalog-content\") pod \"de1a4ed9-07d2-4d13-9680-23d361cfff3f\" (UID: \"de1a4ed9-07d2-4d13-9680-23d361cfff3f\") " Jan 26 11:34:03 crc kubenswrapper[4867]: I0126 11:34:03.162143 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fwrrf\" (UniqueName: \"kubernetes.io/projected/41669cf5-c7b5-4c4f-a6a7-e9e3b4322331-kube-api-access-fwrrf\") pod 
\"41669cf5-c7b5-4c4f-a6a7-e9e3b4322331\" (UID: \"41669cf5-c7b5-4c4f-a6a7-e9e3b4322331\") " Jan 26 11:34:03 crc kubenswrapper[4867]: I0126 11:34:03.162173 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/41669cf5-c7b5-4c4f-a6a7-e9e3b4322331-catalog-content\") pod \"41669cf5-c7b5-4c4f-a6a7-e9e3b4322331\" (UID: \"41669cf5-c7b5-4c4f-a6a7-e9e3b4322331\") " Jan 26 11:34:03 crc kubenswrapper[4867]: I0126 11:34:03.165171 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/41669cf5-c7b5-4c4f-a6a7-e9e3b4322331-utilities" (OuterVolumeSpecName: "utilities") pod "41669cf5-c7b5-4c4f-a6a7-e9e3b4322331" (UID: "41669cf5-c7b5-4c4f-a6a7-e9e3b4322331"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 11:34:03 crc kubenswrapper[4867]: I0126 11:34:03.167404 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/de1a4ed9-07d2-4d13-9680-23d361cfff3f-utilities" (OuterVolumeSpecName: "utilities") pod "de1a4ed9-07d2-4d13-9680-23d361cfff3f" (UID: "de1a4ed9-07d2-4d13-9680-23d361cfff3f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 11:34:03 crc kubenswrapper[4867]: I0126 11:34:03.174241 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/de1a4ed9-07d2-4d13-9680-23d361cfff3f-kube-api-access-rntn6" (OuterVolumeSpecName: "kube-api-access-rntn6") pod "de1a4ed9-07d2-4d13-9680-23d361cfff3f" (UID: "de1a4ed9-07d2-4d13-9680-23d361cfff3f"). InnerVolumeSpecName "kube-api-access-rntn6". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 11:34:03 crc kubenswrapper[4867]: I0126 11:34:03.182332 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/41669cf5-c7b5-4c4f-a6a7-e9e3b4322331-kube-api-access-fwrrf" (OuterVolumeSpecName: "kube-api-access-fwrrf") pod "41669cf5-c7b5-4c4f-a6a7-e9e3b4322331" (UID: "41669cf5-c7b5-4c4f-a6a7-e9e3b4322331"). InnerVolumeSpecName "kube-api-access-fwrrf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 11:34:03 crc kubenswrapper[4867]: I0126 11:34:03.229382 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/de1a4ed9-07d2-4d13-9680-23d361cfff3f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "de1a4ed9-07d2-4d13-9680-23d361cfff3f" (UID: "de1a4ed9-07d2-4d13-9680-23d361cfff3f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 11:34:03 crc kubenswrapper[4867]: I0126 11:34:03.241623 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/41669cf5-c7b5-4c4f-a6a7-e9e3b4322331-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "41669cf5-c7b5-4c4f-a6a7-e9e3b4322331" (UID: "41669cf5-c7b5-4c4f-a6a7-e9e3b4322331"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 11:34:03 crc kubenswrapper[4867]: I0126 11:34:03.263624 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4fa58c85-36dd-48d6-afb9-665f70796e4c-catalog-content\") pod \"4fa58c85-36dd-48d6-afb9-665f70796e4c\" (UID: \"4fa58c85-36dd-48d6-afb9-665f70796e4c\") " Jan 26 11:34:03 crc kubenswrapper[4867]: I0126 11:34:03.263850 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4fa58c85-36dd-48d6-afb9-665f70796e4c-utilities\") pod \"4fa58c85-36dd-48d6-afb9-665f70796e4c\" (UID: \"4fa58c85-36dd-48d6-afb9-665f70796e4c\") " Jan 26 11:34:03 crc kubenswrapper[4867]: I0126 11:34:03.264774 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-djw78\" (UniqueName: \"kubernetes.io/projected/4fa58c85-36dd-48d6-afb9-665f70796e4c-kube-api-access-djw78\") pod \"4fa58c85-36dd-48d6-afb9-665f70796e4c\" (UID: \"4fa58c85-36dd-48d6-afb9-665f70796e4c\") " Jan 26 11:34:03 crc kubenswrapper[4867]: I0126 11:34:03.264696 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4fa58c85-36dd-48d6-afb9-665f70796e4c-utilities" (OuterVolumeSpecName: "utilities") pod "4fa58c85-36dd-48d6-afb9-665f70796e4c" (UID: "4fa58c85-36dd-48d6-afb9-665f70796e4c"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 11:34:03 crc kubenswrapper[4867]: I0126 11:34:03.266108 4867 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/41669cf5-c7b5-4c4f-a6a7-e9e3b4322331-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 11:34:03 crc kubenswrapper[4867]: I0126 11:34:03.266350 4867 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/de1a4ed9-07d2-4d13-9680-23d361cfff3f-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 11:34:03 crc kubenswrapper[4867]: I0126 11:34:03.266442 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fwrrf\" (UniqueName: \"kubernetes.io/projected/41669cf5-c7b5-4c4f-a6a7-e9e3b4322331-kube-api-access-fwrrf\") on node \"crc\" DevicePath \"\"" Jan 26 11:34:03 crc kubenswrapper[4867]: I0126 11:34:03.266531 4867 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/41669cf5-c7b5-4c4f-a6a7-e9e3b4322331-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 11:34:03 crc kubenswrapper[4867]: I0126 11:34:03.266606 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rntn6\" (UniqueName: \"kubernetes.io/projected/de1a4ed9-07d2-4d13-9680-23d361cfff3f-kube-api-access-rntn6\") on node \"crc\" DevicePath \"\"" Jan 26 11:34:03 crc kubenswrapper[4867]: I0126 11:34:03.266680 4867 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/de1a4ed9-07d2-4d13-9680-23d361cfff3f-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 11:34:03 crc kubenswrapper[4867]: I0126 11:34:03.266760 4867 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4fa58c85-36dd-48d6-afb9-665f70796e4c-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 11:34:03 crc kubenswrapper[4867]: I0126 11:34:03.270140 
4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4fa58c85-36dd-48d6-afb9-665f70796e4c-kube-api-access-djw78" (OuterVolumeSpecName: "kube-api-access-djw78") pod "4fa58c85-36dd-48d6-afb9-665f70796e4c" (UID: "4fa58c85-36dd-48d6-afb9-665f70796e4c"). InnerVolumeSpecName "kube-api-access-djw78". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 11:34:03 crc kubenswrapper[4867]: I0126 11:34:03.294161 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4fa58c85-36dd-48d6-afb9-665f70796e4c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "4fa58c85-36dd-48d6-afb9-665f70796e4c" (UID: "4fa58c85-36dd-48d6-afb9-665f70796e4c"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 11:34:03 crc kubenswrapper[4867]: I0126 11:34:03.368107 4867 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4fa58c85-36dd-48d6-afb9-665f70796e4c-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 11:34:03 crc kubenswrapper[4867]: I0126 11:34:03.368452 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-djw78\" (UniqueName: \"kubernetes.io/projected/4fa58c85-36dd-48d6-afb9-665f70796e4c-kube-api-access-djw78\") on node \"crc\" DevicePath \"\"" Jan 26 11:34:04 crc kubenswrapper[4867]: I0126 11:34:04.002360 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-cgkn2" event={"ID":"de1a4ed9-07d2-4d13-9680-23d361cfff3f","Type":"ContainerDied","Data":"2f7899b9fad31deff243600eff391256dce0b53b223c90c7f57ed41177ee1041"} Jan 26 11:34:04 crc kubenswrapper[4867]: I0126 11:34:04.003070 4867 scope.go:117] "RemoveContainer" containerID="2df405f5503775b40979dc0b4c1d9f2d7d1481c8dca9f8d0385cb067fea2e6bb" Jan 26 11:34:04 crc kubenswrapper[4867]: I0126 11:34:04.002743 4867 util.go:48] "No ready sandbox for 
pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-cgkn2" Jan 26 11:34:04 crc kubenswrapper[4867]: I0126 11:34:04.005576 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-6f75f45d54-rsv5q" event={"ID":"bb8ed5d8-1a97-4cc9-bf29-99b29c6a1975","Type":"ContainerStarted","Data":"598bf41949c0ff41ff74f7e21dee962116ac14071f02050af31d95e1a947ddac"} Jan 26 11:34:04 crc kubenswrapper[4867]: I0126 11:34:04.005830 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ovn-operator-controller-manager-6f75f45d54-rsv5q" Jan 26 11:34:04 crc kubenswrapper[4867]: I0126 11:34:04.007658 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-c7klk" event={"ID":"10f19670-4fbf-42ee-b54c-5317af0b0c00","Type":"ContainerStarted","Data":"a59bf2eac1d3bb7ccd0ee0fb1ff32fa285689e0aba44e0d7bf58304bebb901da"} Jan 26 11:34:04 crc kubenswrapper[4867]: I0126 11:34:04.007878 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-c7klk" Jan 26 11:34:04 crc kubenswrapper[4867]: I0126 11:34:04.012179 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-lcn9l" event={"ID":"ccccb13a-d387-4515-83c6-ea24a070a12e","Type":"ContainerStarted","Data":"8352192d98fcbcbad0678780f9fdc268b78927e1eb3af629cf82144d25126d9d"} Jan 26 11:34:04 crc kubenswrapper[4867]: I0126 11:34:04.019730 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lmvxw" event={"ID":"4fa58c85-36dd-48d6-afb9-665f70796e4c","Type":"ContainerDied","Data":"64ef4d0226835139546000cc46b02c7ef284f7b3cfe1618d5f21374c9a00bab7"} Jan 26 11:34:04 crc kubenswrapper[4867]: I0126 11:34:04.019797 4867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-lmvxw" Jan 26 11:34:04 crc kubenswrapper[4867]: I0126 11:34:04.024570 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-tzb4g" event={"ID":"9da13f82-2fca-4922-8b27-b11d702897ff","Type":"ContainerStarted","Data":"63033c65bee5d39c10bca8049e47435084cc752a7dafc745041a266271b9a49b"} Jan 26 11:34:04 crc kubenswrapper[4867]: I0126 11:34:04.024664 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-8z5vj" Jan 26 11:34:04 crc kubenswrapper[4867]: I0126 11:34:04.026476 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ovn-operator-controller-manager-6f75f45d54-rsv5q" podStartSLOduration=3.936739824 podStartE2EDuration="1m9.026456079s" podCreationTimestamp="2026-01-26 11:32:55 +0000 UTC" firstStartedPulling="2026-01-26 11:32:58.071359237 +0000 UTC m=+927.769934147" lastFinishedPulling="2026-01-26 11:34:03.161075492 +0000 UTC m=+992.859650402" observedRunningTime="2026-01-26 11:34:04.02526498 +0000 UTC m=+993.723839900" watchObservedRunningTime="2026-01-26 11:34:04.026456079 +0000 UTC m=+993.725030989" Jan 26 11:34:04 crc kubenswrapper[4867]: I0126 11:34:04.027357 4867 scope.go:117] "RemoveContainer" containerID="ed8d18f7bf66ce8251e198bcf79b70408738896a7ce348ffc605eae41e53bfbd" Jan 26 11:34:04 crc kubenswrapper[4867]: I0126 11:34:04.044827 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-tzb4g" podStartSLOduration=4.0493556569999996 podStartE2EDuration="1m9.044800458s" podCreationTimestamp="2026-01-26 11:32:55 +0000 UTC" firstStartedPulling="2026-01-26 11:32:57.813244135 +0000 UTC m=+927.511819045" lastFinishedPulling="2026-01-26 11:34:02.808688936 +0000 UTC m=+992.507263846" observedRunningTime="2026-01-26 
11:34:04.039136573 +0000 UTC m=+993.737711493" watchObservedRunningTime="2026-01-26 11:34:04.044800458 +0000 UTC m=+993.743375378" Jan 26 11:34:04 crc kubenswrapper[4867]: I0126 11:34:04.074779 4867 scope.go:117] "RemoveContainer" containerID="e74513fd968875347bd298b3be56a8467e27f80b879aae028cc361e8c8ad0001" Jan 26 11:34:04 crc kubenswrapper[4867]: I0126 11:34:04.096114 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-lcn9l" podStartSLOduration=3.118144411 podStartE2EDuration="1m8.096088325s" podCreationTimestamp="2026-01-26 11:32:56 +0000 UTC" firstStartedPulling="2026-01-26 11:32:57.828869161 +0000 UTC m=+927.527444071" lastFinishedPulling="2026-01-26 11:34:02.806813075 +0000 UTC m=+992.505387985" observedRunningTime="2026-01-26 11:34:04.066308001 +0000 UTC m=+993.764882921" watchObservedRunningTime="2026-01-26 11:34:04.096088325 +0000 UTC m=+993.794663235" Jan 26 11:34:04 crc kubenswrapper[4867]: I0126 11:34:04.100811 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-cgkn2"] Jan 26 11:34:04 crc kubenswrapper[4867]: I0126 11:34:04.113614 4867 scope.go:117] "RemoveContainer" containerID="d54bc2f8b29b6f21e63569634083f508f71ea3b79fd412d53e8904bbd0cb0134" Jan 26 11:34:04 crc kubenswrapper[4867]: I0126 11:34:04.123661 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-cgkn2"] Jan 26 11:34:04 crc kubenswrapper[4867]: I0126 11:34:04.124909 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-c7klk" podStartSLOduration=3.99751331 podStartE2EDuration="1m9.124891936s" podCreationTimestamp="2026-01-26 11:32:55 +0000 UTC" firstStartedPulling="2026-01-26 11:32:58.105527103 +0000 UTC m=+927.804102013" lastFinishedPulling="2026-01-26 11:34:03.232905729 +0000 UTC m=+992.931480639" 
observedRunningTime="2026-01-26 11:34:04.118333132 +0000 UTC m=+993.816908062" watchObservedRunningTime="2026-01-26 11:34:04.124891936 +0000 UTC m=+993.823466846" Jan 26 11:34:04 crc kubenswrapper[4867]: I0126 11:34:04.138406 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-lmvxw"] Jan 26 11:34:04 crc kubenswrapper[4867]: I0126 11:34:04.141409 4867 scope.go:117] "RemoveContainer" containerID="9f023e4d3282de69bd9544f8931cb0ccce455b490370fc297e85b326eb483e90" Jan 26 11:34:04 crc kubenswrapper[4867]: I0126 11:34:04.146832 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-lmvxw"] Jan 26 11:34:04 crc kubenswrapper[4867]: I0126 11:34:04.160395 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-8z5vj"] Jan 26 11:34:04 crc kubenswrapper[4867]: I0126 11:34:04.164687 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-8z5vj"] Jan 26 11:34:04 crc kubenswrapper[4867]: I0126 11:34:04.181906 4867 scope.go:117] "RemoveContainer" containerID="d9548e3182cf9305915ebdbf7c4979441702311a226311d8780e3ebe3fa7a25f" Jan 26 11:34:04 crc kubenswrapper[4867]: I0126 11:34:04.576757 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="41669cf5-c7b5-4c4f-a6a7-e9e3b4322331" path="/var/lib/kubelet/pods/41669cf5-c7b5-4c4f-a6a7-e9e3b4322331/volumes" Jan 26 11:34:04 crc kubenswrapper[4867]: I0126 11:34:04.578347 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4fa58c85-36dd-48d6-afb9-665f70796e4c" path="/var/lib/kubelet/pods/4fa58c85-36dd-48d6-afb9-665f70796e4c/volumes" Jan 26 11:34:04 crc kubenswrapper[4867]: I0126 11:34:04.579091 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="de1a4ed9-07d2-4d13-9680-23d361cfff3f" path="/var/lib/kubelet/pods/de1a4ed9-07d2-4d13-9680-23d361cfff3f/volumes" Jan 26 11:34:05 crc 
kubenswrapper[4867]: I0126 11:34:05.039551 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-n6zwx" event={"ID":"4009a85d-3728-420e-b7db-70f8b41587ff","Type":"ContainerStarted","Data":"400c7eb42d5eff6797a411576c26bcd453f953b33469b0e590335a501bae1f67"} Jan 26 11:34:05 crc kubenswrapper[4867]: I0126 11:34:05.040164 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-n6zwx" Jan 26 11:34:05 crc kubenswrapper[4867]: I0126 11:34:05.055334 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-n6zwx" podStartSLOduration=4.056312444 podStartE2EDuration="1m10.055322039s" podCreationTimestamp="2026-01-26 11:32:55 +0000 UTC" firstStartedPulling="2026-01-26 11:32:58.076541608 +0000 UTC m=+927.775116508" lastFinishedPulling="2026-01-26 11:34:04.075551193 +0000 UTC m=+993.774126103" observedRunningTime="2026-01-26 11:34:05.053761859 +0000 UTC m=+994.752336789" watchObservedRunningTime="2026-01-26 11:34:05.055322039 +0000 UTC m=+994.753896949" Jan 26 11:34:06 crc kubenswrapper[4867]: I0126 11:34:06.125437 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-tzb4g" Jan 26 11:34:16 crc kubenswrapper[4867]: I0126 11:34:16.127649 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-tzb4g" Jan 26 11:34:16 crc kubenswrapper[4867]: I0126 11:34:16.551351 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ovn-operator-controller-manager-6f75f45d54-rsv5q" Jan 26 11:34:16 crc kubenswrapper[4867]: I0126 11:34:16.630050 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openstack-operators/test-operator-controller-manager-69797bbcbd-n6zwx" Jan 26 11:34:16 crc kubenswrapper[4867]: I0126 11:34:16.705126 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-c7klk" Jan 26 11:34:32 crc kubenswrapper[4867]: I0126 11:34:32.611714 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-pn8dc"] Jan 26 11:34:32 crc kubenswrapper[4867]: E0126 11:34:32.612662 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="41669cf5-c7b5-4c4f-a6a7-e9e3b4322331" containerName="extract-content" Jan 26 11:34:32 crc kubenswrapper[4867]: I0126 11:34:32.612678 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="41669cf5-c7b5-4c4f-a6a7-e9e3b4322331" containerName="extract-content" Jan 26 11:34:32 crc kubenswrapper[4867]: E0126 11:34:32.612691 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4fa58c85-36dd-48d6-afb9-665f70796e4c" containerName="registry-server" Jan 26 11:34:32 crc kubenswrapper[4867]: I0126 11:34:32.612697 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="4fa58c85-36dd-48d6-afb9-665f70796e4c" containerName="registry-server" Jan 26 11:34:32 crc kubenswrapper[4867]: E0126 11:34:32.612715 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="de1a4ed9-07d2-4d13-9680-23d361cfff3f" containerName="extract-content" Jan 26 11:34:32 crc kubenswrapper[4867]: I0126 11:34:32.612722 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="de1a4ed9-07d2-4d13-9680-23d361cfff3f" containerName="extract-content" Jan 26 11:34:32 crc kubenswrapper[4867]: E0126 11:34:32.612737 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="41669cf5-c7b5-4c4f-a6a7-e9e3b4322331" containerName="registry-server" Jan 26 11:34:32 crc kubenswrapper[4867]: I0126 11:34:32.612745 4867 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="41669cf5-c7b5-4c4f-a6a7-e9e3b4322331" containerName="registry-server" Jan 26 11:34:32 crc kubenswrapper[4867]: E0126 11:34:32.612755 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4fa58c85-36dd-48d6-afb9-665f70796e4c" containerName="extract-utilities" Jan 26 11:34:32 crc kubenswrapper[4867]: I0126 11:34:32.612761 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="4fa58c85-36dd-48d6-afb9-665f70796e4c" containerName="extract-utilities" Jan 26 11:34:32 crc kubenswrapper[4867]: E0126 11:34:32.612771 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="de1a4ed9-07d2-4d13-9680-23d361cfff3f" containerName="registry-server" Jan 26 11:34:32 crc kubenswrapper[4867]: I0126 11:34:32.612777 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="de1a4ed9-07d2-4d13-9680-23d361cfff3f" containerName="registry-server" Jan 26 11:34:32 crc kubenswrapper[4867]: E0126 11:34:32.612785 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="41669cf5-c7b5-4c4f-a6a7-e9e3b4322331" containerName="extract-utilities" Jan 26 11:34:32 crc kubenswrapper[4867]: I0126 11:34:32.612791 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="41669cf5-c7b5-4c4f-a6a7-e9e3b4322331" containerName="extract-utilities" Jan 26 11:34:32 crc kubenswrapper[4867]: E0126 11:34:32.612805 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4fa58c85-36dd-48d6-afb9-665f70796e4c" containerName="extract-content" Jan 26 11:34:32 crc kubenswrapper[4867]: I0126 11:34:32.612812 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="4fa58c85-36dd-48d6-afb9-665f70796e4c" containerName="extract-content" Jan 26 11:34:32 crc kubenswrapper[4867]: E0126 11:34:32.612834 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="de1a4ed9-07d2-4d13-9680-23d361cfff3f" containerName="extract-utilities" Jan 26 11:34:32 crc kubenswrapper[4867]: I0126 11:34:32.612860 4867 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="de1a4ed9-07d2-4d13-9680-23d361cfff3f" containerName="extract-utilities" Jan 26 11:34:32 crc kubenswrapper[4867]: I0126 11:34:32.612989 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="de1a4ed9-07d2-4d13-9680-23d361cfff3f" containerName="registry-server" Jan 26 11:34:32 crc kubenswrapper[4867]: I0126 11:34:32.613001 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="4fa58c85-36dd-48d6-afb9-665f70796e4c" containerName="registry-server" Jan 26 11:34:32 crc kubenswrapper[4867]: I0126 11:34:32.613011 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="41669cf5-c7b5-4c4f-a6a7-e9e3b4322331" containerName="registry-server" Jan 26 11:34:32 crc kubenswrapper[4867]: I0126 11:34:32.613865 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-pn8dc" Jan 26 11:34:32 crc kubenswrapper[4867]: I0126 11:34:32.619254 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dnsmasq-dns-dockercfg-c4gfp" Jan 26 11:34:32 crc kubenswrapper[4867]: I0126 11:34:32.619535 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"kube-root-ca.crt" Jan 26 11:34:32 crc kubenswrapper[4867]: I0126 11:34:32.619961 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns" Jan 26 11:34:32 crc kubenswrapper[4867]: I0126 11:34:32.620379 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openshift-service-ca.crt" Jan 26 11:34:32 crc kubenswrapper[4867]: I0126 11:34:32.625184 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-pn8dc"] Jan 26 11:34:32 crc kubenswrapper[4867]: I0126 11:34:32.673651 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-mg6lh"] Jan 26 11:34:32 crc kubenswrapper[4867]: I0126 11:34:32.675499 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-mg6lh" Jan 26 11:34:32 crc kubenswrapper[4867]: I0126 11:34:32.686873 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-mg6lh"] Jan 26 11:34:32 crc kubenswrapper[4867]: I0126 11:34:32.689907 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-svc" Jan 26 11:34:32 crc kubenswrapper[4867]: I0126 11:34:32.762628 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-59trw\" (UniqueName: \"kubernetes.io/projected/5f6ccb06-6dc8-4285-b7f9-f2038c528872-kube-api-access-59trw\") pod \"dnsmasq-dns-675f4bcbfc-pn8dc\" (UID: \"5f6ccb06-6dc8-4285-b7f9-f2038c528872\") " pod="openstack/dnsmasq-dns-675f4bcbfc-pn8dc" Jan 26 11:34:32 crc kubenswrapper[4867]: I0126 11:34:32.762728 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5f6ccb06-6dc8-4285-b7f9-f2038c528872-config\") pod \"dnsmasq-dns-675f4bcbfc-pn8dc\" (UID: \"5f6ccb06-6dc8-4285-b7f9-f2038c528872\") " pod="openstack/dnsmasq-dns-675f4bcbfc-pn8dc" Jan 26 11:34:32 crc kubenswrapper[4867]: I0126 11:34:32.864301 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-59trw\" (UniqueName: \"kubernetes.io/projected/5f6ccb06-6dc8-4285-b7f9-f2038c528872-kube-api-access-59trw\") pod \"dnsmasq-dns-675f4bcbfc-pn8dc\" (UID: \"5f6ccb06-6dc8-4285-b7f9-f2038c528872\") " pod="openstack/dnsmasq-dns-675f4bcbfc-pn8dc" Jan 26 11:34:32 crc kubenswrapper[4867]: I0126 11:34:32.864880 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5f6ccb06-6dc8-4285-b7f9-f2038c528872-config\") pod \"dnsmasq-dns-675f4bcbfc-pn8dc\" (UID: \"5f6ccb06-6dc8-4285-b7f9-f2038c528872\") " pod="openstack/dnsmasq-dns-675f4bcbfc-pn8dc" Jan 26 11:34:32 crc 
kubenswrapper[4867]: I0126 11:34:32.865421 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9eb2e642-fab7-48f7-84dc-544a6bf1e9d0-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-mg6lh\" (UID: \"9eb2e642-fab7-48f7-84dc-544a6bf1e9d0\") " pod="openstack/dnsmasq-dns-78dd6ddcc-mg6lh" Jan 26 11:34:32 crc kubenswrapper[4867]: I0126 11:34:32.865513 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9eb2e642-fab7-48f7-84dc-544a6bf1e9d0-config\") pod \"dnsmasq-dns-78dd6ddcc-mg6lh\" (UID: \"9eb2e642-fab7-48f7-84dc-544a6bf1e9d0\") " pod="openstack/dnsmasq-dns-78dd6ddcc-mg6lh" Jan 26 11:34:32 crc kubenswrapper[4867]: I0126 11:34:32.865629 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7hs4r\" (UniqueName: \"kubernetes.io/projected/9eb2e642-fab7-48f7-84dc-544a6bf1e9d0-kube-api-access-7hs4r\") pod \"dnsmasq-dns-78dd6ddcc-mg6lh\" (UID: \"9eb2e642-fab7-48f7-84dc-544a6bf1e9d0\") " pod="openstack/dnsmasq-dns-78dd6ddcc-mg6lh" Jan 26 11:34:32 crc kubenswrapper[4867]: I0126 11:34:32.865749 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5f6ccb06-6dc8-4285-b7f9-f2038c528872-config\") pod \"dnsmasq-dns-675f4bcbfc-pn8dc\" (UID: \"5f6ccb06-6dc8-4285-b7f9-f2038c528872\") " pod="openstack/dnsmasq-dns-675f4bcbfc-pn8dc" Jan 26 11:34:32 crc kubenswrapper[4867]: I0126 11:34:32.886669 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-59trw\" (UniqueName: \"kubernetes.io/projected/5f6ccb06-6dc8-4285-b7f9-f2038c528872-kube-api-access-59trw\") pod \"dnsmasq-dns-675f4bcbfc-pn8dc\" (UID: \"5f6ccb06-6dc8-4285-b7f9-f2038c528872\") " pod="openstack/dnsmasq-dns-675f4bcbfc-pn8dc" Jan 26 11:34:32 crc kubenswrapper[4867]: I0126 
11:34:32.937322 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-pn8dc" Jan 26 11:34:32 crc kubenswrapper[4867]: I0126 11:34:32.967663 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7hs4r\" (UniqueName: \"kubernetes.io/projected/9eb2e642-fab7-48f7-84dc-544a6bf1e9d0-kube-api-access-7hs4r\") pod \"dnsmasq-dns-78dd6ddcc-mg6lh\" (UID: \"9eb2e642-fab7-48f7-84dc-544a6bf1e9d0\") " pod="openstack/dnsmasq-dns-78dd6ddcc-mg6lh" Jan 26 11:34:32 crc kubenswrapper[4867]: I0126 11:34:32.968017 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9eb2e642-fab7-48f7-84dc-544a6bf1e9d0-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-mg6lh\" (UID: \"9eb2e642-fab7-48f7-84dc-544a6bf1e9d0\") " pod="openstack/dnsmasq-dns-78dd6ddcc-mg6lh" Jan 26 11:34:32 crc kubenswrapper[4867]: I0126 11:34:32.968113 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9eb2e642-fab7-48f7-84dc-544a6bf1e9d0-config\") pod \"dnsmasq-dns-78dd6ddcc-mg6lh\" (UID: \"9eb2e642-fab7-48f7-84dc-544a6bf1e9d0\") " pod="openstack/dnsmasq-dns-78dd6ddcc-mg6lh" Jan 26 11:34:32 crc kubenswrapper[4867]: I0126 11:34:32.969947 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9eb2e642-fab7-48f7-84dc-544a6bf1e9d0-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-mg6lh\" (UID: \"9eb2e642-fab7-48f7-84dc-544a6bf1e9d0\") " pod="openstack/dnsmasq-dns-78dd6ddcc-mg6lh" Jan 26 11:34:32 crc kubenswrapper[4867]: I0126 11:34:32.969971 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9eb2e642-fab7-48f7-84dc-544a6bf1e9d0-config\") pod \"dnsmasq-dns-78dd6ddcc-mg6lh\" (UID: \"9eb2e642-fab7-48f7-84dc-544a6bf1e9d0\") " 
pod="openstack/dnsmasq-dns-78dd6ddcc-mg6lh" Jan 26 11:34:32 crc kubenswrapper[4867]: I0126 11:34:32.989041 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7hs4r\" (UniqueName: \"kubernetes.io/projected/9eb2e642-fab7-48f7-84dc-544a6bf1e9d0-kube-api-access-7hs4r\") pod \"dnsmasq-dns-78dd6ddcc-mg6lh\" (UID: \"9eb2e642-fab7-48f7-84dc-544a6bf1e9d0\") " pod="openstack/dnsmasq-dns-78dd6ddcc-mg6lh" Jan 26 11:34:33 crc kubenswrapper[4867]: I0126 11:34:33.003106 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-mg6lh" Jan 26 11:34:33 crc kubenswrapper[4867]: I0126 11:34:33.513456 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-pn8dc"] Jan 26 11:34:33 crc kubenswrapper[4867]: I0126 11:34:33.518965 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-mg6lh"] Jan 26 11:34:34 crc kubenswrapper[4867]: I0126 11:34:34.251550 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-675f4bcbfc-pn8dc" event={"ID":"5f6ccb06-6dc8-4285-b7f9-f2038c528872","Type":"ContainerStarted","Data":"1b78a60508ff3e1f6106b790677684a2fcc473686256c3d737a43415c79793c1"} Jan 26 11:34:34 crc kubenswrapper[4867]: I0126 11:34:34.253228 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78dd6ddcc-mg6lh" event={"ID":"9eb2e642-fab7-48f7-84dc-544a6bf1e9d0","Type":"ContainerStarted","Data":"31fd3c698e4613aa93e644aed6b4a19c12a4eff71eb6bab9b5e238a7b1ef80c3"} Jan 26 11:34:35 crc kubenswrapper[4867]: I0126 11:34:35.540117 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-pn8dc"] Jan 26 11:34:35 crc kubenswrapper[4867]: I0126 11:34:35.580745 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-wg9wq"] Jan 26 11:34:35 crc kubenswrapper[4867]: I0126 11:34:35.582873 4867 util.go:30] "No sandbox for 
pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-wg9wq" Jan 26 11:34:35 crc kubenswrapper[4867]: I0126 11:34:35.592682 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-wg9wq"] Jan 26 11:34:35 crc kubenswrapper[4867]: I0126 11:34:35.717876 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e52b8a49-afec-4527-8728-f2b53c33cd94-config\") pod \"dnsmasq-dns-666b6646f7-wg9wq\" (UID: \"e52b8a49-afec-4527-8728-f2b53c33cd94\") " pod="openstack/dnsmasq-dns-666b6646f7-wg9wq" Jan 26 11:34:35 crc kubenswrapper[4867]: I0126 11:34:35.717971 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e52b8a49-afec-4527-8728-f2b53c33cd94-dns-svc\") pod \"dnsmasq-dns-666b6646f7-wg9wq\" (UID: \"e52b8a49-afec-4527-8728-f2b53c33cd94\") " pod="openstack/dnsmasq-dns-666b6646f7-wg9wq" Jan 26 11:34:35 crc kubenswrapper[4867]: I0126 11:34:35.718086 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9b5wt\" (UniqueName: \"kubernetes.io/projected/e52b8a49-afec-4527-8728-f2b53c33cd94-kube-api-access-9b5wt\") pod \"dnsmasq-dns-666b6646f7-wg9wq\" (UID: \"e52b8a49-afec-4527-8728-f2b53c33cd94\") " pod="openstack/dnsmasq-dns-666b6646f7-wg9wq" Jan 26 11:34:35 crc kubenswrapper[4867]: I0126 11:34:35.820425 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9b5wt\" (UniqueName: \"kubernetes.io/projected/e52b8a49-afec-4527-8728-f2b53c33cd94-kube-api-access-9b5wt\") pod \"dnsmasq-dns-666b6646f7-wg9wq\" (UID: \"e52b8a49-afec-4527-8728-f2b53c33cd94\") " pod="openstack/dnsmasq-dns-666b6646f7-wg9wq" Jan 26 11:34:35 crc kubenswrapper[4867]: I0126 11:34:35.820560 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"config\" (UniqueName: \"kubernetes.io/configmap/e52b8a49-afec-4527-8728-f2b53c33cd94-config\") pod \"dnsmasq-dns-666b6646f7-wg9wq\" (UID: \"e52b8a49-afec-4527-8728-f2b53c33cd94\") " pod="openstack/dnsmasq-dns-666b6646f7-wg9wq" Jan 26 11:34:35 crc kubenswrapper[4867]: I0126 11:34:35.821903 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e52b8a49-afec-4527-8728-f2b53c33cd94-dns-svc\") pod \"dnsmasq-dns-666b6646f7-wg9wq\" (UID: \"e52b8a49-afec-4527-8728-f2b53c33cd94\") " pod="openstack/dnsmasq-dns-666b6646f7-wg9wq" Jan 26 11:34:35 crc kubenswrapper[4867]: I0126 11:34:35.823084 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e52b8a49-afec-4527-8728-f2b53c33cd94-config\") pod \"dnsmasq-dns-666b6646f7-wg9wq\" (UID: \"e52b8a49-afec-4527-8728-f2b53c33cd94\") " pod="openstack/dnsmasq-dns-666b6646f7-wg9wq" Jan 26 11:34:35 crc kubenswrapper[4867]: I0126 11:34:35.820597 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e52b8a49-afec-4527-8728-f2b53c33cd94-dns-svc\") pod \"dnsmasq-dns-666b6646f7-wg9wq\" (UID: \"e52b8a49-afec-4527-8728-f2b53c33cd94\") " pod="openstack/dnsmasq-dns-666b6646f7-wg9wq" Jan 26 11:34:35 crc kubenswrapper[4867]: I0126 11:34:35.877302 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9b5wt\" (UniqueName: \"kubernetes.io/projected/e52b8a49-afec-4527-8728-f2b53c33cd94-kube-api-access-9b5wt\") pod \"dnsmasq-dns-666b6646f7-wg9wq\" (UID: \"e52b8a49-afec-4527-8728-f2b53c33cd94\") " pod="openstack/dnsmasq-dns-666b6646f7-wg9wq" Jan 26 11:34:35 crc kubenswrapper[4867]: I0126 11:34:35.922070 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-mg6lh"] Jan 26 11:34:35 crc kubenswrapper[4867]: I0126 11:34:35.947927 4867 util.go:30] "No sandbox for 
pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-wg9wq" Jan 26 11:34:35 crc kubenswrapper[4867]: I0126 11:34:35.955312 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-47hhv"] Jan 26 11:34:35 crc kubenswrapper[4867]: I0126 11:34:35.962389 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-47hhv" Jan 26 11:34:35 crc kubenswrapper[4867]: I0126 11:34:35.977322 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-47hhv"] Jan 26 11:34:36 crc kubenswrapper[4867]: I0126 11:34:36.130401 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2af057d6-8429-47ba-9433-2a3ee9ffd26c-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-47hhv\" (UID: \"2af057d6-8429-47ba-9433-2a3ee9ffd26c\") " pod="openstack/dnsmasq-dns-57d769cc4f-47hhv" Jan 26 11:34:36 crc kubenswrapper[4867]: I0126 11:34:36.130514 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2af057d6-8429-47ba-9433-2a3ee9ffd26c-config\") pod \"dnsmasq-dns-57d769cc4f-47hhv\" (UID: \"2af057d6-8429-47ba-9433-2a3ee9ffd26c\") " pod="openstack/dnsmasq-dns-57d769cc4f-47hhv" Jan 26 11:34:36 crc kubenswrapper[4867]: I0126 11:34:36.130614 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8vk7s\" (UniqueName: \"kubernetes.io/projected/2af057d6-8429-47ba-9433-2a3ee9ffd26c-kube-api-access-8vk7s\") pod \"dnsmasq-dns-57d769cc4f-47hhv\" (UID: \"2af057d6-8429-47ba-9433-2a3ee9ffd26c\") " pod="openstack/dnsmasq-dns-57d769cc4f-47hhv" Jan 26 11:34:36 crc kubenswrapper[4867]: I0126 11:34:36.232122 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/2af057d6-8429-47ba-9433-2a3ee9ffd26c-config\") pod \"dnsmasq-dns-57d769cc4f-47hhv\" (UID: \"2af057d6-8429-47ba-9433-2a3ee9ffd26c\") " pod="openstack/dnsmasq-dns-57d769cc4f-47hhv" Jan 26 11:34:36 crc kubenswrapper[4867]: I0126 11:34:36.232209 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8vk7s\" (UniqueName: \"kubernetes.io/projected/2af057d6-8429-47ba-9433-2a3ee9ffd26c-kube-api-access-8vk7s\") pod \"dnsmasq-dns-57d769cc4f-47hhv\" (UID: \"2af057d6-8429-47ba-9433-2a3ee9ffd26c\") " pod="openstack/dnsmasq-dns-57d769cc4f-47hhv" Jan 26 11:34:36 crc kubenswrapper[4867]: I0126 11:34:36.232312 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2af057d6-8429-47ba-9433-2a3ee9ffd26c-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-47hhv\" (UID: \"2af057d6-8429-47ba-9433-2a3ee9ffd26c\") " pod="openstack/dnsmasq-dns-57d769cc4f-47hhv" Jan 26 11:34:36 crc kubenswrapper[4867]: I0126 11:34:36.233460 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2af057d6-8429-47ba-9433-2a3ee9ffd26c-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-47hhv\" (UID: \"2af057d6-8429-47ba-9433-2a3ee9ffd26c\") " pod="openstack/dnsmasq-dns-57d769cc4f-47hhv" Jan 26 11:34:36 crc kubenswrapper[4867]: I0126 11:34:36.233911 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2af057d6-8429-47ba-9433-2a3ee9ffd26c-config\") pod \"dnsmasq-dns-57d769cc4f-47hhv\" (UID: \"2af057d6-8429-47ba-9433-2a3ee9ffd26c\") " pod="openstack/dnsmasq-dns-57d769cc4f-47hhv" Jan 26 11:34:36 crc kubenswrapper[4867]: I0126 11:34:36.278505 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8vk7s\" (UniqueName: \"kubernetes.io/projected/2af057d6-8429-47ba-9433-2a3ee9ffd26c-kube-api-access-8vk7s\") pod 
\"dnsmasq-dns-57d769cc4f-47hhv\" (UID: \"2af057d6-8429-47ba-9433-2a3ee9ffd26c\") " pod="openstack/dnsmasq-dns-57d769cc4f-47hhv" Jan 26 11:34:36 crc kubenswrapper[4867]: I0126 11:34:36.295325 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-47hhv" Jan 26 11:34:36 crc kubenswrapper[4867]: I0126 11:34:36.581275 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-wg9wq"] Jan 26 11:34:36 crc kubenswrapper[4867]: W0126 11:34:36.600315 4867 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode52b8a49_afec_4527_8728_f2b53c33cd94.slice/crio-9bf3f9558f3c02766a654176f501a7b0a35675ba1cde288cc7d14da0a5f62abe WatchSource:0}: Error finding container 9bf3f9558f3c02766a654176f501a7b0a35675ba1cde288cc7d14da0a5f62abe: Status 404 returned error can't find the container with id 9bf3f9558f3c02766a654176f501a7b0a35675ba1cde288cc7d14da0a5f62abe Jan 26 11:34:36 crc kubenswrapper[4867]: I0126 11:34:36.783277 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Jan 26 11:34:36 crc kubenswrapper[4867]: I0126 11:34:36.786161 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 26 11:34:36 crc kubenswrapper[4867]: I0126 11:34:36.789501 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-9lq9k" Jan 26 11:34:36 crc kubenswrapper[4867]: I0126 11:34:36.789737 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie" Jan 26 11:34:36 crc kubenswrapper[4867]: I0126 11:34:36.789880 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf" Jan 26 11:34:36 crc kubenswrapper[4867]: I0126 11:34:36.790167 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc" Jan 26 11:34:36 crc kubenswrapper[4867]: I0126 11:34:36.790239 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data" Jan 26 11:34:36 crc kubenswrapper[4867]: I0126 11:34:36.790178 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf" Jan 26 11:34:36 crc kubenswrapper[4867]: I0126 11:34:36.798957 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user" Jan 26 11:34:36 crc kubenswrapper[4867]: I0126 11:34:36.832467 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 26 11:34:36 crc kubenswrapper[4867]: I0126 11:34:36.892791 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-47hhv"] Jan 26 11:34:36 crc kubenswrapper[4867]: I0126 11:34:36.951010 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k886p\" (UniqueName: \"kubernetes.io/projected/4d2bfda4-48fc-4d87-94ae-3b53adc90a3a-kube-api-access-k886p\") pod \"rabbitmq-server-0\" (UID: \"4d2bfda4-48fc-4d87-94ae-3b53adc90a3a\") " pod="openstack/rabbitmq-server-0" Jan 26 11:34:36 crc kubenswrapper[4867]: I0126 
11:34:36.951084 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/4d2bfda4-48fc-4d87-94ae-3b53adc90a3a-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"4d2bfda4-48fc-4d87-94ae-3b53adc90a3a\") " pod="openstack/rabbitmq-server-0" Jan 26 11:34:36 crc kubenswrapper[4867]: I0126 11:34:36.951314 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/4d2bfda4-48fc-4d87-94ae-3b53adc90a3a-server-conf\") pod \"rabbitmq-server-0\" (UID: \"4d2bfda4-48fc-4d87-94ae-3b53adc90a3a\") " pod="openstack/rabbitmq-server-0" Jan 26 11:34:36 crc kubenswrapper[4867]: I0126 11:34:36.951438 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/4d2bfda4-48fc-4d87-94ae-3b53adc90a3a-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"4d2bfda4-48fc-4d87-94ae-3b53adc90a3a\") " pod="openstack/rabbitmq-server-0" Jan 26 11:34:36 crc kubenswrapper[4867]: I0126 11:34:36.951517 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"rabbitmq-server-0\" (UID: \"4d2bfda4-48fc-4d87-94ae-3b53adc90a3a\") " pod="openstack/rabbitmq-server-0" Jan 26 11:34:36 crc kubenswrapper[4867]: I0126 11:34:36.951543 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/4d2bfda4-48fc-4d87-94ae-3b53adc90a3a-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"4d2bfda4-48fc-4d87-94ae-3b53adc90a3a\") " pod="openstack/rabbitmq-server-0" Jan 26 11:34:36 crc kubenswrapper[4867]: I0126 11:34:36.951582 4867 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/4d2bfda4-48fc-4d87-94ae-3b53adc90a3a-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"4d2bfda4-48fc-4d87-94ae-3b53adc90a3a\") " pod="openstack/rabbitmq-server-0" Jan 26 11:34:36 crc kubenswrapper[4867]: I0126 11:34:36.951610 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/4d2bfda4-48fc-4d87-94ae-3b53adc90a3a-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"4d2bfda4-48fc-4d87-94ae-3b53adc90a3a\") " pod="openstack/rabbitmq-server-0" Jan 26 11:34:36 crc kubenswrapper[4867]: I0126 11:34:36.951671 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/4d2bfda4-48fc-4d87-94ae-3b53adc90a3a-pod-info\") pod \"rabbitmq-server-0\" (UID: \"4d2bfda4-48fc-4d87-94ae-3b53adc90a3a\") " pod="openstack/rabbitmq-server-0" Jan 26 11:34:36 crc kubenswrapper[4867]: I0126 11:34:36.951743 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/4d2bfda4-48fc-4d87-94ae-3b53adc90a3a-config-data\") pod \"rabbitmq-server-0\" (UID: \"4d2bfda4-48fc-4d87-94ae-3b53adc90a3a\") " pod="openstack/rabbitmq-server-0" Jan 26 11:34:36 crc kubenswrapper[4867]: I0126 11:34:36.952165 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/4d2bfda4-48fc-4d87-94ae-3b53adc90a3a-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"4d2bfda4-48fc-4d87-94ae-3b53adc90a3a\") " pod="openstack/rabbitmq-server-0" Jan 26 11:34:37 crc kubenswrapper[4867]: I0126 11:34:37.054432 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" 
(UniqueName: \"kubernetes.io/projected/4d2bfda4-48fc-4d87-94ae-3b53adc90a3a-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"4d2bfda4-48fc-4d87-94ae-3b53adc90a3a\") " pod="openstack/rabbitmq-server-0" Jan 26 11:34:37 crc kubenswrapper[4867]: I0126 11:34:37.054504 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"rabbitmq-server-0\" (UID: \"4d2bfda4-48fc-4d87-94ae-3b53adc90a3a\") " pod="openstack/rabbitmq-server-0" Jan 26 11:34:37 crc kubenswrapper[4867]: I0126 11:34:37.054539 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/4d2bfda4-48fc-4d87-94ae-3b53adc90a3a-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"4d2bfda4-48fc-4d87-94ae-3b53adc90a3a\") " pod="openstack/rabbitmq-server-0" Jan 26 11:34:37 crc kubenswrapper[4867]: I0126 11:34:37.054573 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/4d2bfda4-48fc-4d87-94ae-3b53adc90a3a-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"4d2bfda4-48fc-4d87-94ae-3b53adc90a3a\") " pod="openstack/rabbitmq-server-0" Jan 26 11:34:37 crc kubenswrapper[4867]: I0126 11:34:37.054599 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/4d2bfda4-48fc-4d87-94ae-3b53adc90a3a-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"4d2bfda4-48fc-4d87-94ae-3b53adc90a3a\") " pod="openstack/rabbitmq-server-0" Jan 26 11:34:37 crc kubenswrapper[4867]: I0126 11:34:37.054655 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/4d2bfda4-48fc-4d87-94ae-3b53adc90a3a-pod-info\") pod \"rabbitmq-server-0\" (UID: \"4d2bfda4-48fc-4d87-94ae-3b53adc90a3a\") " 
pod="openstack/rabbitmq-server-0" Jan 26 11:34:37 crc kubenswrapper[4867]: I0126 11:34:37.054707 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/4d2bfda4-48fc-4d87-94ae-3b53adc90a3a-config-data\") pod \"rabbitmq-server-0\" (UID: \"4d2bfda4-48fc-4d87-94ae-3b53adc90a3a\") " pod="openstack/rabbitmq-server-0" Jan 26 11:34:37 crc kubenswrapper[4867]: I0126 11:34:37.054745 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/4d2bfda4-48fc-4d87-94ae-3b53adc90a3a-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"4d2bfda4-48fc-4d87-94ae-3b53adc90a3a\") " pod="openstack/rabbitmq-server-0" Jan 26 11:34:37 crc kubenswrapper[4867]: I0126 11:34:37.054796 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k886p\" (UniqueName: \"kubernetes.io/projected/4d2bfda4-48fc-4d87-94ae-3b53adc90a3a-kube-api-access-k886p\") pod \"rabbitmq-server-0\" (UID: \"4d2bfda4-48fc-4d87-94ae-3b53adc90a3a\") " pod="openstack/rabbitmq-server-0" Jan 26 11:34:37 crc kubenswrapper[4867]: I0126 11:34:37.054827 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/4d2bfda4-48fc-4d87-94ae-3b53adc90a3a-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"4d2bfda4-48fc-4d87-94ae-3b53adc90a3a\") " pod="openstack/rabbitmq-server-0" Jan 26 11:34:37 crc kubenswrapper[4867]: I0126 11:34:37.054855 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/4d2bfda4-48fc-4d87-94ae-3b53adc90a3a-server-conf\") pod \"rabbitmq-server-0\" (UID: \"4d2bfda4-48fc-4d87-94ae-3b53adc90a3a\") " pod="openstack/rabbitmq-server-0" Jan 26 11:34:37 crc kubenswrapper[4867]: I0126 11:34:37.055099 4867 operation_generator.go:580] "MountVolume.MountDevice 
succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"rabbitmq-server-0\" (UID: \"4d2bfda4-48fc-4d87-94ae-3b53adc90a3a\") device mount path \"/mnt/openstack/pv10\"" pod="openstack/rabbitmq-server-0" Jan 26 11:34:37 crc kubenswrapper[4867]: I0126 11:34:37.055277 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/4d2bfda4-48fc-4d87-94ae-3b53adc90a3a-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"4d2bfda4-48fc-4d87-94ae-3b53adc90a3a\") " pod="openstack/rabbitmq-server-0" Jan 26 11:34:37 crc kubenswrapper[4867]: I0126 11:34:37.056015 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/4d2bfda4-48fc-4d87-94ae-3b53adc90a3a-config-data\") pod \"rabbitmq-server-0\" (UID: \"4d2bfda4-48fc-4d87-94ae-3b53adc90a3a\") " pod="openstack/rabbitmq-server-0" Jan 26 11:34:37 crc kubenswrapper[4867]: I0126 11:34:37.056850 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/4d2bfda4-48fc-4d87-94ae-3b53adc90a3a-server-conf\") pod \"rabbitmq-server-0\" (UID: \"4d2bfda4-48fc-4d87-94ae-3b53adc90a3a\") " pod="openstack/rabbitmq-server-0" Jan 26 11:34:37 crc kubenswrapper[4867]: I0126 11:34:37.057157 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/4d2bfda4-48fc-4d87-94ae-3b53adc90a3a-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"4d2bfda4-48fc-4d87-94ae-3b53adc90a3a\") " pod="openstack/rabbitmq-server-0" Jan 26 11:34:37 crc kubenswrapper[4867]: I0126 11:34:37.057189 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/4d2bfda4-48fc-4d87-94ae-3b53adc90a3a-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: 
\"4d2bfda4-48fc-4d87-94ae-3b53adc90a3a\") " pod="openstack/rabbitmq-server-0" Jan 26 11:34:37 crc kubenswrapper[4867]: I0126 11:34:37.063059 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/4d2bfda4-48fc-4d87-94ae-3b53adc90a3a-pod-info\") pod \"rabbitmq-server-0\" (UID: \"4d2bfda4-48fc-4d87-94ae-3b53adc90a3a\") " pod="openstack/rabbitmq-server-0" Jan 26 11:34:37 crc kubenswrapper[4867]: I0126 11:34:37.063698 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/4d2bfda4-48fc-4d87-94ae-3b53adc90a3a-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"4d2bfda4-48fc-4d87-94ae-3b53adc90a3a\") " pod="openstack/rabbitmq-server-0" Jan 26 11:34:37 crc kubenswrapper[4867]: I0126 11:34:37.063761 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/4d2bfda4-48fc-4d87-94ae-3b53adc90a3a-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"4d2bfda4-48fc-4d87-94ae-3b53adc90a3a\") " pod="openstack/rabbitmq-server-0" Jan 26 11:34:37 crc kubenswrapper[4867]: I0126 11:34:37.077993 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/4d2bfda4-48fc-4d87-94ae-3b53adc90a3a-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"4d2bfda4-48fc-4d87-94ae-3b53adc90a3a\") " pod="openstack/rabbitmq-server-0" Jan 26 11:34:37 crc kubenswrapper[4867]: I0126 11:34:37.081776 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k886p\" (UniqueName: \"kubernetes.io/projected/4d2bfda4-48fc-4d87-94ae-3b53adc90a3a-kube-api-access-k886p\") pod \"rabbitmq-server-0\" (UID: \"4d2bfda4-48fc-4d87-94ae-3b53adc90a3a\") " pod="openstack/rabbitmq-server-0" Jan 26 11:34:37 crc kubenswrapper[4867]: I0126 11:34:37.083288 4867 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"rabbitmq-server-0\" (UID: \"4d2bfda4-48fc-4d87-94ae-3b53adc90a3a\") " pod="openstack/rabbitmq-server-0" Jan 26 11:34:37 crc kubenswrapper[4867]: I0126 11:34:37.116882 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 26 11:34:37 crc kubenswrapper[4867]: I0126 11:34:37.120114 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 26 11:34:37 crc kubenswrapper[4867]: I0126 11:34:37.125148 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-wbvp9" Jan 26 11:34:37 crc kubenswrapper[4867]: I0126 11:34:37.125416 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Jan 26 11:34:37 crc kubenswrapper[4867]: I0126 11:34:37.125553 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Jan 26 11:34:37 crc kubenswrapper[4867]: I0126 11:34:37.125685 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data" Jan 26 11:34:37 crc kubenswrapper[4867]: I0126 11:34:37.125813 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Jan 26 11:34:37 crc kubenswrapper[4867]: I0126 11:34:37.126897 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Jan 26 11:34:37 crc kubenswrapper[4867]: I0126 11:34:37.127309 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc" Jan 26 11:34:37 crc kubenswrapper[4867]: I0126 11:34:37.139185 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 26 11:34:37 crc kubenswrapper[4867]: I0126 11:34:37.141561 4867 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 26 11:34:37 crc kubenswrapper[4867]: I0126 11:34:37.258670 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/2e582495-d650-404c-9a13-d28ea98ecbc5-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"2e582495-d650-404c-9a13-d28ea98ecbc5\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 11:34:37 crc kubenswrapper[4867]: I0126 11:34:37.259081 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/2e582495-d650-404c-9a13-d28ea98ecbc5-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"2e582495-d650-404c-9a13-d28ea98ecbc5\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 11:34:37 crc kubenswrapper[4867]: I0126 11:34:37.259113 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/2e582495-d650-404c-9a13-d28ea98ecbc5-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"2e582495-d650-404c-9a13-d28ea98ecbc5\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 11:34:37 crc kubenswrapper[4867]: I0126 11:34:37.259154 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/2e582495-d650-404c-9a13-d28ea98ecbc5-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"2e582495-d650-404c-9a13-d28ea98ecbc5\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 11:34:37 crc kubenswrapper[4867]: I0126 11:34:37.259266 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/2e582495-d650-404c-9a13-d28ea98ecbc5-rabbitmq-tls\") pod 
\"rabbitmq-cell1-server-0\" (UID: \"2e582495-d650-404c-9a13-d28ea98ecbc5\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 11:34:37 crc kubenswrapper[4867]: I0126 11:34:37.259568 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/2e582495-d650-404c-9a13-d28ea98ecbc5-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"2e582495-d650-404c-9a13-d28ea98ecbc5\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 11:34:37 crc kubenswrapper[4867]: I0126 11:34:37.259695 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/2e582495-d650-404c-9a13-d28ea98ecbc5-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"2e582495-d650-404c-9a13-d28ea98ecbc5\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 11:34:37 crc kubenswrapper[4867]: I0126 11:34:37.259736 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"2e582495-d650-404c-9a13-d28ea98ecbc5\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 11:34:37 crc kubenswrapper[4867]: I0126 11:34:37.259899 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/2e582495-d650-404c-9a13-d28ea98ecbc5-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"2e582495-d650-404c-9a13-d28ea98ecbc5\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 11:34:37 crc kubenswrapper[4867]: I0126 11:34:37.260003 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/2e582495-d650-404c-9a13-d28ea98ecbc5-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: 
\"2e582495-d650-404c-9a13-d28ea98ecbc5\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 11:34:37 crc kubenswrapper[4867]: I0126 11:34:37.262804 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s5m6m\" (UniqueName: \"kubernetes.io/projected/2e582495-d650-404c-9a13-d28ea98ecbc5-kube-api-access-s5m6m\") pod \"rabbitmq-cell1-server-0\" (UID: \"2e582495-d650-404c-9a13-d28ea98ecbc5\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 11:34:37 crc kubenswrapper[4867]: I0126 11:34:37.323347 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-47hhv" event={"ID":"2af057d6-8429-47ba-9433-2a3ee9ffd26c","Type":"ContainerStarted","Data":"cdf9c48c41fa88fb83a9563e34a7d19cfa69d5e015340934cff1cc66808dce02"} Jan 26 11:34:37 crc kubenswrapper[4867]: I0126 11:34:37.326276 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-wg9wq" event={"ID":"e52b8a49-afec-4527-8728-f2b53c33cd94","Type":"ContainerStarted","Data":"9bf3f9558f3c02766a654176f501a7b0a35675ba1cde288cc7d14da0a5f62abe"} Jan 26 11:34:37 crc kubenswrapper[4867]: I0126 11:34:37.364505 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/2e582495-d650-404c-9a13-d28ea98ecbc5-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"2e582495-d650-404c-9a13-d28ea98ecbc5\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 11:34:37 crc kubenswrapper[4867]: I0126 11:34:37.364577 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s5m6m\" (UniqueName: \"kubernetes.io/projected/2e582495-d650-404c-9a13-d28ea98ecbc5-kube-api-access-s5m6m\") pod \"rabbitmq-cell1-server-0\" (UID: \"2e582495-d650-404c-9a13-d28ea98ecbc5\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 11:34:37 crc kubenswrapper[4867]: I0126 11:34:37.364623 4867 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/2e582495-d650-404c-9a13-d28ea98ecbc5-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"2e582495-d650-404c-9a13-d28ea98ecbc5\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 11:34:37 crc kubenswrapper[4867]: I0126 11:34:37.364660 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/2e582495-d650-404c-9a13-d28ea98ecbc5-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"2e582495-d650-404c-9a13-d28ea98ecbc5\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 11:34:37 crc kubenswrapper[4867]: I0126 11:34:37.364685 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/2e582495-d650-404c-9a13-d28ea98ecbc5-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"2e582495-d650-404c-9a13-d28ea98ecbc5\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 11:34:37 crc kubenswrapper[4867]: I0126 11:34:37.364716 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/2e582495-d650-404c-9a13-d28ea98ecbc5-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"2e582495-d650-404c-9a13-d28ea98ecbc5\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 11:34:37 crc kubenswrapper[4867]: I0126 11:34:37.364747 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/2e582495-d650-404c-9a13-d28ea98ecbc5-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"2e582495-d650-404c-9a13-d28ea98ecbc5\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 11:34:37 crc kubenswrapper[4867]: I0126 11:34:37.364798 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" 
(UniqueName: \"kubernetes.io/configmap/2e582495-d650-404c-9a13-d28ea98ecbc5-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"2e582495-d650-404c-9a13-d28ea98ecbc5\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 11:34:37 crc kubenswrapper[4867]: I0126 11:34:37.364831 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/2e582495-d650-404c-9a13-d28ea98ecbc5-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"2e582495-d650-404c-9a13-d28ea98ecbc5\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 11:34:37 crc kubenswrapper[4867]: I0126 11:34:37.364857 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"2e582495-d650-404c-9a13-d28ea98ecbc5\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 11:34:37 crc kubenswrapper[4867]: I0126 11:34:37.364887 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/2e582495-d650-404c-9a13-d28ea98ecbc5-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"2e582495-d650-404c-9a13-d28ea98ecbc5\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 11:34:37 crc kubenswrapper[4867]: I0126 11:34:37.365002 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/2e582495-d650-404c-9a13-d28ea98ecbc5-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"2e582495-d650-404c-9a13-d28ea98ecbc5\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 11:34:37 crc kubenswrapper[4867]: I0126 11:34:37.366915 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/2e582495-d650-404c-9a13-d28ea98ecbc5-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: 
\"2e582495-d650-404c-9a13-d28ea98ecbc5\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 11:34:37 crc kubenswrapper[4867]: I0126 11:34:37.367077 4867 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"2e582495-d650-404c-9a13-d28ea98ecbc5\") device mount path \"/mnt/openstack/pv02\"" pod="openstack/rabbitmq-cell1-server-0" Jan 26 11:34:37 crc kubenswrapper[4867]: I0126 11:34:37.367998 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/2e582495-d650-404c-9a13-d28ea98ecbc5-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"2e582495-d650-404c-9a13-d28ea98ecbc5\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 11:34:37 crc kubenswrapper[4867]: I0126 11:34:37.371641 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/2e582495-d650-404c-9a13-d28ea98ecbc5-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"2e582495-d650-404c-9a13-d28ea98ecbc5\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 11:34:37 crc kubenswrapper[4867]: I0126 11:34:37.371840 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/2e582495-d650-404c-9a13-d28ea98ecbc5-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"2e582495-d650-404c-9a13-d28ea98ecbc5\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 11:34:37 crc kubenswrapper[4867]: I0126 11:34:37.378086 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/2e582495-d650-404c-9a13-d28ea98ecbc5-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"2e582495-d650-404c-9a13-d28ea98ecbc5\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 11:34:37 crc 
kubenswrapper[4867]: I0126 11:34:37.393490 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/2e582495-d650-404c-9a13-d28ea98ecbc5-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"2e582495-d650-404c-9a13-d28ea98ecbc5\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 11:34:37 crc kubenswrapper[4867]: I0126 11:34:37.398745 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/2e582495-d650-404c-9a13-d28ea98ecbc5-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"2e582495-d650-404c-9a13-d28ea98ecbc5\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 11:34:37 crc kubenswrapper[4867]: I0126 11:34:37.401518 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s5m6m\" (UniqueName: \"kubernetes.io/projected/2e582495-d650-404c-9a13-d28ea98ecbc5-kube-api-access-s5m6m\") pod \"rabbitmq-cell1-server-0\" (UID: \"2e582495-d650-404c-9a13-d28ea98ecbc5\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 11:34:37 crc kubenswrapper[4867]: I0126 11:34:37.402553 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/2e582495-d650-404c-9a13-d28ea98ecbc5-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"2e582495-d650-404c-9a13-d28ea98ecbc5\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 11:34:37 crc kubenswrapper[4867]: I0126 11:34:37.424616 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"2e582495-d650-404c-9a13-d28ea98ecbc5\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 11:34:37 crc kubenswrapper[4867]: I0126 11:34:37.516880 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 26 11:34:37 crc kubenswrapper[4867]: I0126 11:34:37.675758 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 26 11:34:38 crc kubenswrapper[4867]: I0126 11:34:38.039197 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-galera-0"] Jan 26 11:34:38 crc kubenswrapper[4867]: I0126 11:34:38.041151 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-galera-0" Jan 26 11:34:38 crc kubenswrapper[4867]: I0126 11:34:38.058299 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Jan 26 11:34:38 crc kubenswrapper[4867]: I0126 11:34:38.062540 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-dockercfg-jh2t4" Jan 26 11:34:38 crc kubenswrapper[4867]: I0126 11:34:38.063755 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-svc" Jan 26 11:34:38 crc kubenswrapper[4867]: I0126 11:34:38.065452 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config-data" Jan 26 11:34:38 crc kubenswrapper[4867]: I0126 11:34:38.065788 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-scripts" Jan 26 11:34:38 crc kubenswrapper[4867]: I0126 11:34:38.069528 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"combined-ca-bundle" Jan 26 11:34:38 crc kubenswrapper[4867]: I0126 11:34:38.076900 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9305cd67-bbb5-45e9-ab35-6a34a717dff8-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"9305cd67-bbb5-45e9-ab35-6a34a717dff8\") " pod="openstack/openstack-galera-0" Jan 26 11:34:38 crc kubenswrapper[4867]: I0126 11:34:38.077007 
4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"openstack-galera-0\" (UID: \"9305cd67-bbb5-45e9-ab35-6a34a717dff8\") " pod="openstack/openstack-galera-0" Jan 26 11:34:38 crc kubenswrapper[4867]: I0126 11:34:38.102383 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/9305cd67-bbb5-45e9-ab35-6a34a717dff8-kolla-config\") pod \"openstack-galera-0\" (UID: \"9305cd67-bbb5-45e9-ab35-6a34a717dff8\") " pod="openstack/openstack-galera-0" Jan 26 11:34:38 crc kubenswrapper[4867]: I0126 11:34:38.102518 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/9305cd67-bbb5-45e9-ab35-6a34a717dff8-config-data-default\") pod \"openstack-galera-0\" (UID: \"9305cd67-bbb5-45e9-ab35-6a34a717dff8\") " pod="openstack/openstack-galera-0" Jan 26 11:34:38 crc kubenswrapper[4867]: I0126 11:34:38.102545 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/9305cd67-bbb5-45e9-ab35-6a34a717dff8-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"9305cd67-bbb5-45e9-ab35-6a34a717dff8\") " pod="openstack/openstack-galera-0" Jan 26 11:34:38 crc kubenswrapper[4867]: I0126 11:34:38.102774 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jtxgc\" (UniqueName: \"kubernetes.io/projected/9305cd67-bbb5-45e9-ab35-6a34a717dff8-kube-api-access-jtxgc\") pod \"openstack-galera-0\" (UID: \"9305cd67-bbb5-45e9-ab35-6a34a717dff8\") " pod="openstack/openstack-galera-0" Jan 26 11:34:38 crc kubenswrapper[4867]: I0126 11:34:38.102809 4867 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/9305cd67-bbb5-45e9-ab35-6a34a717dff8-config-data-generated\") pod \"openstack-galera-0\" (UID: \"9305cd67-bbb5-45e9-ab35-6a34a717dff8\") " pod="openstack/openstack-galera-0" Jan 26 11:34:38 crc kubenswrapper[4867]: I0126 11:34:38.102962 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9305cd67-bbb5-45e9-ab35-6a34a717dff8-operator-scripts\") pod \"openstack-galera-0\" (UID: \"9305cd67-bbb5-45e9-ab35-6a34a717dff8\") " pod="openstack/openstack-galera-0" Jan 26 11:34:38 crc kubenswrapper[4867]: I0126 11:34:38.157714 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 26 11:34:38 crc kubenswrapper[4867]: I0126 11:34:38.205982 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jtxgc\" (UniqueName: \"kubernetes.io/projected/9305cd67-bbb5-45e9-ab35-6a34a717dff8-kube-api-access-jtxgc\") pod \"openstack-galera-0\" (UID: \"9305cd67-bbb5-45e9-ab35-6a34a717dff8\") " pod="openstack/openstack-galera-0" Jan 26 11:34:38 crc kubenswrapper[4867]: I0126 11:34:38.206276 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/9305cd67-bbb5-45e9-ab35-6a34a717dff8-config-data-generated\") pod \"openstack-galera-0\" (UID: \"9305cd67-bbb5-45e9-ab35-6a34a717dff8\") " pod="openstack/openstack-galera-0" Jan 26 11:34:38 crc kubenswrapper[4867]: I0126 11:34:38.206385 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9305cd67-bbb5-45e9-ab35-6a34a717dff8-operator-scripts\") pod \"openstack-galera-0\" (UID: \"9305cd67-bbb5-45e9-ab35-6a34a717dff8\") " pod="openstack/openstack-galera-0" 
Jan 26 11:34:38 crc kubenswrapper[4867]: I0126 11:34:38.206685 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9305cd67-bbb5-45e9-ab35-6a34a717dff8-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"9305cd67-bbb5-45e9-ab35-6a34a717dff8\") " pod="openstack/openstack-galera-0" Jan 26 11:34:38 crc kubenswrapper[4867]: I0126 11:34:38.206728 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"openstack-galera-0\" (UID: \"9305cd67-bbb5-45e9-ab35-6a34a717dff8\") " pod="openstack/openstack-galera-0" Jan 26 11:34:38 crc kubenswrapper[4867]: I0126 11:34:38.206755 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/9305cd67-bbb5-45e9-ab35-6a34a717dff8-kolla-config\") pod \"openstack-galera-0\" (UID: \"9305cd67-bbb5-45e9-ab35-6a34a717dff8\") " pod="openstack/openstack-galera-0" Jan 26 11:34:38 crc kubenswrapper[4867]: I0126 11:34:38.206788 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/9305cd67-bbb5-45e9-ab35-6a34a717dff8-config-data-default\") pod \"openstack-galera-0\" (UID: \"9305cd67-bbb5-45e9-ab35-6a34a717dff8\") " pod="openstack/openstack-galera-0" Jan 26 11:34:38 crc kubenswrapper[4867]: I0126 11:34:38.206808 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/9305cd67-bbb5-45e9-ab35-6a34a717dff8-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"9305cd67-bbb5-45e9-ab35-6a34a717dff8\") " pod="openstack/openstack-galera-0" Jan 26 11:34:38 crc kubenswrapper[4867]: I0126 11:34:38.211272 4867 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage04-crc\" 
(UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"openstack-galera-0\" (UID: \"9305cd67-bbb5-45e9-ab35-6a34a717dff8\") device mount path \"/mnt/openstack/pv04\"" pod="openstack/openstack-galera-0" Jan 26 11:34:38 crc kubenswrapper[4867]: I0126 11:34:38.211896 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/9305cd67-bbb5-45e9-ab35-6a34a717dff8-config-data-generated\") pod \"openstack-galera-0\" (UID: \"9305cd67-bbb5-45e9-ab35-6a34a717dff8\") " pod="openstack/openstack-galera-0" Jan 26 11:34:38 crc kubenswrapper[4867]: I0126 11:34:38.212928 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/9305cd67-bbb5-45e9-ab35-6a34a717dff8-kolla-config\") pod \"openstack-galera-0\" (UID: \"9305cd67-bbb5-45e9-ab35-6a34a717dff8\") " pod="openstack/openstack-galera-0" Jan 26 11:34:38 crc kubenswrapper[4867]: I0126 11:34:38.213714 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9305cd67-bbb5-45e9-ab35-6a34a717dff8-operator-scripts\") pod \"openstack-galera-0\" (UID: \"9305cd67-bbb5-45e9-ab35-6a34a717dff8\") " pod="openstack/openstack-galera-0" Jan 26 11:34:38 crc kubenswrapper[4867]: I0126 11:34:38.215797 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/9305cd67-bbb5-45e9-ab35-6a34a717dff8-config-data-default\") pod \"openstack-galera-0\" (UID: \"9305cd67-bbb5-45e9-ab35-6a34a717dff8\") " pod="openstack/openstack-galera-0" Jan 26 11:34:38 crc kubenswrapper[4867]: I0126 11:34:38.235958 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/9305cd67-bbb5-45e9-ab35-6a34a717dff8-galera-tls-certs\") pod \"openstack-galera-0\" (UID: 
\"9305cd67-bbb5-45e9-ab35-6a34a717dff8\") " pod="openstack/openstack-galera-0" Jan 26 11:34:38 crc kubenswrapper[4867]: I0126 11:34:38.242508 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9305cd67-bbb5-45e9-ab35-6a34a717dff8-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"9305cd67-bbb5-45e9-ab35-6a34a717dff8\") " pod="openstack/openstack-galera-0" Jan 26 11:34:38 crc kubenswrapper[4867]: I0126 11:34:38.266434 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"openstack-galera-0\" (UID: \"9305cd67-bbb5-45e9-ab35-6a34a717dff8\") " pod="openstack/openstack-galera-0" Jan 26 11:34:38 crc kubenswrapper[4867]: I0126 11:34:38.276919 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jtxgc\" (UniqueName: \"kubernetes.io/projected/9305cd67-bbb5-45e9-ab35-6a34a717dff8-kube-api-access-jtxgc\") pod \"openstack-galera-0\" (UID: \"9305cd67-bbb5-45e9-ab35-6a34a717dff8\") " pod="openstack/openstack-galera-0" Jan 26 11:34:38 crc kubenswrapper[4867]: I0126 11:34:38.346987 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"4d2bfda4-48fc-4d87-94ae-3b53adc90a3a","Type":"ContainerStarted","Data":"832e3b538427b998adfa059ea07c5b40cef09c3004e17851f91573c4f9289936"} Jan 26 11:34:38 crc kubenswrapper[4867]: I0126 11:34:38.411925 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-galera-0" Jan 26 11:34:39 crc kubenswrapper[4867]: I0126 11:34:39.354276 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-cell1-galera-0"] Jan 26 11:34:39 crc kubenswrapper[4867]: I0126 11:34:39.355734 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-cell1-galera-0" Jan 26 11:34:39 crc kubenswrapper[4867]: I0126 11:34:39.359049 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-cell1-svc" Jan 26 11:34:39 crc kubenswrapper[4867]: I0126 11:34:39.359364 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-config-data" Jan 26 11:34:39 crc kubenswrapper[4867]: I0126 11:34:39.359487 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-scripts" Jan 26 11:34:39 crc kubenswrapper[4867]: I0126 11:34:39.363306 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-cell1-dockercfg-rb7sw" Jan 26 11:34:39 crc kubenswrapper[4867]: I0126 11:34:39.363790 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Jan 26 11:34:39 crc kubenswrapper[4867]: I0126 11:34:39.429050 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/fd3b4566-15b8-4c50-bc5e-76c5a6907311-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"fd3b4566-15b8-4c50-bc5e-76c5a6907311\") " pod="openstack/openstack-cell1-galera-0" Jan 26 11:34:39 crc kubenswrapper[4867]: I0126 11:34:39.429125 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/fd3b4566-15b8-4c50-bc5e-76c5a6907311-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"fd3b4566-15b8-4c50-bc5e-76c5a6907311\") " pod="openstack/openstack-cell1-galera-0" Jan 26 11:34:39 crc kubenswrapper[4867]: I0126 11:34:39.429173 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: 
\"kubernetes.io/configmap/fd3b4566-15b8-4c50-bc5e-76c5a6907311-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"fd3b4566-15b8-4c50-bc5e-76c5a6907311\") " pod="openstack/openstack-cell1-galera-0" Jan 26 11:34:39 crc kubenswrapper[4867]: I0126 11:34:39.429235 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fd3b4566-15b8-4c50-bc5e-76c5a6907311-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"fd3b4566-15b8-4c50-bc5e-76c5a6907311\") " pod="openstack/openstack-cell1-galera-0" Jan 26 11:34:39 crc kubenswrapper[4867]: I0126 11:34:39.430769 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/fd3b4566-15b8-4c50-bc5e-76c5a6907311-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"fd3b4566-15b8-4c50-bc5e-76c5a6907311\") " pod="openstack/openstack-cell1-galera-0" Jan 26 11:34:39 crc kubenswrapper[4867]: I0126 11:34:39.430841 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fd3b4566-15b8-4c50-bc5e-76c5a6907311-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"fd3b4566-15b8-4c50-bc5e-76c5a6907311\") " pod="openstack/openstack-cell1-galera-0" Jan 26 11:34:39 crc kubenswrapper[4867]: I0126 11:34:39.431066 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"openstack-cell1-galera-0\" (UID: \"fd3b4566-15b8-4c50-bc5e-76c5a6907311\") " pod="openstack/openstack-cell1-galera-0" Jan 26 11:34:39 crc kubenswrapper[4867]: I0126 11:34:39.431155 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-269wc\" (UniqueName: 
\"kubernetes.io/projected/fd3b4566-15b8-4c50-bc5e-76c5a6907311-kube-api-access-269wc\") pod \"openstack-cell1-galera-0\" (UID: \"fd3b4566-15b8-4c50-bc5e-76c5a6907311\") " pod="openstack/openstack-cell1-galera-0" Jan 26 11:34:39 crc kubenswrapper[4867]: I0126 11:34:39.534528 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/fd3b4566-15b8-4c50-bc5e-76c5a6907311-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"fd3b4566-15b8-4c50-bc5e-76c5a6907311\") " pod="openstack/openstack-cell1-galera-0" Jan 26 11:34:39 crc kubenswrapper[4867]: I0126 11:34:39.534590 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fd3b4566-15b8-4c50-bc5e-76c5a6907311-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"fd3b4566-15b8-4c50-bc5e-76c5a6907311\") " pod="openstack/openstack-cell1-galera-0" Jan 26 11:34:39 crc kubenswrapper[4867]: I0126 11:34:39.534637 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/fd3b4566-15b8-4c50-bc5e-76c5a6907311-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"fd3b4566-15b8-4c50-bc5e-76c5a6907311\") " pod="openstack/openstack-cell1-galera-0" Jan 26 11:34:39 crc kubenswrapper[4867]: I0126 11:34:39.534691 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fd3b4566-15b8-4c50-bc5e-76c5a6907311-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"fd3b4566-15b8-4c50-bc5e-76c5a6907311\") " pod="openstack/openstack-cell1-galera-0" Jan 26 11:34:39 crc kubenswrapper[4867]: I0126 11:34:39.534750 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod 
\"openstack-cell1-galera-0\" (UID: \"fd3b4566-15b8-4c50-bc5e-76c5a6907311\") " pod="openstack/openstack-cell1-galera-0" Jan 26 11:34:39 crc kubenswrapper[4867]: I0126 11:34:39.534776 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-269wc\" (UniqueName: \"kubernetes.io/projected/fd3b4566-15b8-4c50-bc5e-76c5a6907311-kube-api-access-269wc\") pod \"openstack-cell1-galera-0\" (UID: \"fd3b4566-15b8-4c50-bc5e-76c5a6907311\") " pod="openstack/openstack-cell1-galera-0" Jan 26 11:34:39 crc kubenswrapper[4867]: I0126 11:34:39.534804 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/fd3b4566-15b8-4c50-bc5e-76c5a6907311-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"fd3b4566-15b8-4c50-bc5e-76c5a6907311\") " pod="openstack/openstack-cell1-galera-0" Jan 26 11:34:39 crc kubenswrapper[4867]: I0126 11:34:39.534832 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/fd3b4566-15b8-4c50-bc5e-76c5a6907311-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"fd3b4566-15b8-4c50-bc5e-76c5a6907311\") " pod="openstack/openstack-cell1-galera-0" Jan 26 11:34:39 crc kubenswrapper[4867]: I0126 11:34:39.536570 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/fd3b4566-15b8-4c50-bc5e-76c5a6907311-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"fd3b4566-15b8-4c50-bc5e-76c5a6907311\") " pod="openstack/openstack-cell1-galera-0" Jan 26 11:34:39 crc kubenswrapper[4867]: I0126 11:34:39.536749 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/fd3b4566-15b8-4c50-bc5e-76c5a6907311-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: 
\"fd3b4566-15b8-4c50-bc5e-76c5a6907311\") " pod="openstack/openstack-cell1-galera-0" Jan 26 11:34:39 crc kubenswrapper[4867]: I0126 11:34:39.536587 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/fd3b4566-15b8-4c50-bc5e-76c5a6907311-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"fd3b4566-15b8-4c50-bc5e-76c5a6907311\") " pod="openstack/openstack-cell1-galera-0" Jan 26 11:34:39 crc kubenswrapper[4867]: I0126 11:34:39.537243 4867 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"openstack-cell1-galera-0\" (UID: \"fd3b4566-15b8-4c50-bc5e-76c5a6907311\") device mount path \"/mnt/openstack/pv07\"" pod="openstack/openstack-cell1-galera-0" Jan 26 11:34:39 crc kubenswrapper[4867]: I0126 11:34:39.541617 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fd3b4566-15b8-4c50-bc5e-76c5a6907311-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"fd3b4566-15b8-4c50-bc5e-76c5a6907311\") " pod="openstack/openstack-cell1-galera-0" Jan 26 11:34:39 crc kubenswrapper[4867]: I0126 11:34:39.548143 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/fd3b4566-15b8-4c50-bc5e-76c5a6907311-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"fd3b4566-15b8-4c50-bc5e-76c5a6907311\") " pod="openstack/openstack-cell1-galera-0" Jan 26 11:34:39 crc kubenswrapper[4867]: I0126 11:34:39.566337 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fd3b4566-15b8-4c50-bc5e-76c5a6907311-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"fd3b4566-15b8-4c50-bc5e-76c5a6907311\") " pod="openstack/openstack-cell1-galera-0" Jan 26 11:34:39 crc 
kubenswrapper[4867]: I0126 11:34:39.568338 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-269wc\" (UniqueName: \"kubernetes.io/projected/fd3b4566-15b8-4c50-bc5e-76c5a6907311-kube-api-access-269wc\") pod \"openstack-cell1-galera-0\" (UID: \"fd3b4566-15b8-4c50-bc5e-76c5a6907311\") " pod="openstack/openstack-cell1-galera-0" Jan 26 11:34:39 crc kubenswrapper[4867]: I0126 11:34:39.571436 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"openstack-cell1-galera-0\" (UID: \"fd3b4566-15b8-4c50-bc5e-76c5a6907311\") " pod="openstack/openstack-cell1-galera-0" Jan 26 11:34:39 crc kubenswrapper[4867]: I0126 11:34:39.684517 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0" Jan 26 11:34:39 crc kubenswrapper[4867]: I0126 11:34:39.724979 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/memcached-0"] Jan 26 11:34:39 crc kubenswrapper[4867]: I0126 11:34:39.726352 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/memcached-0" Jan 26 11:34:39 crc kubenswrapper[4867]: I0126 11:34:39.728485 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-memcached-svc" Jan 26 11:34:39 crc kubenswrapper[4867]: I0126 11:34:39.728847 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"memcached-config-data" Jan 26 11:34:39 crc kubenswrapper[4867]: I0126 11:34:39.739488 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/eb361900-eda0-4cb4-8838-4267b465353b-config-data\") pod \"memcached-0\" (UID: \"eb361900-eda0-4cb4-8838-4267b465353b\") " pod="openstack/memcached-0" Jan 26 11:34:39 crc kubenswrapper[4867]: I0126 11:34:39.739567 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eb361900-eda0-4cb4-8838-4267b465353b-combined-ca-bundle\") pod \"memcached-0\" (UID: \"eb361900-eda0-4cb4-8838-4267b465353b\") " pod="openstack/memcached-0" Jan 26 11:34:39 crc kubenswrapper[4867]: I0126 11:34:39.739632 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ww87v\" (UniqueName: \"kubernetes.io/projected/eb361900-eda0-4cb4-8838-4267b465353b-kube-api-access-ww87v\") pod \"memcached-0\" (UID: \"eb361900-eda0-4cb4-8838-4267b465353b\") " pod="openstack/memcached-0" Jan 26 11:34:39 crc kubenswrapper[4867]: I0126 11:34:39.739676 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/eb361900-eda0-4cb4-8838-4267b465353b-kolla-config\") pod \"memcached-0\" (UID: \"eb361900-eda0-4cb4-8838-4267b465353b\") " pod="openstack/memcached-0" Jan 26 11:34:39 crc kubenswrapper[4867]: I0126 11:34:39.739704 4867 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/eb361900-eda0-4cb4-8838-4267b465353b-memcached-tls-certs\") pod \"memcached-0\" (UID: \"eb361900-eda0-4cb4-8838-4267b465353b\") " pod="openstack/memcached-0" Jan 26 11:34:39 crc kubenswrapper[4867]: I0126 11:34:39.740196 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"memcached-memcached-dockercfg-v82mx" Jan 26 11:34:39 crc kubenswrapper[4867]: I0126 11:34:39.814277 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Jan 26 11:34:39 crc kubenswrapper[4867]: I0126 11:34:39.842341 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/eb361900-eda0-4cb4-8838-4267b465353b-kolla-config\") pod \"memcached-0\" (UID: \"eb361900-eda0-4cb4-8838-4267b465353b\") " pod="openstack/memcached-0" Jan 26 11:34:39 crc kubenswrapper[4867]: I0126 11:34:39.842391 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/eb361900-eda0-4cb4-8838-4267b465353b-memcached-tls-certs\") pod \"memcached-0\" (UID: \"eb361900-eda0-4cb4-8838-4267b465353b\") " pod="openstack/memcached-0" Jan 26 11:34:39 crc kubenswrapper[4867]: I0126 11:34:39.842464 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/eb361900-eda0-4cb4-8838-4267b465353b-config-data\") pod \"memcached-0\" (UID: \"eb361900-eda0-4cb4-8838-4267b465353b\") " pod="openstack/memcached-0" Jan 26 11:34:39 crc kubenswrapper[4867]: I0126 11:34:39.842779 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eb361900-eda0-4cb4-8838-4267b465353b-combined-ca-bundle\") pod \"memcached-0\" (UID: 
\"eb361900-eda0-4cb4-8838-4267b465353b\") " pod="openstack/memcached-0" Jan 26 11:34:39 crc kubenswrapper[4867]: I0126 11:34:39.844081 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ww87v\" (UniqueName: \"kubernetes.io/projected/eb361900-eda0-4cb4-8838-4267b465353b-kube-api-access-ww87v\") pod \"memcached-0\" (UID: \"eb361900-eda0-4cb4-8838-4267b465353b\") " pod="openstack/memcached-0" Jan 26 11:34:39 crc kubenswrapper[4867]: I0126 11:34:39.845147 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/eb361900-eda0-4cb4-8838-4267b465353b-config-data\") pod \"memcached-0\" (UID: \"eb361900-eda0-4cb4-8838-4267b465353b\") " pod="openstack/memcached-0" Jan 26 11:34:39 crc kubenswrapper[4867]: I0126 11:34:39.848570 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/eb361900-eda0-4cb4-8838-4267b465353b-kolla-config\") pod \"memcached-0\" (UID: \"eb361900-eda0-4cb4-8838-4267b465353b\") " pod="openstack/memcached-0" Jan 26 11:34:39 crc kubenswrapper[4867]: I0126 11:34:39.864924 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/eb361900-eda0-4cb4-8838-4267b465353b-memcached-tls-certs\") pod \"memcached-0\" (UID: \"eb361900-eda0-4cb4-8838-4267b465353b\") " pod="openstack/memcached-0" Jan 26 11:34:39 crc kubenswrapper[4867]: I0126 11:34:39.873891 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ww87v\" (UniqueName: \"kubernetes.io/projected/eb361900-eda0-4cb4-8838-4267b465353b-kube-api-access-ww87v\") pod \"memcached-0\" (UID: \"eb361900-eda0-4cb4-8838-4267b465353b\") " pod="openstack/memcached-0" Jan 26 11:34:39 crc kubenswrapper[4867]: I0126 11:34:39.873920 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" 
(UniqueName: \"kubernetes.io/secret/eb361900-eda0-4cb4-8838-4267b465353b-combined-ca-bundle\") pod \"memcached-0\" (UID: \"eb361900-eda0-4cb4-8838-4267b465353b\") " pod="openstack/memcached-0" Jan 26 11:34:40 crc kubenswrapper[4867]: I0126 11:34:40.062163 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0" Jan 26 11:34:41 crc kubenswrapper[4867]: I0126 11:34:41.622445 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Jan 26 11:34:41 crc kubenswrapper[4867]: I0126 11:34:41.624009 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 26 11:34:41 crc kubenswrapper[4867]: I0126 11:34:41.627477 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-ceilometer-dockercfg-vtxmj" Jan 26 11:34:41 crc kubenswrapper[4867]: I0126 11:34:41.647437 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 26 11:34:41 crc kubenswrapper[4867]: I0126 11:34:41.679256 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tsjph\" (UniqueName: \"kubernetes.io/projected/f08d1721-01b8-4573-8446-18ae794fb9e7-kube-api-access-tsjph\") pod \"kube-state-metrics-0\" (UID: \"f08d1721-01b8-4573-8446-18ae794fb9e7\") " pod="openstack/kube-state-metrics-0" Jan 26 11:34:41 crc kubenswrapper[4867]: I0126 11:34:41.783262 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tsjph\" (UniqueName: \"kubernetes.io/projected/f08d1721-01b8-4573-8446-18ae794fb9e7-kube-api-access-tsjph\") pod \"kube-state-metrics-0\" (UID: \"f08d1721-01b8-4573-8446-18ae794fb9e7\") " pod="openstack/kube-state-metrics-0" Jan 26 11:34:41 crc kubenswrapper[4867]: I0126 11:34:41.826796 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tsjph\" (UniqueName: 
\"kubernetes.io/projected/f08d1721-01b8-4573-8446-18ae794fb9e7-kube-api-access-tsjph\") pod \"kube-state-metrics-0\" (UID: \"f08d1721-01b8-4573-8446-18ae794fb9e7\") " pod="openstack/kube-state-metrics-0" Jan 26 11:34:41 crc kubenswrapper[4867]: I0126 11:34:41.945704 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 26 11:34:45 crc kubenswrapper[4867]: I0126 11:34:45.340870 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-hbpxr"] Jan 26 11:34:45 crc kubenswrapper[4867]: I0126 11:34:45.342960 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-hbpxr" Jan 26 11:34:45 crc kubenswrapper[4867]: I0126 11:34:45.348080 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncontroller-ovncontroller-dockercfg-tfv2f" Jan 26 11:34:45 crc kubenswrapper[4867]: I0126 11:34:45.348541 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovncontroller-ovndbs" Jan 26 11:34:45 crc kubenswrapper[4867]: I0126 11:34:45.348792 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-scripts" Jan 26 11:34:45 crc kubenswrapper[4867]: I0126 11:34:45.353615 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-ovs-4f5h4"] Jan 26 11:34:45 crc kubenswrapper[4867]: I0126 11:34:45.354415 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/db65f713-855b-4ca7-b989-ebde989474ce-var-run\") pod \"ovn-controller-hbpxr\" (UID: \"db65f713-855b-4ca7-b989-ebde989474ce\") " pod="openstack/ovn-controller-hbpxr" Jan 26 11:34:45 crc kubenswrapper[4867]: I0126 11:34:45.354506 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/db65f713-855b-4ca7-b989-ebde989474ce-ovn-controller-tls-certs\") pod \"ovn-controller-hbpxr\" (UID: \"db65f713-855b-4ca7-b989-ebde989474ce\") " pod="openstack/ovn-controller-hbpxr" Jan 26 11:34:45 crc kubenswrapper[4867]: I0126 11:34:45.354568 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/db65f713-855b-4ca7-b989-ebde989474ce-scripts\") pod \"ovn-controller-hbpxr\" (UID: \"db65f713-855b-4ca7-b989-ebde989474ce\") " pod="openstack/ovn-controller-hbpxr" Jan 26 11:34:45 crc kubenswrapper[4867]: I0126 11:34:45.354600 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/db65f713-855b-4ca7-b989-ebde989474ce-combined-ca-bundle\") pod \"ovn-controller-hbpxr\" (UID: \"db65f713-855b-4ca7-b989-ebde989474ce\") " pod="openstack/ovn-controller-hbpxr" Jan 26 11:34:45 crc kubenswrapper[4867]: I0126 11:34:45.354636 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/db65f713-855b-4ca7-b989-ebde989474ce-var-run-ovn\") pod \"ovn-controller-hbpxr\" (UID: \"db65f713-855b-4ca7-b989-ebde989474ce\") " pod="openstack/ovn-controller-hbpxr" Jan 26 11:34:45 crc kubenswrapper[4867]: I0126 11:34:45.354710 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j6kwx\" (UniqueName: \"kubernetes.io/projected/db65f713-855b-4ca7-b989-ebde989474ce-kube-api-access-j6kwx\") pod \"ovn-controller-hbpxr\" (UID: \"db65f713-855b-4ca7-b989-ebde989474ce\") " pod="openstack/ovn-controller-hbpxr" Jan 26 11:34:45 crc kubenswrapper[4867]: I0126 11:34:45.354733 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: 
\"kubernetes.io/host-path/db65f713-855b-4ca7-b989-ebde989474ce-var-log-ovn\") pod \"ovn-controller-hbpxr\" (UID: \"db65f713-855b-4ca7-b989-ebde989474ce\") " pod="openstack/ovn-controller-hbpxr" Jan 26 11:34:45 crc kubenswrapper[4867]: I0126 11:34:45.355800 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-4f5h4" Jan 26 11:34:45 crc kubenswrapper[4867]: I0126 11:34:45.384450 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-hbpxr"] Jan 26 11:34:45 crc kubenswrapper[4867]: I0126 11:34:45.395919 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-4f5h4"] Jan 26 11:34:45 crc kubenswrapper[4867]: I0126 11:34:45.456459 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/db65f713-855b-4ca7-b989-ebde989474ce-var-run\") pod \"ovn-controller-hbpxr\" (UID: \"db65f713-855b-4ca7-b989-ebde989474ce\") " pod="openstack/ovn-controller-hbpxr" Jan 26 11:34:45 crc kubenswrapper[4867]: I0126 11:34:45.456540 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/db65f713-855b-4ca7-b989-ebde989474ce-ovn-controller-tls-certs\") pod \"ovn-controller-hbpxr\" (UID: \"db65f713-855b-4ca7-b989-ebde989474ce\") " pod="openstack/ovn-controller-hbpxr" Jan 26 11:34:45 crc kubenswrapper[4867]: I0126 11:34:45.456579 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/211a1bec-4387-4bbf-a034-56dd9396676d-var-log\") pod \"ovn-controller-ovs-4f5h4\" (UID: \"211a1bec-4387-4bbf-a034-56dd9396676d\") " pod="openstack/ovn-controller-ovs-4f5h4" Jan 26 11:34:45 crc kubenswrapper[4867]: I0126 11:34:45.456624 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" 
(UniqueName: \"kubernetes.io/configmap/211a1bec-4387-4bbf-a034-56dd9396676d-scripts\") pod \"ovn-controller-ovs-4f5h4\" (UID: \"211a1bec-4387-4bbf-a034-56dd9396676d\") " pod="openstack/ovn-controller-ovs-4f5h4" Jan 26 11:34:45 crc kubenswrapper[4867]: I0126 11:34:45.456647 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8bgpv\" (UniqueName: \"kubernetes.io/projected/211a1bec-4387-4bbf-a034-56dd9396676d-kube-api-access-8bgpv\") pod \"ovn-controller-ovs-4f5h4\" (UID: \"211a1bec-4387-4bbf-a034-56dd9396676d\") " pod="openstack/ovn-controller-ovs-4f5h4" Jan 26 11:34:45 crc kubenswrapper[4867]: I0126 11:34:45.456677 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/db65f713-855b-4ca7-b989-ebde989474ce-scripts\") pod \"ovn-controller-hbpxr\" (UID: \"db65f713-855b-4ca7-b989-ebde989474ce\") " pod="openstack/ovn-controller-hbpxr" Jan 26 11:34:45 crc kubenswrapper[4867]: I0126 11:34:45.456709 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/211a1bec-4387-4bbf-a034-56dd9396676d-var-lib\") pod \"ovn-controller-ovs-4f5h4\" (UID: \"211a1bec-4387-4bbf-a034-56dd9396676d\") " pod="openstack/ovn-controller-ovs-4f5h4" Jan 26 11:34:45 crc kubenswrapper[4867]: I0126 11:34:45.456737 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/db65f713-855b-4ca7-b989-ebde989474ce-combined-ca-bundle\") pod \"ovn-controller-hbpxr\" (UID: \"db65f713-855b-4ca7-b989-ebde989474ce\") " pod="openstack/ovn-controller-hbpxr" Jan 26 11:34:45 crc kubenswrapper[4867]: I0126 11:34:45.456775 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/db65f713-855b-4ca7-b989-ebde989474ce-var-run-ovn\") 
pod \"ovn-controller-hbpxr\" (UID: \"db65f713-855b-4ca7-b989-ebde989474ce\") " pod="openstack/ovn-controller-hbpxr" Jan 26 11:34:45 crc kubenswrapper[4867]: I0126 11:34:45.456809 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/211a1bec-4387-4bbf-a034-56dd9396676d-etc-ovs\") pod \"ovn-controller-ovs-4f5h4\" (UID: \"211a1bec-4387-4bbf-a034-56dd9396676d\") " pod="openstack/ovn-controller-ovs-4f5h4" Jan 26 11:34:45 crc kubenswrapper[4867]: I0126 11:34:45.456846 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j6kwx\" (UniqueName: \"kubernetes.io/projected/db65f713-855b-4ca7-b989-ebde989474ce-kube-api-access-j6kwx\") pod \"ovn-controller-hbpxr\" (UID: \"db65f713-855b-4ca7-b989-ebde989474ce\") " pod="openstack/ovn-controller-hbpxr" Jan 26 11:34:45 crc kubenswrapper[4867]: I0126 11:34:45.456879 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/db65f713-855b-4ca7-b989-ebde989474ce-var-log-ovn\") pod \"ovn-controller-hbpxr\" (UID: \"db65f713-855b-4ca7-b989-ebde989474ce\") " pod="openstack/ovn-controller-hbpxr" Jan 26 11:34:45 crc kubenswrapper[4867]: I0126 11:34:45.456914 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/211a1bec-4387-4bbf-a034-56dd9396676d-var-run\") pod \"ovn-controller-ovs-4f5h4\" (UID: \"211a1bec-4387-4bbf-a034-56dd9396676d\") " pod="openstack/ovn-controller-ovs-4f5h4" Jan 26 11:34:45 crc kubenswrapper[4867]: I0126 11:34:45.457068 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/db65f713-855b-4ca7-b989-ebde989474ce-var-run\") pod \"ovn-controller-hbpxr\" (UID: \"db65f713-855b-4ca7-b989-ebde989474ce\") " pod="openstack/ovn-controller-hbpxr" Jan 26 
11:34:45 crc kubenswrapper[4867]: I0126 11:34:45.457178 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/db65f713-855b-4ca7-b989-ebde989474ce-var-run-ovn\") pod \"ovn-controller-hbpxr\" (UID: \"db65f713-855b-4ca7-b989-ebde989474ce\") " pod="openstack/ovn-controller-hbpxr" Jan 26 11:34:45 crc kubenswrapper[4867]: I0126 11:34:45.457952 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/db65f713-855b-4ca7-b989-ebde989474ce-var-log-ovn\") pod \"ovn-controller-hbpxr\" (UID: \"db65f713-855b-4ca7-b989-ebde989474ce\") " pod="openstack/ovn-controller-hbpxr" Jan 26 11:34:45 crc kubenswrapper[4867]: I0126 11:34:45.461999 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/db65f713-855b-4ca7-b989-ebde989474ce-scripts\") pod \"ovn-controller-hbpxr\" (UID: \"db65f713-855b-4ca7-b989-ebde989474ce\") " pod="openstack/ovn-controller-hbpxr" Jan 26 11:34:45 crc kubenswrapper[4867]: I0126 11:34:45.463944 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/db65f713-855b-4ca7-b989-ebde989474ce-combined-ca-bundle\") pod \"ovn-controller-hbpxr\" (UID: \"db65f713-855b-4ca7-b989-ebde989474ce\") " pod="openstack/ovn-controller-hbpxr" Jan 26 11:34:45 crc kubenswrapper[4867]: I0126 11:34:45.467710 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/db65f713-855b-4ca7-b989-ebde989474ce-ovn-controller-tls-certs\") pod \"ovn-controller-hbpxr\" (UID: \"db65f713-855b-4ca7-b989-ebde989474ce\") " pod="openstack/ovn-controller-hbpxr" Jan 26 11:34:45 crc kubenswrapper[4867]: I0126 11:34:45.501721 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j6kwx\" (UniqueName: 
\"kubernetes.io/projected/db65f713-855b-4ca7-b989-ebde989474ce-kube-api-access-j6kwx\") pod \"ovn-controller-hbpxr\" (UID: \"db65f713-855b-4ca7-b989-ebde989474ce\") " pod="openstack/ovn-controller-hbpxr" Jan 26 11:34:45 crc kubenswrapper[4867]: I0126 11:34:45.510445 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"2e582495-d650-404c-9a13-d28ea98ecbc5","Type":"ContainerStarted","Data":"ea9b63542120a636abe1b7d6d1b0befd7465eee31a9c5478d8bfb8cc991bba19"} Jan 26 11:34:45 crc kubenswrapper[4867]: I0126 11:34:45.559351 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/211a1bec-4387-4bbf-a034-56dd9396676d-var-log\") pod \"ovn-controller-ovs-4f5h4\" (UID: \"211a1bec-4387-4bbf-a034-56dd9396676d\") " pod="openstack/ovn-controller-ovs-4f5h4" Jan 26 11:34:45 crc kubenswrapper[4867]: I0126 11:34:45.559456 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/211a1bec-4387-4bbf-a034-56dd9396676d-scripts\") pod \"ovn-controller-ovs-4f5h4\" (UID: \"211a1bec-4387-4bbf-a034-56dd9396676d\") " pod="openstack/ovn-controller-ovs-4f5h4" Jan 26 11:34:45 crc kubenswrapper[4867]: I0126 11:34:45.559480 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8bgpv\" (UniqueName: \"kubernetes.io/projected/211a1bec-4387-4bbf-a034-56dd9396676d-kube-api-access-8bgpv\") pod \"ovn-controller-ovs-4f5h4\" (UID: \"211a1bec-4387-4bbf-a034-56dd9396676d\") " pod="openstack/ovn-controller-ovs-4f5h4" Jan 26 11:34:45 crc kubenswrapper[4867]: I0126 11:34:45.559510 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/211a1bec-4387-4bbf-a034-56dd9396676d-var-lib\") pod \"ovn-controller-ovs-4f5h4\" (UID: \"211a1bec-4387-4bbf-a034-56dd9396676d\") " 
pod="openstack/ovn-controller-ovs-4f5h4" Jan 26 11:34:45 crc kubenswrapper[4867]: I0126 11:34:45.559552 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/211a1bec-4387-4bbf-a034-56dd9396676d-etc-ovs\") pod \"ovn-controller-ovs-4f5h4\" (UID: \"211a1bec-4387-4bbf-a034-56dd9396676d\") " pod="openstack/ovn-controller-ovs-4f5h4" Jan 26 11:34:45 crc kubenswrapper[4867]: I0126 11:34:45.559597 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/211a1bec-4387-4bbf-a034-56dd9396676d-var-run\") pod \"ovn-controller-ovs-4f5h4\" (UID: \"211a1bec-4387-4bbf-a034-56dd9396676d\") " pod="openstack/ovn-controller-ovs-4f5h4" Jan 26 11:34:45 crc kubenswrapper[4867]: I0126 11:34:45.559691 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/211a1bec-4387-4bbf-a034-56dd9396676d-var-log\") pod \"ovn-controller-ovs-4f5h4\" (UID: \"211a1bec-4387-4bbf-a034-56dd9396676d\") " pod="openstack/ovn-controller-ovs-4f5h4" Jan 26 11:34:45 crc kubenswrapper[4867]: I0126 11:34:45.559752 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/211a1bec-4387-4bbf-a034-56dd9396676d-var-run\") pod \"ovn-controller-ovs-4f5h4\" (UID: \"211a1bec-4387-4bbf-a034-56dd9396676d\") " pod="openstack/ovn-controller-ovs-4f5h4" Jan 26 11:34:45 crc kubenswrapper[4867]: I0126 11:34:45.559873 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/211a1bec-4387-4bbf-a034-56dd9396676d-var-lib\") pod \"ovn-controller-ovs-4f5h4\" (UID: \"211a1bec-4387-4bbf-a034-56dd9396676d\") " pod="openstack/ovn-controller-ovs-4f5h4" Jan 26 11:34:45 crc kubenswrapper[4867]: I0126 11:34:45.559963 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ovs\" 
(UniqueName: \"kubernetes.io/host-path/211a1bec-4387-4bbf-a034-56dd9396676d-etc-ovs\") pod \"ovn-controller-ovs-4f5h4\" (UID: \"211a1bec-4387-4bbf-a034-56dd9396676d\") " pod="openstack/ovn-controller-ovs-4f5h4" Jan 26 11:34:45 crc kubenswrapper[4867]: I0126 11:34:45.564860 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/211a1bec-4387-4bbf-a034-56dd9396676d-scripts\") pod \"ovn-controller-ovs-4f5h4\" (UID: \"211a1bec-4387-4bbf-a034-56dd9396676d\") " pod="openstack/ovn-controller-ovs-4f5h4" Jan 26 11:34:45 crc kubenswrapper[4867]: I0126 11:34:45.602984 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8bgpv\" (UniqueName: \"kubernetes.io/projected/211a1bec-4387-4bbf-a034-56dd9396676d-kube-api-access-8bgpv\") pod \"ovn-controller-ovs-4f5h4\" (UID: \"211a1bec-4387-4bbf-a034-56dd9396676d\") " pod="openstack/ovn-controller-ovs-4f5h4" Jan 26 11:34:45 crc kubenswrapper[4867]: I0126 11:34:45.685517 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-hbpxr" Jan 26 11:34:45 crc kubenswrapper[4867]: I0126 11:34:45.693106 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-4f5h4" Jan 26 11:34:46 crc kubenswrapper[4867]: I0126 11:34:46.221311 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-nb-0"] Jan 26 11:34:46 crc kubenswrapper[4867]: I0126 11:34:46.224134 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-nb-0" Jan 26 11:34:46 crc kubenswrapper[4867]: I0126 11:34:46.229100 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-nb-dockercfg-cqldw" Jan 26 11:34:46 crc kubenswrapper[4867]: I0126 11:34:46.229372 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-scripts" Jan 26 11:34:46 crc kubenswrapper[4867]: I0126 11:34:46.229739 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-nb-ovndbs" Jan 26 11:34:46 crc kubenswrapper[4867]: I0126 11:34:46.230185 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovn-metrics" Jan 26 11:34:46 crc kubenswrapper[4867]: I0126 11:34:46.235944 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-config" Jan 26 11:34:46 crc kubenswrapper[4867]: I0126 11:34:46.248036 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Jan 26 11:34:46 crc kubenswrapper[4867]: I0126 11:34:46.316944 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"ovsdbserver-nb-0\" (UID: \"28f25dc5-093b-4b0a-b1fa-290241e9bccc\") " pod="openstack/ovsdbserver-nb-0" Jan 26 11:34:46 crc kubenswrapper[4867]: I0126 11:34:46.316995 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/28f25dc5-093b-4b0a-b1fa-290241e9bccc-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"28f25dc5-093b-4b0a-b1fa-290241e9bccc\") " pod="openstack/ovsdbserver-nb-0" Jan 26 11:34:46 crc kubenswrapper[4867]: I0126 11:34:46.317034 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/28f25dc5-093b-4b0a-b1fa-290241e9bccc-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"28f25dc5-093b-4b0a-b1fa-290241e9bccc\") " pod="openstack/ovsdbserver-nb-0" Jan 26 11:34:46 crc kubenswrapper[4867]: I0126 11:34:46.317077 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/28f25dc5-093b-4b0a-b1fa-290241e9bccc-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"28f25dc5-093b-4b0a-b1fa-290241e9bccc\") " pod="openstack/ovsdbserver-nb-0" Jan 26 11:34:46 crc kubenswrapper[4867]: I0126 11:34:46.317106 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/28f25dc5-093b-4b0a-b1fa-290241e9bccc-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"28f25dc5-093b-4b0a-b1fa-290241e9bccc\") " pod="openstack/ovsdbserver-nb-0" Jan 26 11:34:46 crc kubenswrapper[4867]: I0126 11:34:46.317138 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/28f25dc5-093b-4b0a-b1fa-290241e9bccc-config\") pod \"ovsdbserver-nb-0\" (UID: \"28f25dc5-093b-4b0a-b1fa-290241e9bccc\") " pod="openstack/ovsdbserver-nb-0" Jan 26 11:34:46 crc kubenswrapper[4867]: I0126 11:34:46.317161 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/28f25dc5-093b-4b0a-b1fa-290241e9bccc-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"28f25dc5-093b-4b0a-b1fa-290241e9bccc\") " pod="openstack/ovsdbserver-nb-0" Jan 26 11:34:46 crc kubenswrapper[4867]: I0126 11:34:46.317180 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p99nc\" (UniqueName: 
\"kubernetes.io/projected/28f25dc5-093b-4b0a-b1fa-290241e9bccc-kube-api-access-p99nc\") pod \"ovsdbserver-nb-0\" (UID: \"28f25dc5-093b-4b0a-b1fa-290241e9bccc\") " pod="openstack/ovsdbserver-nb-0" Jan 26 11:34:46 crc kubenswrapper[4867]: I0126 11:34:46.418794 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/28f25dc5-093b-4b0a-b1fa-290241e9bccc-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"28f25dc5-093b-4b0a-b1fa-290241e9bccc\") " pod="openstack/ovsdbserver-nb-0" Jan 26 11:34:46 crc kubenswrapper[4867]: I0126 11:34:46.418891 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/28f25dc5-093b-4b0a-b1fa-290241e9bccc-config\") pod \"ovsdbserver-nb-0\" (UID: \"28f25dc5-093b-4b0a-b1fa-290241e9bccc\") " pod="openstack/ovsdbserver-nb-0" Jan 26 11:34:46 crc kubenswrapper[4867]: I0126 11:34:46.418932 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/28f25dc5-093b-4b0a-b1fa-290241e9bccc-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"28f25dc5-093b-4b0a-b1fa-290241e9bccc\") " pod="openstack/ovsdbserver-nb-0" Jan 26 11:34:46 crc kubenswrapper[4867]: I0126 11:34:46.418962 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p99nc\" (UniqueName: \"kubernetes.io/projected/28f25dc5-093b-4b0a-b1fa-290241e9bccc-kube-api-access-p99nc\") pod \"ovsdbserver-nb-0\" (UID: \"28f25dc5-093b-4b0a-b1fa-290241e9bccc\") " pod="openstack/ovsdbserver-nb-0" Jan 26 11:34:46 crc kubenswrapper[4867]: I0126 11:34:46.419012 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"ovsdbserver-nb-0\" (UID: \"28f25dc5-093b-4b0a-b1fa-290241e9bccc\") " 
pod="openstack/ovsdbserver-nb-0" Jan 26 11:34:46 crc kubenswrapper[4867]: I0126 11:34:46.419033 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/28f25dc5-093b-4b0a-b1fa-290241e9bccc-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"28f25dc5-093b-4b0a-b1fa-290241e9bccc\") " pod="openstack/ovsdbserver-nb-0" Jan 26 11:34:46 crc kubenswrapper[4867]: I0126 11:34:46.419071 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/28f25dc5-093b-4b0a-b1fa-290241e9bccc-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"28f25dc5-093b-4b0a-b1fa-290241e9bccc\") " pod="openstack/ovsdbserver-nb-0" Jan 26 11:34:46 crc kubenswrapper[4867]: I0126 11:34:46.419124 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/28f25dc5-093b-4b0a-b1fa-290241e9bccc-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"28f25dc5-093b-4b0a-b1fa-290241e9bccc\") " pod="openstack/ovsdbserver-nb-0" Jan 26 11:34:46 crc kubenswrapper[4867]: I0126 11:34:46.422564 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/28f25dc5-093b-4b0a-b1fa-290241e9bccc-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"28f25dc5-093b-4b0a-b1fa-290241e9bccc\") " pod="openstack/ovsdbserver-nb-0" Jan 26 11:34:46 crc kubenswrapper[4867]: I0126 11:34:46.422564 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/28f25dc5-093b-4b0a-b1fa-290241e9bccc-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"28f25dc5-093b-4b0a-b1fa-290241e9bccc\") " pod="openstack/ovsdbserver-nb-0" Jan 26 11:34:46 crc kubenswrapper[4867]: I0126 11:34:46.423323 4867 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage01-crc\" 
(UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"ovsdbserver-nb-0\" (UID: \"28f25dc5-093b-4b0a-b1fa-290241e9bccc\") device mount path \"/mnt/openstack/pv01\"" pod="openstack/ovsdbserver-nb-0" Jan 26 11:34:46 crc kubenswrapper[4867]: I0126 11:34:46.423580 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/28f25dc5-093b-4b0a-b1fa-290241e9bccc-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"28f25dc5-093b-4b0a-b1fa-290241e9bccc\") " pod="openstack/ovsdbserver-nb-0" Jan 26 11:34:46 crc kubenswrapper[4867]: I0126 11:34:46.424118 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/28f25dc5-093b-4b0a-b1fa-290241e9bccc-config\") pod \"ovsdbserver-nb-0\" (UID: \"28f25dc5-093b-4b0a-b1fa-290241e9bccc\") " pod="openstack/ovsdbserver-nb-0" Jan 26 11:34:46 crc kubenswrapper[4867]: I0126 11:34:46.425083 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/28f25dc5-093b-4b0a-b1fa-290241e9bccc-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"28f25dc5-093b-4b0a-b1fa-290241e9bccc\") " pod="openstack/ovsdbserver-nb-0" Jan 26 11:34:46 crc kubenswrapper[4867]: I0126 11:34:46.429962 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/28f25dc5-093b-4b0a-b1fa-290241e9bccc-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"28f25dc5-093b-4b0a-b1fa-290241e9bccc\") " pod="openstack/ovsdbserver-nb-0" Jan 26 11:34:46 crc kubenswrapper[4867]: I0126 11:34:46.441375 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p99nc\" (UniqueName: \"kubernetes.io/projected/28f25dc5-093b-4b0a-b1fa-290241e9bccc-kube-api-access-p99nc\") pod \"ovsdbserver-nb-0\" (UID: \"28f25dc5-093b-4b0a-b1fa-290241e9bccc\") 
" pod="openstack/ovsdbserver-nb-0" Jan 26 11:34:46 crc kubenswrapper[4867]: I0126 11:34:46.446984 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"ovsdbserver-nb-0\" (UID: \"28f25dc5-093b-4b0a-b1fa-290241e9bccc\") " pod="openstack/ovsdbserver-nb-0" Jan 26 11:34:46 crc kubenswrapper[4867]: I0126 11:34:46.559022 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-0" Jan 26 11:34:49 crc kubenswrapper[4867]: I0126 11:34:49.238638 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-sb-0"] Jan 26 11:34:49 crc kubenswrapper[4867]: I0126 11:34:49.241577 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-0" Jan 26 11:34:49 crc kubenswrapper[4867]: I0126 11:34:49.245192 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-sb-ovndbs" Jan 26 11:34:49 crc kubenswrapper[4867]: I0126 11:34:49.245417 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-sb-dockercfg-6vjdw" Jan 26 11:34:49 crc kubenswrapper[4867]: I0126 11:34:49.249028 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-scripts" Jan 26 11:34:49 crc kubenswrapper[4867]: I0126 11:34:49.256990 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Jan 26 11:34:49 crc kubenswrapper[4867]: I0126 11:34:49.268701 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-config" Jan 26 11:34:49 crc kubenswrapper[4867]: I0126 11:34:49.375950 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/24fccd97-ac62-4d86-971f-59e4fc780888-combined-ca-bundle\") pod 
\"ovsdbserver-sb-0\" (UID: \"24fccd97-ac62-4d86-971f-59e4fc780888\") " pod="openstack/ovsdbserver-sb-0" Jan 26 11:34:49 crc kubenswrapper[4867]: I0126 11:34:49.376029 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"ovsdbserver-sb-0\" (UID: \"24fccd97-ac62-4d86-971f-59e4fc780888\") " pod="openstack/ovsdbserver-sb-0" Jan 26 11:34:49 crc kubenswrapper[4867]: I0126 11:34:49.376081 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/24fccd97-ac62-4d86-971f-59e4fc780888-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"24fccd97-ac62-4d86-971f-59e4fc780888\") " pod="openstack/ovsdbserver-sb-0" Jan 26 11:34:49 crc kubenswrapper[4867]: I0126 11:34:49.376124 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/24fccd97-ac62-4d86-971f-59e4fc780888-config\") pod \"ovsdbserver-sb-0\" (UID: \"24fccd97-ac62-4d86-971f-59e4fc780888\") " pod="openstack/ovsdbserver-sb-0" Jan 26 11:34:49 crc kubenswrapper[4867]: I0126 11:34:49.376147 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/24fccd97-ac62-4d86-971f-59e4fc780888-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"24fccd97-ac62-4d86-971f-59e4fc780888\") " pod="openstack/ovsdbserver-sb-0" Jan 26 11:34:49 crc kubenswrapper[4867]: I0126 11:34:49.376163 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/24fccd97-ac62-4d86-971f-59e4fc780888-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"24fccd97-ac62-4d86-971f-59e4fc780888\") " 
pod="openstack/ovsdbserver-sb-0" Jan 26 11:34:49 crc kubenswrapper[4867]: I0126 11:34:49.376190 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jpw6f\" (UniqueName: \"kubernetes.io/projected/24fccd97-ac62-4d86-971f-59e4fc780888-kube-api-access-jpw6f\") pod \"ovsdbserver-sb-0\" (UID: \"24fccd97-ac62-4d86-971f-59e4fc780888\") " pod="openstack/ovsdbserver-sb-0" Jan 26 11:34:49 crc kubenswrapper[4867]: I0126 11:34:49.376233 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/24fccd97-ac62-4d86-971f-59e4fc780888-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"24fccd97-ac62-4d86-971f-59e4fc780888\") " pod="openstack/ovsdbserver-sb-0" Jan 26 11:34:49 crc kubenswrapper[4867]: I0126 11:34:49.478582 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/24fccd97-ac62-4d86-971f-59e4fc780888-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"24fccd97-ac62-4d86-971f-59e4fc780888\") " pod="openstack/ovsdbserver-sb-0" Jan 26 11:34:49 crc kubenswrapper[4867]: I0126 11:34:49.478691 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"ovsdbserver-sb-0\" (UID: \"24fccd97-ac62-4d86-971f-59e4fc780888\") " pod="openstack/ovsdbserver-sb-0" Jan 26 11:34:49 crc kubenswrapper[4867]: I0126 11:34:49.478724 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/24fccd97-ac62-4d86-971f-59e4fc780888-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"24fccd97-ac62-4d86-971f-59e4fc780888\") " pod="openstack/ovsdbserver-sb-0" Jan 26 11:34:49 crc kubenswrapper[4867]: I0126 11:34:49.478759 4867 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/24fccd97-ac62-4d86-971f-59e4fc780888-config\") pod \"ovsdbserver-sb-0\" (UID: \"24fccd97-ac62-4d86-971f-59e4fc780888\") " pod="openstack/ovsdbserver-sb-0" Jan 26 11:34:49 crc kubenswrapper[4867]: I0126 11:34:49.478793 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/24fccd97-ac62-4d86-971f-59e4fc780888-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"24fccd97-ac62-4d86-971f-59e4fc780888\") " pod="openstack/ovsdbserver-sb-0" Jan 26 11:34:49 crc kubenswrapper[4867]: I0126 11:34:49.478821 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/24fccd97-ac62-4d86-971f-59e4fc780888-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"24fccd97-ac62-4d86-971f-59e4fc780888\") " pod="openstack/ovsdbserver-sb-0" Jan 26 11:34:49 crc kubenswrapper[4867]: I0126 11:34:49.478891 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jpw6f\" (UniqueName: \"kubernetes.io/projected/24fccd97-ac62-4d86-971f-59e4fc780888-kube-api-access-jpw6f\") pod \"ovsdbserver-sb-0\" (UID: \"24fccd97-ac62-4d86-971f-59e4fc780888\") " pod="openstack/ovsdbserver-sb-0" Jan 26 11:34:49 crc kubenswrapper[4867]: I0126 11:34:49.478934 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/24fccd97-ac62-4d86-971f-59e4fc780888-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"24fccd97-ac62-4d86-971f-59e4fc780888\") " pod="openstack/ovsdbserver-sb-0" Jan 26 11:34:49 crc kubenswrapper[4867]: I0126 11:34:49.479273 4867 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"ovsdbserver-sb-0\" (UID: 
\"24fccd97-ac62-4d86-971f-59e4fc780888\") device mount path \"/mnt/openstack/pv08\"" pod="openstack/ovsdbserver-sb-0" Jan 26 11:34:49 crc kubenswrapper[4867]: I0126 11:34:49.479919 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/24fccd97-ac62-4d86-971f-59e4fc780888-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"24fccd97-ac62-4d86-971f-59e4fc780888\") " pod="openstack/ovsdbserver-sb-0" Jan 26 11:34:49 crc kubenswrapper[4867]: I0126 11:34:49.480294 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/24fccd97-ac62-4d86-971f-59e4fc780888-config\") pod \"ovsdbserver-sb-0\" (UID: \"24fccd97-ac62-4d86-971f-59e4fc780888\") " pod="openstack/ovsdbserver-sb-0" Jan 26 11:34:49 crc kubenswrapper[4867]: I0126 11:34:49.481111 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/24fccd97-ac62-4d86-971f-59e4fc780888-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"24fccd97-ac62-4d86-971f-59e4fc780888\") " pod="openstack/ovsdbserver-sb-0" Jan 26 11:34:49 crc kubenswrapper[4867]: I0126 11:34:49.487946 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/24fccd97-ac62-4d86-971f-59e4fc780888-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"24fccd97-ac62-4d86-971f-59e4fc780888\") " pod="openstack/ovsdbserver-sb-0" Jan 26 11:34:49 crc kubenswrapper[4867]: I0126 11:34:49.491965 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/24fccd97-ac62-4d86-971f-59e4fc780888-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"24fccd97-ac62-4d86-971f-59e4fc780888\") " pod="openstack/ovsdbserver-sb-0" Jan 26 11:34:49 crc kubenswrapper[4867]: I0126 11:34:49.493987 4867 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/24fccd97-ac62-4d86-971f-59e4fc780888-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"24fccd97-ac62-4d86-971f-59e4fc780888\") " pod="openstack/ovsdbserver-sb-0" Jan 26 11:34:49 crc kubenswrapper[4867]: I0126 11:34:49.497271 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jpw6f\" (UniqueName: \"kubernetes.io/projected/24fccd97-ac62-4d86-971f-59e4fc780888-kube-api-access-jpw6f\") pod \"ovsdbserver-sb-0\" (UID: \"24fccd97-ac62-4d86-971f-59e4fc780888\") " pod="openstack/ovsdbserver-sb-0" Jan 26 11:34:49 crc kubenswrapper[4867]: I0126 11:34:49.502616 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"ovsdbserver-sb-0\" (UID: \"24fccd97-ac62-4d86-971f-59e4fc780888\") " pod="openstack/ovsdbserver-sb-0" Jan 26 11:34:49 crc kubenswrapper[4867]: I0126 11:34:49.569461 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-sb-0" Jan 26 11:34:52 crc kubenswrapper[4867]: I0126 11:34:52.833355 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Jan 26 11:34:56 crc kubenswrapper[4867]: W0126 11:34:56.712602 4867 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podeb361900_eda0_4cb4_8838_4267b465353b.slice/crio-4ddedb50b3a9419a17d989e72b797510e08215724ab725309a4d7fe4e6aa84b7 WatchSource:0}: Error finding container 4ddedb50b3a9419a17d989e72b797510e08215724ab725309a4d7fe4e6aa84b7: Status 404 returned error can't find the container with id 4ddedb50b3a9419a17d989e72b797510e08215724ab725309a4d7fe4e6aa84b7 Jan 26 11:34:57 crc kubenswrapper[4867]: I0126 11:34:57.124631 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 26 11:34:57 crc kubenswrapper[4867]: E0126 11:34:57.572718 4867 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Jan 26 11:34:57 crc kubenswrapper[4867]: E0126 11:34:57.573464 4867 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries 
--test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nffh5bdhf4h5f8h79h55h77h58fh56dh7bh6fh578hbch55dh68h56bhd9h65dh57ch658hc9h566h666h688h58h65dh684h5d7h6ch575h5d6h88q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-59trw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-675f4bcbfc-pn8dc_openstack(5f6ccb06-6dc8-4285-b7f9-f2038c528872): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 26 11:34:57 crc kubenswrapper[4867]: E0126 11:34:57.576591 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" 
pod="openstack/dnsmasq-dns-675f4bcbfc-pn8dc" podUID="5f6ccb06-6dc8-4285-b7f9-f2038c528872" Jan 26 11:34:57 crc kubenswrapper[4867]: E0126 11:34:57.626304 4867 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Jan 26 11:34:57 crc kubenswrapper[4867]: E0126 11:34:57.626509 4867 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries --test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:ndfhb5h667h568h584h5f9h58dh565h664h587h597h577h64bh5c4h66fh647hbdh68ch5c5h68dh686h5f7h64hd7hc6h55fh57bh98h57fh87h5fh57fq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7hs4r,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePul
lPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-78dd6ddcc-mg6lh_openstack(9eb2e642-fab7-48f7-84dc-544a6bf1e9d0): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 26 11:34:57 crc kubenswrapper[4867]: E0126 11:34:57.627695 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-78dd6ddcc-mg6lh" podUID="9eb2e642-fab7-48f7-84dc-544a6bf1e9d0" Jan 26 11:34:57 crc kubenswrapper[4867]: I0126 11:34:57.628430 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"eb361900-eda0-4cb4-8838-4267b465353b","Type":"ContainerStarted","Data":"4ddedb50b3a9419a17d989e72b797510e08215724ab725309a4d7fe4e6aa84b7"} Jan 26 11:34:57 crc kubenswrapper[4867]: E0126 11:34:57.680285 4867 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Jan 26 11:34:57 crc kubenswrapper[4867]: E0126 11:34:57.680505 4867 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* 
--conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries --test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n68chd6h679hbfh55fhc6h5ffh5d8h94h56ch589hb4hc5h57bh677hcdh655h8dh667h675h654h66ch567h8fh659h5b4h675h566h55bh54h67dh6dq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-9b5wt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
dnsmasq-dns-666b6646f7-wg9wq_openstack(e52b8a49-afec-4527-8728-f2b53c33cd94): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 26 11:34:57 crc kubenswrapper[4867]: E0126 11:34:57.681861 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-666b6646f7-wg9wq" podUID="e52b8a49-afec-4527-8728-f2b53c33cd94" Jan 26 11:34:57 crc kubenswrapper[4867]: E0126 11:34:57.708901 4867 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Jan 26 11:34:57 crc kubenswrapper[4867]: E0126 11:34:57.709131 4867 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries 
--test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n659h4h664hbh658h587h67ch89h587h8fh679hc6hf9h55fh644h5d5h698h68dh5cdh5ffh669h54ch9h689hb8hd4h5bfhd8h5d7h5fh665h574q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8vk7s,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-57d769cc4f-47hhv_openstack(2af057d6-8429-47ba-9433-2a3ee9ffd26c): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 26 11:34:57 crc kubenswrapper[4867]: E0126 11:34:57.710428 4867 pod_workers.go:1301] "Error syncing 
pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-57d769cc4f-47hhv" podUID="2af057d6-8429-47ba-9433-2a3ee9ffd26c" Jan 26 11:34:58 crc kubenswrapper[4867]: E0126 11:34:58.644039 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified\\\"\"" pod="openstack/dnsmasq-dns-57d769cc4f-47hhv" podUID="2af057d6-8429-47ba-9433-2a3ee9ffd26c" Jan 26 11:34:58 crc kubenswrapper[4867]: E0126 11:34:58.689307 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified\\\"\"" pod="openstack/dnsmasq-dns-666b6646f7-wg9wq" podUID="e52b8a49-afec-4527-8728-f2b53c33cd94" Jan 26 11:34:58 crc kubenswrapper[4867]: I0126 11:34:58.913490 4867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-pn8dc" Jan 26 11:34:58 crc kubenswrapper[4867]: I0126 11:34:58.981924 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5f6ccb06-6dc8-4285-b7f9-f2038c528872-config\") pod \"5f6ccb06-6dc8-4285-b7f9-f2038c528872\" (UID: \"5f6ccb06-6dc8-4285-b7f9-f2038c528872\") " Jan 26 11:34:58 crc kubenswrapper[4867]: I0126 11:34:58.982202 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-59trw\" (UniqueName: \"kubernetes.io/projected/5f6ccb06-6dc8-4285-b7f9-f2038c528872-kube-api-access-59trw\") pod \"5f6ccb06-6dc8-4285-b7f9-f2038c528872\" (UID: \"5f6ccb06-6dc8-4285-b7f9-f2038c528872\") " Jan 26 11:34:58 crc kubenswrapper[4867]: I0126 11:34:58.982686 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5f6ccb06-6dc8-4285-b7f9-f2038c528872-config" (OuterVolumeSpecName: "config") pod "5f6ccb06-6dc8-4285-b7f9-f2038c528872" (UID: "5f6ccb06-6dc8-4285-b7f9-f2038c528872"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 11:34:58 crc kubenswrapper[4867]: I0126 11:34:58.984092 4867 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5f6ccb06-6dc8-4285-b7f9-f2038c528872-config\") on node \"crc\" DevicePath \"\"" Jan 26 11:34:58 crc kubenswrapper[4867]: I0126 11:34:58.998742 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5f6ccb06-6dc8-4285-b7f9-f2038c528872-kube-api-access-59trw" (OuterVolumeSpecName: "kube-api-access-59trw") pod "5f6ccb06-6dc8-4285-b7f9-f2038c528872" (UID: "5f6ccb06-6dc8-4285-b7f9-f2038c528872"). InnerVolumeSpecName "kube-api-access-59trw". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 11:34:59 crc kubenswrapper[4867]: I0126 11:34:59.086113 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-59trw\" (UniqueName: \"kubernetes.io/projected/5f6ccb06-6dc8-4285-b7f9-f2038c528872-kube-api-access-59trw\") on node \"crc\" DevicePath \"\"" Jan 26 11:34:59 crc kubenswrapper[4867]: I0126 11:34:59.165410 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-mg6lh" Jan 26 11:34:59 crc kubenswrapper[4867]: I0126 11:34:59.190065 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9eb2e642-fab7-48f7-84dc-544a6bf1e9d0-config\") pod \"9eb2e642-fab7-48f7-84dc-544a6bf1e9d0\" (UID: \"9eb2e642-fab7-48f7-84dc-544a6bf1e9d0\") " Jan 26 11:34:59 crc kubenswrapper[4867]: I0126 11:34:59.190279 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7hs4r\" (UniqueName: \"kubernetes.io/projected/9eb2e642-fab7-48f7-84dc-544a6bf1e9d0-kube-api-access-7hs4r\") pod \"9eb2e642-fab7-48f7-84dc-544a6bf1e9d0\" (UID: \"9eb2e642-fab7-48f7-84dc-544a6bf1e9d0\") " Jan 26 11:34:59 crc kubenswrapper[4867]: I0126 11:34:59.190313 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9eb2e642-fab7-48f7-84dc-544a6bf1e9d0-dns-svc\") pod \"9eb2e642-fab7-48f7-84dc-544a6bf1e9d0\" (UID: \"9eb2e642-fab7-48f7-84dc-544a6bf1e9d0\") " Jan 26 11:34:59 crc kubenswrapper[4867]: I0126 11:34:59.190891 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9eb2e642-fab7-48f7-84dc-544a6bf1e9d0-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "9eb2e642-fab7-48f7-84dc-544a6bf1e9d0" (UID: "9eb2e642-fab7-48f7-84dc-544a6bf1e9d0"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 11:34:59 crc kubenswrapper[4867]: I0126 11:34:59.191185 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9eb2e642-fab7-48f7-84dc-544a6bf1e9d0-config" (OuterVolumeSpecName: "config") pod "9eb2e642-fab7-48f7-84dc-544a6bf1e9d0" (UID: "9eb2e642-fab7-48f7-84dc-544a6bf1e9d0"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 11:34:59 crc kubenswrapper[4867]: I0126 11:34:59.196276 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9eb2e642-fab7-48f7-84dc-544a6bf1e9d0-kube-api-access-7hs4r" (OuterVolumeSpecName: "kube-api-access-7hs4r") pod "9eb2e642-fab7-48f7-84dc-544a6bf1e9d0" (UID: "9eb2e642-fab7-48f7-84dc-544a6bf1e9d0"). InnerVolumeSpecName "kube-api-access-7hs4r". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 11:34:59 crc kubenswrapper[4867]: I0126 11:34:59.244426 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Jan 26 11:34:59 crc kubenswrapper[4867]: I0126 11:34:59.292380 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7hs4r\" (UniqueName: \"kubernetes.io/projected/9eb2e642-fab7-48f7-84dc-544a6bf1e9d0-kube-api-access-7hs4r\") on node \"crc\" DevicePath \"\"" Jan 26 11:34:59 crc kubenswrapper[4867]: I0126 11:34:59.292425 4867 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9eb2e642-fab7-48f7-84dc-544a6bf1e9d0-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 26 11:34:59 crc kubenswrapper[4867]: I0126 11:34:59.292436 4867 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9eb2e642-fab7-48f7-84dc-544a6bf1e9d0-config\") on node \"crc\" DevicePath \"\"" Jan 26 11:34:59 crc kubenswrapper[4867]: W0126 11:34:59.310968 4867 manager.go:1169] Failed to process watch event 
{EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfd3b4566_15b8_4c50_bc5e_76c5a6907311.slice/crio-6726a17192e6708b889cdfa07d85743a1b0ddca85e55f334eec54a2b454ac42a WatchSource:0}: Error finding container 6726a17192e6708b889cdfa07d85743a1b0ddca85e55f334eec54a2b454ac42a: Status 404 returned error can't find the container with id 6726a17192e6708b889cdfa07d85743a1b0ddca85e55f334eec54a2b454ac42a Jan 26 11:34:59 crc kubenswrapper[4867]: I0126 11:34:59.545608 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Jan 26 11:34:59 crc kubenswrapper[4867]: I0126 11:34:59.561925 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-hbpxr"] Jan 26 11:34:59 crc kubenswrapper[4867]: I0126 11:34:59.576490 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Jan 26 11:34:59 crc kubenswrapper[4867]: I0126 11:34:59.650984 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78dd6ddcc-mg6lh" event={"ID":"9eb2e642-fab7-48f7-84dc-544a6bf1e9d0","Type":"ContainerDied","Data":"31fd3c698e4613aa93e644aed6b4a19c12a4eff71eb6bab9b5e238a7b1ef80c3"} Jan 26 11:34:59 crc kubenswrapper[4867]: I0126 11:34:59.651085 4867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-mg6lh" Jan 26 11:34:59 crc kubenswrapper[4867]: I0126 11:34:59.657748 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"f08d1721-01b8-4573-8446-18ae794fb9e7","Type":"ContainerStarted","Data":"8ff218a158ea8a439be43553bdf0f1b067bc09aae4d59b9731655f2bcd977df1"} Jan 26 11:34:59 crc kubenswrapper[4867]: I0126 11:34:59.659433 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-675f4bcbfc-pn8dc" event={"ID":"5f6ccb06-6dc8-4285-b7f9-f2038c528872","Type":"ContainerDied","Data":"1b78a60508ff3e1f6106b790677684a2fcc473686256c3d737a43415c79793c1"} Jan 26 11:34:59 crc kubenswrapper[4867]: I0126 11:34:59.659456 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-pn8dc" Jan 26 11:34:59 crc kubenswrapper[4867]: I0126 11:34:59.665190 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Jan 26 11:34:59 crc kubenswrapper[4867]: I0126 11:34:59.668471 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"fd3b4566-15b8-4c50-bc5e-76c5a6907311","Type":"ContainerStarted","Data":"6726a17192e6708b889cdfa07d85743a1b0ddca85e55f334eec54a2b454ac42a"} Jan 26 11:34:59 crc kubenswrapper[4867]: W0126 11:34:59.884891 4867 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9305cd67_bbb5_45e9_ab35_6a34a717dff8.slice/crio-baa6b79db88a087845b3593a40108fc4ca23ff4e9d2a4710dbb75d1d4cf9cbe0 WatchSource:0}: Error finding container baa6b79db88a087845b3593a40108fc4ca23ff4e9d2a4710dbb75d1d4cf9cbe0: Status 404 returned error can't find the container with id baa6b79db88a087845b3593a40108fc4ca23ff4e9d2a4710dbb75d1d4cf9cbe0 Jan 26 11:35:00 crc kubenswrapper[4867]: I0126 11:35:00.108456 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openstack/dnsmasq-dns-78dd6ddcc-mg6lh"] Jan 26 11:35:00 crc kubenswrapper[4867]: I0126 11:35:00.131331 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-mg6lh"] Jan 26 11:35:00 crc kubenswrapper[4867]: I0126 11:35:00.148365 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-pn8dc"] Jan 26 11:35:00 crc kubenswrapper[4867]: I0126 11:35:00.156240 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-pn8dc"] Jan 26 11:35:00 crc kubenswrapper[4867]: I0126 11:35:00.581460 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5f6ccb06-6dc8-4285-b7f9-f2038c528872" path="/var/lib/kubelet/pods/5f6ccb06-6dc8-4285-b7f9-f2038c528872/volumes" Jan 26 11:35:00 crc kubenswrapper[4867]: I0126 11:35:00.581941 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9eb2e642-fab7-48f7-84dc-544a6bf1e9d0" path="/var/lib/kubelet/pods/9eb2e642-fab7-48f7-84dc-544a6bf1e9d0/volumes" Jan 26 11:35:00 crc kubenswrapper[4867]: I0126 11:35:00.621496 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-4f5h4"] Jan 26 11:35:00 crc kubenswrapper[4867]: I0126 11:35:00.685701 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-hbpxr" event={"ID":"db65f713-855b-4ca7-b989-ebde989474ce","Type":"ContainerStarted","Data":"e1089c3319c70315bd9ec480286171376a9317fe7f70538295497db4f4976cb5"} Jan 26 11:35:00 crc kubenswrapper[4867]: I0126 11:35:00.687477 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"9305cd67-bbb5-45e9-ab35-6a34a717dff8","Type":"ContainerStarted","Data":"baa6b79db88a087845b3593a40108fc4ca23ff4e9d2a4710dbb75d1d4cf9cbe0"} Jan 26 11:35:00 crc kubenswrapper[4867]: I0126 11:35:00.689141 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" 
event={"ID":"4d2bfda4-48fc-4d87-94ae-3b53adc90a3a","Type":"ContainerStarted","Data":"b2cdafe3e00677646dd69530266947366f273aac1a046750a5e001a7513bbeda"} Jan 26 11:35:00 crc kubenswrapper[4867]: I0126 11:35:00.691178 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"2e582495-d650-404c-9a13-d28ea98ecbc5","Type":"ContainerStarted","Data":"226d763f25fed2ac285088e56181e339e08e1c391bcef7d09f830c76c2110df5"} Jan 26 11:35:01 crc kubenswrapper[4867]: W0126 11:35:01.043090 4867 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod24fccd97_ac62_4d86_971f_59e4fc780888.slice/crio-dfdf6e559bd22c5700888e766c1e1cc2ebd9860a0bd6353fcf3d4a9565c286b4 WatchSource:0}: Error finding container dfdf6e559bd22c5700888e766c1e1cc2ebd9860a0bd6353fcf3d4a9565c286b4: Status 404 returned error can't find the container with id dfdf6e559bd22c5700888e766c1e1cc2ebd9860a0bd6353fcf3d4a9565c286b4 Jan 26 11:35:01 crc kubenswrapper[4867]: W0126 11:35:01.051358 4867 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod28f25dc5_093b_4b0a_b1fa_290241e9bccc.slice/crio-bc145bec2ce658abaced9a16504e869993a6dc6d0d39550d87b67a1fa42ea058 WatchSource:0}: Error finding container bc145bec2ce658abaced9a16504e869993a6dc6d0d39550d87b67a1fa42ea058: Status 404 returned error can't find the container with id bc145bec2ce658abaced9a16504e869993a6dc6d0d39550d87b67a1fa42ea058 Jan 26 11:35:01 crc kubenswrapper[4867]: I0126 11:35:01.700804 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"24fccd97-ac62-4d86-971f-59e4fc780888","Type":"ContainerStarted","Data":"dfdf6e559bd22c5700888e766c1e1cc2ebd9860a0bd6353fcf3d4a9565c286b4"} Jan 26 11:35:01 crc kubenswrapper[4867]: I0126 11:35:01.702759 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/ovn-controller-ovs-4f5h4" event={"ID":"211a1bec-4387-4bbf-a034-56dd9396676d","Type":"ContainerStarted","Data":"95901e2adffc05a78e8c6e8735f19130a568dcf1968d78a7c62d4b9999c4a4c9"} Jan 26 11:35:01 crc kubenswrapper[4867]: I0126 11:35:01.704289 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"28f25dc5-093b-4b0a-b1fa-290241e9bccc","Type":"ContainerStarted","Data":"bc145bec2ce658abaced9a16504e869993a6dc6d0d39550d87b67a1fa42ea058"} Jan 26 11:35:05 crc kubenswrapper[4867]: I0126 11:35:05.741781 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"f08d1721-01b8-4573-8446-18ae794fb9e7","Type":"ContainerStarted","Data":"32461a4607b60801c095d5abcb8049f936f24d1ec441cb638ec8cda08f73d803"} Jan 26 11:35:05 crc kubenswrapper[4867]: I0126 11:35:05.742521 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Jan 26 11:35:05 crc kubenswrapper[4867]: I0126 11:35:05.748472 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"eb361900-eda0-4cb4-8838-4267b465353b","Type":"ContainerStarted","Data":"02ae5d66039d92f69a9376095ff3174adee09be5749beb6e51ea3a7a0e3c5c19"} Jan 26 11:35:05 crc kubenswrapper[4867]: I0126 11:35:05.749429 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/memcached-0" Jan 26 11:35:05 crc kubenswrapper[4867]: I0126 11:35:05.763979 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=18.864723598 podStartE2EDuration="24.763956979s" podCreationTimestamp="2026-01-26 11:34:41 +0000 UTC" firstStartedPulling="2026-01-26 11:34:58.689652277 +0000 UTC m=+1048.388227187" lastFinishedPulling="2026-01-26 11:35:04.588885658 +0000 UTC m=+1054.287460568" observedRunningTime="2026-01-26 11:35:05.758594269 +0000 UTC m=+1055.457169179" 
watchObservedRunningTime="2026-01-26 11:35:05.763956979 +0000 UTC m=+1055.462531889" Jan 26 11:35:05 crc kubenswrapper[4867]: I0126 11:35:05.788889 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/memcached-0" podStartSLOduration=19.824371026 podStartE2EDuration="26.788861966s" podCreationTimestamp="2026-01-26 11:34:39 +0000 UTC" firstStartedPulling="2026-01-26 11:34:56.724916306 +0000 UTC m=+1046.423491216" lastFinishedPulling="2026-01-26 11:35:03.689407246 +0000 UTC m=+1053.387982156" observedRunningTime="2026-01-26 11:35:05.78292713 +0000 UTC m=+1055.481502030" watchObservedRunningTime="2026-01-26 11:35:05.788861966 +0000 UTC m=+1055.487436876" Jan 26 11:35:10 crc kubenswrapper[4867]: I0126 11:35:10.063428 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/memcached-0" Jan 26 11:35:11 crc kubenswrapper[4867]: I0126 11:35:11.828819 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"fd3b4566-15b8-4c50-bc5e-76c5a6907311","Type":"ContainerStarted","Data":"142e7a36fd0661a259595de1c818da73566edc61c6e6d034858870f719d856b2"} Jan 26 11:35:11 crc kubenswrapper[4867]: I0126 11:35:11.837884 4867 generic.go:334] "Generic (PLEG): container finished" podID="211a1bec-4387-4bbf-a034-56dd9396676d" containerID="82422114eb7d5eb0d7da960ea00f1b571dd9cc05990879d66fb74768e31a1a7f" exitCode=0 Jan 26 11:35:11 crc kubenswrapper[4867]: I0126 11:35:11.838102 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-4f5h4" event={"ID":"211a1bec-4387-4bbf-a034-56dd9396676d","Type":"ContainerDied","Data":"82422114eb7d5eb0d7da960ea00f1b571dd9cc05990879d66fb74768e31a1a7f"} Jan 26 11:35:11 crc kubenswrapper[4867]: I0126 11:35:11.866399 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" 
event={"ID":"28f25dc5-093b-4b0a-b1fa-290241e9bccc","Type":"ContainerStarted","Data":"56b8177d73fcd3e273c44e9e6b453ab63fb6fed98054e9ce49c84a06840795f1"} Jan 26 11:35:11 crc kubenswrapper[4867]: I0126 11:35:11.897907 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"9305cd67-bbb5-45e9-ab35-6a34a717dff8","Type":"ContainerStarted","Data":"05b98fd1a628cbe6ab3d39ec85a5c95efcc799d9d103dd21148ed7dae92764bb"} Jan 26 11:35:11 crc kubenswrapper[4867]: I0126 11:35:11.907522 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"24fccd97-ac62-4d86-971f-59e4fc780888","Type":"ContainerStarted","Data":"74789109846c4b0183b6a1eb0de4120bd3d97329cd5e9cb4a0268370aa9e9535"} Jan 26 11:35:11 crc kubenswrapper[4867]: I0126 11:35:11.935673 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-hbpxr" event={"ID":"db65f713-855b-4ca7-b989-ebde989474ce","Type":"ContainerStarted","Data":"fc50c7bafda78a13a71a606d2064a4fcc619cbeaa42b707ef99020c4229e0ace"} Jan 26 11:35:11 crc kubenswrapper[4867]: I0126 11:35:11.936631 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-hbpxr" Jan 26 11:35:11 crc kubenswrapper[4867]: I0126 11:35:11.991988 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-hbpxr" podStartSLOduration=16.260804952 podStartE2EDuration="26.991963128s" podCreationTimestamp="2026-01-26 11:34:45 +0000 UTC" firstStartedPulling="2026-01-26 11:34:59.796553221 +0000 UTC m=+1049.495128131" lastFinishedPulling="2026-01-26 11:35:10.527711407 +0000 UTC m=+1060.226286307" observedRunningTime="2026-01-26 11:35:11.989002515 +0000 UTC m=+1061.687577425" watchObservedRunningTime="2026-01-26 11:35:11.991963128 +0000 UTC m=+1061.690538038" Jan 26 11:35:11 crc kubenswrapper[4867]: I0126 11:35:11.995662 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openstack/kube-state-metrics-0" Jan 26 11:35:12 crc kubenswrapper[4867]: I0126 11:35:12.097052 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-47hhv"] Jan 26 11:35:12 crc kubenswrapper[4867]: I0126 11:35:12.217799 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7cb5889db5-6f5lk"] Jan 26 11:35:12 crc kubenswrapper[4867]: I0126 11:35:12.219516 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7cb5889db5-6f5lk" Jan 26 11:35:12 crc kubenswrapper[4867]: I0126 11:35:12.241649 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7cb5889db5-6f5lk"] Jan 26 11:35:12 crc kubenswrapper[4867]: I0126 11:35:12.376017 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/646d1a9e-dc98-477f-853a-15ce192f0b52-dns-svc\") pod \"dnsmasq-dns-7cb5889db5-6f5lk\" (UID: \"646d1a9e-dc98-477f-853a-15ce192f0b52\") " pod="openstack/dnsmasq-dns-7cb5889db5-6f5lk" Jan 26 11:35:12 crc kubenswrapper[4867]: I0126 11:35:12.376337 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qb89q\" (UniqueName: \"kubernetes.io/projected/646d1a9e-dc98-477f-853a-15ce192f0b52-kube-api-access-qb89q\") pod \"dnsmasq-dns-7cb5889db5-6f5lk\" (UID: \"646d1a9e-dc98-477f-853a-15ce192f0b52\") " pod="openstack/dnsmasq-dns-7cb5889db5-6f5lk" Jan 26 11:35:12 crc kubenswrapper[4867]: I0126 11:35:12.376459 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/646d1a9e-dc98-477f-853a-15ce192f0b52-config\") pod \"dnsmasq-dns-7cb5889db5-6f5lk\" (UID: \"646d1a9e-dc98-477f-853a-15ce192f0b52\") " pod="openstack/dnsmasq-dns-7cb5889db5-6f5lk" Jan 26 11:35:12 crc kubenswrapper[4867]: I0126 11:35:12.477592 4867 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/646d1a9e-dc98-477f-853a-15ce192f0b52-dns-svc\") pod \"dnsmasq-dns-7cb5889db5-6f5lk\" (UID: \"646d1a9e-dc98-477f-853a-15ce192f0b52\") " pod="openstack/dnsmasq-dns-7cb5889db5-6f5lk" Jan 26 11:35:12 crc kubenswrapper[4867]: I0126 11:35:12.478101 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qb89q\" (UniqueName: \"kubernetes.io/projected/646d1a9e-dc98-477f-853a-15ce192f0b52-kube-api-access-qb89q\") pod \"dnsmasq-dns-7cb5889db5-6f5lk\" (UID: \"646d1a9e-dc98-477f-853a-15ce192f0b52\") " pod="openstack/dnsmasq-dns-7cb5889db5-6f5lk" Jan 26 11:35:12 crc kubenswrapper[4867]: I0126 11:35:12.478186 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/646d1a9e-dc98-477f-853a-15ce192f0b52-config\") pod \"dnsmasq-dns-7cb5889db5-6f5lk\" (UID: \"646d1a9e-dc98-477f-853a-15ce192f0b52\") " pod="openstack/dnsmasq-dns-7cb5889db5-6f5lk" Jan 26 11:35:12 crc kubenswrapper[4867]: I0126 11:35:12.479432 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/646d1a9e-dc98-477f-853a-15ce192f0b52-config\") pod \"dnsmasq-dns-7cb5889db5-6f5lk\" (UID: \"646d1a9e-dc98-477f-853a-15ce192f0b52\") " pod="openstack/dnsmasq-dns-7cb5889db5-6f5lk" Jan 26 11:35:12 crc kubenswrapper[4867]: I0126 11:35:12.479833 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/646d1a9e-dc98-477f-853a-15ce192f0b52-dns-svc\") pod \"dnsmasq-dns-7cb5889db5-6f5lk\" (UID: \"646d1a9e-dc98-477f-853a-15ce192f0b52\") " pod="openstack/dnsmasq-dns-7cb5889db5-6f5lk" Jan 26 11:35:12 crc kubenswrapper[4867]: I0126 11:35:12.515366 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qb89q\" (UniqueName: 
\"kubernetes.io/projected/646d1a9e-dc98-477f-853a-15ce192f0b52-kube-api-access-qb89q\") pod \"dnsmasq-dns-7cb5889db5-6f5lk\" (UID: \"646d1a9e-dc98-477f-853a-15ce192f0b52\") " pod="openstack/dnsmasq-dns-7cb5889db5-6f5lk" Jan 26 11:35:12 crc kubenswrapper[4867]: I0126 11:35:12.593397 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7cb5889db5-6f5lk" Jan 26 11:35:12 crc kubenswrapper[4867]: E0126 11:35:12.905830 4867 log.go:32] "CreateContainer in sandbox from runtime service failed" err=< Jan 26 11:35:12 crc kubenswrapper[4867]: rpc error: code = Unknown desc = container create failed: mount `/var/lib/kubelet/pods/e52b8a49-afec-4527-8728-f2b53c33cd94/volume-subpaths/dns-svc/init/1` to `etc/dnsmasq.d/hosts/dns-svc`: No such file or directory Jan 26 11:35:12 crc kubenswrapper[4867]: > podSandboxID="9bf3f9558f3c02766a654176f501a7b0a35675ba1cde288cc7d14da0a5f62abe" Jan 26 11:35:12 crc kubenswrapper[4867]: E0126 11:35:12.906851 4867 kuberuntime_manager.go:1274] "Unhandled Error" err=< Jan 26 11:35:12 crc kubenswrapper[4867]: init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries 
--test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n68chd6h679hbfh55fhc6h5ffh5d8h94h56ch589hb4hc5h57bh677hcdh655h8dh667h675h654h66ch567h8fh659h5b4h675h566h55bh54h67dh6dq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-9b5wt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-666b6646f7-wg9wq_openstack(e52b8a49-afec-4527-8728-f2b53c33cd94): CreateContainerError: container create failed: mount `/var/lib/kubelet/pods/e52b8a49-afec-4527-8728-f2b53c33cd94/volume-subpaths/dns-svc/init/1` to `etc/dnsmasq.d/hosts/dns-svc`: No such file or directory Jan 
26 11:35:12 crc kubenswrapper[4867]: > logger="UnhandledError" Jan 26 11:35:12 crc kubenswrapper[4867]: E0126 11:35:12.908088 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with CreateContainerError: \"container create failed: mount `/var/lib/kubelet/pods/e52b8a49-afec-4527-8728-f2b53c33cd94/volume-subpaths/dns-svc/init/1` to `etc/dnsmasq.d/hosts/dns-svc`: No such file or directory\\n\"" pod="openstack/dnsmasq-dns-666b6646f7-wg9wq" podUID="e52b8a49-afec-4527-8728-f2b53c33cd94" Jan 26 11:35:12 crc kubenswrapper[4867]: I0126 11:35:12.973037 4867 generic.go:334] "Generic (PLEG): container finished" podID="2af057d6-8429-47ba-9433-2a3ee9ffd26c" containerID="492944fa7d5058b4f03ddcd4fb02677e40f0d11682e183cd75e1e2b4c928b79d" exitCode=0 Jan 26 11:35:12 crc kubenswrapper[4867]: I0126 11:35:12.973304 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-47hhv" event={"ID":"2af057d6-8429-47ba-9433-2a3ee9ffd26c","Type":"ContainerDied","Data":"492944fa7d5058b4f03ddcd4fb02677e40f0d11682e183cd75e1e2b4c928b79d"} Jan 26 11:35:12 crc kubenswrapper[4867]: I0126 11:35:12.992641 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-4f5h4" event={"ID":"211a1bec-4387-4bbf-a034-56dd9396676d","Type":"ContainerStarted","Data":"e59f8866390aa7851a4f07571b02dc1ddbba216c147243d63ad0947d07ad0b80"} Jan 26 11:35:13 crc kubenswrapper[4867]: I0126 11:35:13.135038 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-storage-0"] Jan 26 11:35:13 crc kubenswrapper[4867]: I0126 11:35:13.141181 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-storage-0" Jan 26 11:35:13 crc kubenswrapper[4867]: I0126 11:35:13.151080 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-swift-dockercfg-qpfrd" Jan 26 11:35:13 crc kubenswrapper[4867]: I0126 11:35:13.151385 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-conf" Jan 26 11:35:13 crc kubenswrapper[4867]: I0126 11:35:13.151542 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-storage-config-data" Jan 26 11:35:13 crc kubenswrapper[4867]: I0126 11:35:13.151593 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-files" Jan 26 11:35:13 crc kubenswrapper[4867]: I0126 11:35:13.162501 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"] Jan 26 11:35:13 crc kubenswrapper[4867]: I0126 11:35:13.194607 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/3f128154-6619-4556-be1b-73e44d4f7df1-cache\") pod \"swift-storage-0\" (UID: \"3f128154-6619-4556-be1b-73e44d4f7df1\") " pod="openstack/swift-storage-0" Jan 26 11:35:13 crc kubenswrapper[4867]: I0126 11:35:13.194663 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/3f128154-6619-4556-be1b-73e44d4f7df1-lock\") pod \"swift-storage-0\" (UID: \"3f128154-6619-4556-be1b-73e44d4f7df1\") " pod="openstack/swift-storage-0" Jan 26 11:35:13 crc kubenswrapper[4867]: I0126 11:35:13.194704 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/3f128154-6619-4556-be1b-73e44d4f7df1-etc-swift\") pod \"swift-storage-0\" (UID: \"3f128154-6619-4556-be1b-73e44d4f7df1\") " pod="openstack/swift-storage-0" Jan 26 11:35:13 crc 
kubenswrapper[4867]: I0126 11:35:13.194720 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3f128154-6619-4556-be1b-73e44d4f7df1-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"3f128154-6619-4556-be1b-73e44d4f7df1\") " pod="openstack/swift-storage-0" Jan 26 11:35:13 crc kubenswrapper[4867]: I0126 11:35:13.194744 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"swift-storage-0\" (UID: \"3f128154-6619-4556-be1b-73e44d4f7df1\") " pod="openstack/swift-storage-0" Jan 26 11:35:13 crc kubenswrapper[4867]: I0126 11:35:13.194780 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4bshj\" (UniqueName: \"kubernetes.io/projected/3f128154-6619-4556-be1b-73e44d4f7df1-kube-api-access-4bshj\") pod \"swift-storage-0\" (UID: \"3f128154-6619-4556-be1b-73e44d4f7df1\") " pod="openstack/swift-storage-0" Jan 26 11:35:13 crc kubenswrapper[4867]: I0126 11:35:13.253018 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7cb5889db5-6f5lk"] Jan 26 11:35:13 crc kubenswrapper[4867]: I0126 11:35:13.296192 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/3f128154-6619-4556-be1b-73e44d4f7df1-cache\") pod \"swift-storage-0\" (UID: \"3f128154-6619-4556-be1b-73e44d4f7df1\") " pod="openstack/swift-storage-0" Jan 26 11:35:13 crc kubenswrapper[4867]: I0126 11:35:13.296276 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/3f128154-6619-4556-be1b-73e44d4f7df1-lock\") pod \"swift-storage-0\" (UID: \"3f128154-6619-4556-be1b-73e44d4f7df1\") " pod="openstack/swift-storage-0" Jan 26 11:35:13 crc 
kubenswrapper[4867]: I0126 11:35:13.296346 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/3f128154-6619-4556-be1b-73e44d4f7df1-etc-swift\") pod \"swift-storage-0\" (UID: \"3f128154-6619-4556-be1b-73e44d4f7df1\") " pod="openstack/swift-storage-0" Jan 26 11:35:13 crc kubenswrapper[4867]: I0126 11:35:13.296372 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3f128154-6619-4556-be1b-73e44d4f7df1-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"3f128154-6619-4556-be1b-73e44d4f7df1\") " pod="openstack/swift-storage-0" Jan 26 11:35:13 crc kubenswrapper[4867]: I0126 11:35:13.296423 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"swift-storage-0\" (UID: \"3f128154-6619-4556-be1b-73e44d4f7df1\") " pod="openstack/swift-storage-0" Jan 26 11:35:13 crc kubenswrapper[4867]: I0126 11:35:13.296461 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4bshj\" (UniqueName: \"kubernetes.io/projected/3f128154-6619-4556-be1b-73e44d4f7df1-kube-api-access-4bshj\") pod \"swift-storage-0\" (UID: \"3f128154-6619-4556-be1b-73e44d4f7df1\") " pod="openstack/swift-storage-0" Jan 26 11:35:13 crc kubenswrapper[4867]: E0126 11:35:13.296781 4867 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Jan 26 11:35:13 crc kubenswrapper[4867]: E0126 11:35:13.296813 4867 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Jan 26 11:35:13 crc kubenswrapper[4867]: E0126 11:35:13.296880 4867 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/projected/3f128154-6619-4556-be1b-73e44d4f7df1-etc-swift podName:3f128154-6619-4556-be1b-73e44d4f7df1 nodeName:}" failed. No retries permitted until 2026-01-26 11:35:13.79685913 +0000 UTC m=+1063.495434040 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/3f128154-6619-4556-be1b-73e44d4f7df1-etc-swift") pod "swift-storage-0" (UID: "3f128154-6619-4556-be1b-73e44d4f7df1") : configmap "swift-ring-files" not found Jan 26 11:35:13 crc kubenswrapper[4867]: I0126 11:35:13.297406 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/3f128154-6619-4556-be1b-73e44d4f7df1-lock\") pod \"swift-storage-0\" (UID: \"3f128154-6619-4556-be1b-73e44d4f7df1\") " pod="openstack/swift-storage-0" Jan 26 11:35:13 crc kubenswrapper[4867]: I0126 11:35:13.297676 4867 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"swift-storage-0\" (UID: \"3f128154-6619-4556-be1b-73e44d4f7df1\") device mount path \"/mnt/openstack/pv12\"" pod="openstack/swift-storage-0" Jan 26 11:35:13 crc kubenswrapper[4867]: I0126 11:35:13.297690 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/3f128154-6619-4556-be1b-73e44d4f7df1-cache\") pod \"swift-storage-0\" (UID: \"3f128154-6619-4556-be1b-73e44d4f7df1\") " pod="openstack/swift-storage-0" Jan 26 11:35:13 crc kubenswrapper[4867]: I0126 11:35:13.300506 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3f128154-6619-4556-be1b-73e44d4f7df1-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"3f128154-6619-4556-be1b-73e44d4f7df1\") " pod="openstack/swift-storage-0" Jan 26 11:35:13 crc kubenswrapper[4867]: I0126 11:35:13.314769 4867 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4bshj\" (UniqueName: \"kubernetes.io/projected/3f128154-6619-4556-be1b-73e44d4f7df1-kube-api-access-4bshj\") pod \"swift-storage-0\" (UID: \"3f128154-6619-4556-be1b-73e44d4f7df1\") " pod="openstack/swift-storage-0" Jan 26 11:35:13 crc kubenswrapper[4867]: I0126 11:35:13.321498 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-47hhv" Jan 26 11:35:13 crc kubenswrapper[4867]: I0126 11:35:13.321606 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"swift-storage-0\" (UID: \"3f128154-6619-4556-be1b-73e44d4f7df1\") " pod="openstack/swift-storage-0" Jan 26 11:35:13 crc kubenswrapper[4867]: I0126 11:35:13.397361 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2af057d6-8429-47ba-9433-2a3ee9ffd26c-config\") pod \"2af057d6-8429-47ba-9433-2a3ee9ffd26c\" (UID: \"2af057d6-8429-47ba-9433-2a3ee9ffd26c\") " Jan 26 11:35:13 crc kubenswrapper[4867]: I0126 11:35:13.397603 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2af057d6-8429-47ba-9433-2a3ee9ffd26c-dns-svc\") pod \"2af057d6-8429-47ba-9433-2a3ee9ffd26c\" (UID: \"2af057d6-8429-47ba-9433-2a3ee9ffd26c\") " Jan 26 11:35:13 crc kubenswrapper[4867]: I0126 11:35:13.398099 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8vk7s\" (UniqueName: \"kubernetes.io/projected/2af057d6-8429-47ba-9433-2a3ee9ffd26c-kube-api-access-8vk7s\") pod \"2af057d6-8429-47ba-9433-2a3ee9ffd26c\" (UID: \"2af057d6-8429-47ba-9433-2a3ee9ffd26c\") " Jan 26 11:35:13 crc kubenswrapper[4867]: I0126 11:35:13.402564 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded 
for volume "kubernetes.io/projected/2af057d6-8429-47ba-9433-2a3ee9ffd26c-kube-api-access-8vk7s" (OuterVolumeSpecName: "kube-api-access-8vk7s") pod "2af057d6-8429-47ba-9433-2a3ee9ffd26c" (UID: "2af057d6-8429-47ba-9433-2a3ee9ffd26c"). InnerVolumeSpecName "kube-api-access-8vk7s". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 11:35:13 crc kubenswrapper[4867]: I0126 11:35:13.421318 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2af057d6-8429-47ba-9433-2a3ee9ffd26c-config" (OuterVolumeSpecName: "config") pod "2af057d6-8429-47ba-9433-2a3ee9ffd26c" (UID: "2af057d6-8429-47ba-9433-2a3ee9ffd26c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 11:35:13 crc kubenswrapper[4867]: I0126 11:35:13.423063 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2af057d6-8429-47ba-9433-2a3ee9ffd26c-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "2af057d6-8429-47ba-9433-2a3ee9ffd26c" (UID: "2af057d6-8429-47ba-9433-2a3ee9ffd26c"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 11:35:13 crc kubenswrapper[4867]: I0126 11:35:13.499985 4867 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2af057d6-8429-47ba-9433-2a3ee9ffd26c-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 26 11:35:13 crc kubenswrapper[4867]: I0126 11:35:13.500027 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8vk7s\" (UniqueName: \"kubernetes.io/projected/2af057d6-8429-47ba-9433-2a3ee9ffd26c-kube-api-access-8vk7s\") on node \"crc\" DevicePath \"\"" Jan 26 11:35:13 crc kubenswrapper[4867]: I0126 11:35:13.500039 4867 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2af057d6-8429-47ba-9433-2a3ee9ffd26c-config\") on node \"crc\" DevicePath \"\"" Jan 26 11:35:13 crc kubenswrapper[4867]: I0126 11:35:13.806134 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/3f128154-6619-4556-be1b-73e44d4f7df1-etc-swift\") pod \"swift-storage-0\" (UID: \"3f128154-6619-4556-be1b-73e44d4f7df1\") " pod="openstack/swift-storage-0" Jan 26 11:35:13 crc kubenswrapper[4867]: E0126 11:35:13.806485 4867 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Jan 26 11:35:13 crc kubenswrapper[4867]: E0126 11:35:13.806539 4867 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Jan 26 11:35:13 crc kubenswrapper[4867]: E0126 11:35:13.806633 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3f128154-6619-4556-be1b-73e44d4f7df1-etc-swift podName:3f128154-6619-4556-be1b-73e44d4f7df1 nodeName:}" failed. No retries permitted until 2026-01-26 11:35:14.80660677 +0000 UTC m=+1064.505181680 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/3f128154-6619-4556-be1b-73e44d4f7df1-etc-swift") pod "swift-storage-0" (UID: "3f128154-6619-4556-be1b-73e44d4f7df1") : configmap "swift-ring-files" not found Jan 26 11:35:14 crc kubenswrapper[4867]: I0126 11:35:14.006472 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-47hhv" Jan 26 11:35:14 crc kubenswrapper[4867]: I0126 11:35:14.009802 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-47hhv" event={"ID":"2af057d6-8429-47ba-9433-2a3ee9ffd26c","Type":"ContainerDied","Data":"cdf9c48c41fa88fb83a9563e34a7d19cfa69d5e015340934cff1cc66808dce02"} Jan 26 11:35:14 crc kubenswrapper[4867]: I0126 11:35:14.009929 4867 scope.go:117] "RemoveContainer" containerID="492944fa7d5058b4f03ddcd4fb02677e40f0d11682e183cd75e1e2b4c928b79d" Jan 26 11:35:14 crc kubenswrapper[4867]: I0126 11:35:14.022327 4867 generic.go:334] "Generic (PLEG): container finished" podID="646d1a9e-dc98-477f-853a-15ce192f0b52" containerID="798b331fa8887a16e93ea881b66c42623ad04d0a66d97e46c7679f43e45176ad" exitCode=0 Jan 26 11:35:14 crc kubenswrapper[4867]: I0126 11:35:14.022447 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7cb5889db5-6f5lk" event={"ID":"646d1a9e-dc98-477f-853a-15ce192f0b52","Type":"ContainerDied","Data":"798b331fa8887a16e93ea881b66c42623ad04d0a66d97e46c7679f43e45176ad"} Jan 26 11:35:14 crc kubenswrapper[4867]: I0126 11:35:14.022475 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7cb5889db5-6f5lk" event={"ID":"646d1a9e-dc98-477f-853a-15ce192f0b52","Type":"ContainerStarted","Data":"596957d660e3a075258728817c34c8442d0a94154f3fd1618a82506d7cbc15a0"} Jan 26 11:35:14 crc kubenswrapper[4867]: I0126 11:35:14.035928 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-4f5h4" 
event={"ID":"211a1bec-4387-4bbf-a034-56dd9396676d","Type":"ContainerStarted","Data":"1a9adceb887ed89712700069ce0b2ad40ffddc69493e28d8e678f7ae8ce27688"} Jan 26 11:35:14 crc kubenswrapper[4867]: I0126 11:35:14.035997 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-4f5h4" Jan 26 11:35:14 crc kubenswrapper[4867]: I0126 11:35:14.036096 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-4f5h4" Jan 26 11:35:14 crc kubenswrapper[4867]: I0126 11:35:14.134733 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-47hhv"] Jan 26 11:35:14 crc kubenswrapper[4867]: I0126 11:35:14.151744 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-47hhv"] Jan 26 11:35:14 crc kubenswrapper[4867]: I0126 11:35:14.163559 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-ovs-4f5h4" podStartSLOduration=20.169962014 podStartE2EDuration="29.163529724s" podCreationTimestamp="2026-01-26 11:34:45 +0000 UTC" firstStartedPulling="2026-01-26 11:35:01.488325736 +0000 UTC m=+1051.186900646" lastFinishedPulling="2026-01-26 11:35:10.481893446 +0000 UTC m=+1060.180468356" observedRunningTime="2026-01-26 11:35:14.106654543 +0000 UTC m=+1063.805229453" watchObservedRunningTime="2026-01-26 11:35:14.163529724 +0000 UTC m=+1063.862104654" Jan 26 11:35:14 crc kubenswrapper[4867]: I0126 11:35:14.574306 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2af057d6-8429-47ba-9433-2a3ee9ffd26c" path="/var/lib/kubelet/pods/2af057d6-8429-47ba-9433-2a3ee9ffd26c/volumes" Jan 26 11:35:14 crc kubenswrapper[4867]: I0126 11:35:14.832236 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/3f128154-6619-4556-be1b-73e44d4f7df1-etc-swift\") pod \"swift-storage-0\" (UID: 
\"3f128154-6619-4556-be1b-73e44d4f7df1\") " pod="openstack/swift-storage-0" Jan 26 11:35:14 crc kubenswrapper[4867]: E0126 11:35:14.832418 4867 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Jan 26 11:35:14 crc kubenswrapper[4867]: E0126 11:35:14.832454 4867 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Jan 26 11:35:14 crc kubenswrapper[4867]: E0126 11:35:14.832526 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3f128154-6619-4556-be1b-73e44d4f7df1-etc-swift podName:3f128154-6619-4556-be1b-73e44d4f7df1 nodeName:}" failed. No retries permitted until 2026-01-26 11:35:16.832507108 +0000 UTC m=+1066.531082018 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/3f128154-6619-4556-be1b-73e44d4f7df1-etc-swift") pod "swift-storage-0" (UID: "3f128154-6619-4556-be1b-73e44d4f7df1") : configmap "swift-ring-files" not found Jan 26 11:35:15 crc kubenswrapper[4867]: I0126 11:35:15.044243 4867 generic.go:334] "Generic (PLEG): container finished" podID="9305cd67-bbb5-45e9-ab35-6a34a717dff8" containerID="05b98fd1a628cbe6ab3d39ec85a5c95efcc799d9d103dd21148ed7dae92764bb" exitCode=0 Jan 26 11:35:15 crc kubenswrapper[4867]: I0126 11:35:15.044264 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"9305cd67-bbb5-45e9-ab35-6a34a717dff8","Type":"ContainerDied","Data":"05b98fd1a628cbe6ab3d39ec85a5c95efcc799d9d103dd21148ed7dae92764bb"} Jan 26 11:35:15 crc kubenswrapper[4867]: I0126 11:35:15.051121 4867 generic.go:334] "Generic (PLEG): container finished" podID="fd3b4566-15b8-4c50-bc5e-76c5a6907311" containerID="142e7a36fd0661a259595de1c818da73566edc61c6e6d034858870f719d856b2" exitCode=0 Jan 26 11:35:15 crc kubenswrapper[4867]: I0126 11:35:15.051537 4867 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"fd3b4566-15b8-4c50-bc5e-76c5a6907311","Type":"ContainerDied","Data":"142e7a36fd0661a259595de1c818da73566edc61c6e6d034858870f719d856b2"} Jan 26 11:35:16 crc kubenswrapper[4867]: I0126 11:35:16.063903 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"28f25dc5-093b-4b0a-b1fa-290241e9bccc","Type":"ContainerStarted","Data":"557c988cda0cee3a8d00814825e607a226f305c94ded18bba0c00816dab1f303"} Jan 26 11:35:16 crc kubenswrapper[4867]: I0126 11:35:16.067673 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"9305cd67-bbb5-45e9-ab35-6a34a717dff8","Type":"ContainerStarted","Data":"367152d2c28a30713b9ac55f8c48a8a38f2f0ef3cfe159142dcaa8caa9d95292"} Jan 26 11:35:16 crc kubenswrapper[4867]: I0126 11:35:16.070393 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"24fccd97-ac62-4d86-971f-59e4fc780888","Type":"ContainerStarted","Data":"13de69470a5597f23d2c253271e79c6bfa305cbfdf1aa783d5f0942bdf5637e1"} Jan 26 11:35:16 crc kubenswrapper[4867]: I0126 11:35:16.073125 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7cb5889db5-6f5lk" event={"ID":"646d1a9e-dc98-477f-853a-15ce192f0b52","Type":"ContainerStarted","Data":"66cf900d35fcf2e135453ca05e21771aadb2452f418c1f334ea853a6eeba10cf"} Jan 26 11:35:16 crc kubenswrapper[4867]: I0126 11:35:16.076057 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"fd3b4566-15b8-4c50-bc5e-76c5a6907311","Type":"ContainerStarted","Data":"c5d17dd784b762050ba999074f03b95756ccf75f2d32b3309671c572fd411711"} Jan 26 11:35:16 crc kubenswrapper[4867]: I0126 11:35:16.111590 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-cell1-galera-0" podStartSLOduration=26.934415865 podStartE2EDuration="38.111567558s" 
podCreationTimestamp="2026-01-26 11:34:38 +0000 UTC" firstStartedPulling="2026-01-26 11:34:59.316463951 +0000 UTC m=+1049.015038861" lastFinishedPulling="2026-01-26 11:35:10.493615644 +0000 UTC m=+1060.192190554" observedRunningTime="2026-01-26 11:35:16.105414015 +0000 UTC m=+1065.803988925" watchObservedRunningTime="2026-01-26 11:35:16.111567558 +0000 UTC m=+1065.810142468" Jan 26 11:35:16 crc kubenswrapper[4867]: I0126 11:35:16.865254 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/3f128154-6619-4556-be1b-73e44d4f7df1-etc-swift\") pod \"swift-storage-0\" (UID: \"3f128154-6619-4556-be1b-73e44d4f7df1\") " pod="openstack/swift-storage-0" Jan 26 11:35:16 crc kubenswrapper[4867]: E0126 11:35:16.865506 4867 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Jan 26 11:35:16 crc kubenswrapper[4867]: E0126 11:35:16.865847 4867 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Jan 26 11:35:16 crc kubenswrapper[4867]: E0126 11:35:16.865926 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3f128154-6619-4556-be1b-73e44d4f7df1-etc-swift podName:3f128154-6619-4556-be1b-73e44d4f7df1 nodeName:}" failed. No retries permitted until 2026-01-26 11:35:20.865905559 +0000 UTC m=+1070.564480469 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/3f128154-6619-4556-be1b-73e44d4f7df1-etc-swift") pod "swift-storage-0" (UID: "3f128154-6619-4556-be1b-73e44d4f7df1") : configmap "swift-ring-files" not found Jan 26 11:35:17 crc kubenswrapper[4867]: I0126 11:35:17.050133 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-ring-rebalance-s8jqh"] Jan 26 11:35:17 crc kubenswrapper[4867]: E0126 11:35:17.050634 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2af057d6-8429-47ba-9433-2a3ee9ffd26c" containerName="init" Jan 26 11:35:17 crc kubenswrapper[4867]: I0126 11:35:17.050661 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="2af057d6-8429-47ba-9433-2a3ee9ffd26c" containerName="init" Jan 26 11:35:17 crc kubenswrapper[4867]: I0126 11:35:17.050864 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="2af057d6-8429-47ba-9433-2a3ee9ffd26c" containerName="init" Jan 26 11:35:17 crc kubenswrapper[4867]: I0126 11:35:17.051592 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-s8jqh" Jan 26 11:35:17 crc kubenswrapper[4867]: I0126 11:35:17.053795 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-scripts" Jan 26 11:35:17 crc kubenswrapper[4867]: I0126 11:35:17.058331 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-config-data" Jan 26 11:35:17 crc kubenswrapper[4867]: I0126 11:35:17.058422 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data" Jan 26 11:35:17 crc kubenswrapper[4867]: I0126 11:35:17.063902 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-s8jqh"] Jan 26 11:35:17 crc kubenswrapper[4867]: I0126 11:35:17.070024 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/c491453c-4aa8-458a-8ee3-42475e7678f4-dispersionconf\") pod \"swift-ring-rebalance-s8jqh\" (UID: \"c491453c-4aa8-458a-8ee3-42475e7678f4\") " pod="openstack/swift-ring-rebalance-s8jqh" Jan 26 11:35:17 crc kubenswrapper[4867]: I0126 11:35:17.070097 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c491453c-4aa8-458a-8ee3-42475e7678f4-scripts\") pod \"swift-ring-rebalance-s8jqh\" (UID: \"c491453c-4aa8-458a-8ee3-42475e7678f4\") " pod="openstack/swift-ring-rebalance-s8jqh" Jan 26 11:35:17 crc kubenswrapper[4867]: I0126 11:35:17.070129 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c491453c-4aa8-458a-8ee3-42475e7678f4-combined-ca-bundle\") pod \"swift-ring-rebalance-s8jqh\" (UID: \"c491453c-4aa8-458a-8ee3-42475e7678f4\") " pod="openstack/swift-ring-rebalance-s8jqh" Jan 26 11:35:17 crc kubenswrapper[4867]: I0126 11:35:17.070155 
4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/c491453c-4aa8-458a-8ee3-42475e7678f4-swiftconf\") pod \"swift-ring-rebalance-s8jqh\" (UID: \"c491453c-4aa8-458a-8ee3-42475e7678f4\") " pod="openstack/swift-ring-rebalance-s8jqh" Jan 26 11:35:17 crc kubenswrapper[4867]: I0126 11:35:17.070194 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/c491453c-4aa8-458a-8ee3-42475e7678f4-ring-data-devices\") pod \"swift-ring-rebalance-s8jqh\" (UID: \"c491453c-4aa8-458a-8ee3-42475e7678f4\") " pod="openstack/swift-ring-rebalance-s8jqh" Jan 26 11:35:17 crc kubenswrapper[4867]: I0126 11:35:17.070317 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dw7wr\" (UniqueName: \"kubernetes.io/projected/c491453c-4aa8-458a-8ee3-42475e7678f4-kube-api-access-dw7wr\") pod \"swift-ring-rebalance-s8jqh\" (UID: \"c491453c-4aa8-458a-8ee3-42475e7678f4\") " pod="openstack/swift-ring-rebalance-s8jqh" Jan 26 11:35:17 crc kubenswrapper[4867]: I0126 11:35:17.070336 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/c491453c-4aa8-458a-8ee3-42475e7678f4-etc-swift\") pod \"swift-ring-rebalance-s8jqh\" (UID: \"c491453c-4aa8-458a-8ee3-42475e7678f4\") " pod="openstack/swift-ring-rebalance-s8jqh" Jan 26 11:35:17 crc kubenswrapper[4867]: I0126 11:35:17.089643 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-7cb5889db5-6f5lk" Jan 26 11:35:17 crc kubenswrapper[4867]: I0126 11:35:17.132350 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-nb-0" podStartSLOduration=17.551566869 podStartE2EDuration="32.132318782s" podCreationTimestamp="2026-01-26 
11:34:45 +0000 UTC" firstStartedPulling="2026-01-26 11:35:01.056035773 +0000 UTC m=+1050.754610683" lastFinishedPulling="2026-01-26 11:35:15.636787686 +0000 UTC m=+1065.335362596" observedRunningTime="2026-01-26 11:35:17.109047031 +0000 UTC m=+1066.807621951" watchObservedRunningTime="2026-01-26 11:35:17.132318782 +0000 UTC m=+1066.830893692" Jan 26 11:35:17 crc kubenswrapper[4867]: I0126 11:35:17.134797 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-sb-0" podStartSLOduration=14.533606096 podStartE2EDuration="29.134790911s" podCreationTimestamp="2026-01-26 11:34:48 +0000 UTC" firstStartedPulling="2026-01-26 11:35:01.045792127 +0000 UTC m=+1050.744367037" lastFinishedPulling="2026-01-26 11:35:15.646976942 +0000 UTC m=+1065.345551852" observedRunningTime="2026-01-26 11:35:17.12903981 +0000 UTC m=+1066.827614750" watchObservedRunningTime="2026-01-26 11:35:17.134790911 +0000 UTC m=+1066.833365821" Jan 26 11:35:17 crc kubenswrapper[4867]: I0126 11:35:17.162206 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-7cb5889db5-6f5lk" podStartSLOduration=5.162179877 podStartE2EDuration="5.162179877s" podCreationTimestamp="2026-01-26 11:35:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 11:35:17.156969852 +0000 UTC m=+1066.855544752" watchObservedRunningTime="2026-01-26 11:35:17.162179877 +0000 UTC m=+1066.860754787" Jan 26 11:35:17 crc kubenswrapper[4867]: I0126 11:35:17.171565 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dw7wr\" (UniqueName: \"kubernetes.io/projected/c491453c-4aa8-458a-8ee3-42475e7678f4-kube-api-access-dw7wr\") pod \"swift-ring-rebalance-s8jqh\" (UID: \"c491453c-4aa8-458a-8ee3-42475e7678f4\") " pod="openstack/swift-ring-rebalance-s8jqh" Jan 26 11:35:17 crc kubenswrapper[4867]: I0126 11:35:17.171631 4867 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/c491453c-4aa8-458a-8ee3-42475e7678f4-etc-swift\") pod \"swift-ring-rebalance-s8jqh\" (UID: \"c491453c-4aa8-458a-8ee3-42475e7678f4\") " pod="openstack/swift-ring-rebalance-s8jqh" Jan 26 11:35:17 crc kubenswrapper[4867]: I0126 11:35:17.171699 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/c491453c-4aa8-458a-8ee3-42475e7678f4-dispersionconf\") pod \"swift-ring-rebalance-s8jqh\" (UID: \"c491453c-4aa8-458a-8ee3-42475e7678f4\") " pod="openstack/swift-ring-rebalance-s8jqh" Jan 26 11:35:17 crc kubenswrapper[4867]: I0126 11:35:17.171750 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c491453c-4aa8-458a-8ee3-42475e7678f4-combined-ca-bundle\") pod \"swift-ring-rebalance-s8jqh\" (UID: \"c491453c-4aa8-458a-8ee3-42475e7678f4\") " pod="openstack/swift-ring-rebalance-s8jqh" Jan 26 11:35:17 crc kubenswrapper[4867]: I0126 11:35:17.171769 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c491453c-4aa8-458a-8ee3-42475e7678f4-scripts\") pod \"swift-ring-rebalance-s8jqh\" (UID: \"c491453c-4aa8-458a-8ee3-42475e7678f4\") " pod="openstack/swift-ring-rebalance-s8jqh" Jan 26 11:35:17 crc kubenswrapper[4867]: I0126 11:35:17.171786 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/c491453c-4aa8-458a-8ee3-42475e7678f4-swiftconf\") pod \"swift-ring-rebalance-s8jqh\" (UID: \"c491453c-4aa8-458a-8ee3-42475e7678f4\") " pod="openstack/swift-ring-rebalance-s8jqh" Jan 26 11:35:17 crc kubenswrapper[4867]: I0126 11:35:17.171823 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ring-data-devices\" 
(UniqueName: \"kubernetes.io/configmap/c491453c-4aa8-458a-8ee3-42475e7678f4-ring-data-devices\") pod \"swift-ring-rebalance-s8jqh\" (UID: \"c491453c-4aa8-458a-8ee3-42475e7678f4\") " pod="openstack/swift-ring-rebalance-s8jqh" Jan 26 11:35:17 crc kubenswrapper[4867]: I0126 11:35:17.172560 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/c491453c-4aa8-458a-8ee3-42475e7678f4-etc-swift\") pod \"swift-ring-rebalance-s8jqh\" (UID: \"c491453c-4aa8-458a-8ee3-42475e7678f4\") " pod="openstack/swift-ring-rebalance-s8jqh" Jan 26 11:35:17 crc kubenswrapper[4867]: I0126 11:35:17.173135 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/c491453c-4aa8-458a-8ee3-42475e7678f4-ring-data-devices\") pod \"swift-ring-rebalance-s8jqh\" (UID: \"c491453c-4aa8-458a-8ee3-42475e7678f4\") " pod="openstack/swift-ring-rebalance-s8jqh" Jan 26 11:35:17 crc kubenswrapper[4867]: I0126 11:35:17.173593 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c491453c-4aa8-458a-8ee3-42475e7678f4-scripts\") pod \"swift-ring-rebalance-s8jqh\" (UID: \"c491453c-4aa8-458a-8ee3-42475e7678f4\") " pod="openstack/swift-ring-rebalance-s8jqh" Jan 26 11:35:17 crc kubenswrapper[4867]: I0126 11:35:17.188970 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-galera-0" podStartSLOduration=29.576517531 podStartE2EDuration="40.188946896s" podCreationTimestamp="2026-01-26 11:34:37 +0000 UTC" firstStartedPulling="2026-01-26 11:34:59.889125781 +0000 UTC m=+1049.587700701" lastFinishedPulling="2026-01-26 11:35:10.501555156 +0000 UTC m=+1060.200130066" observedRunningTime="2026-01-26 11:35:17.18267425 +0000 UTC m=+1066.881249150" watchObservedRunningTime="2026-01-26 11:35:17.188946896 +0000 UTC m=+1066.887521806" Jan 26 11:35:17 crc kubenswrapper[4867]: 
I0126 11:35:17.190852 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/c491453c-4aa8-458a-8ee3-42475e7678f4-dispersionconf\") pod \"swift-ring-rebalance-s8jqh\" (UID: \"c491453c-4aa8-458a-8ee3-42475e7678f4\") " pod="openstack/swift-ring-rebalance-s8jqh" Jan 26 11:35:17 crc kubenswrapper[4867]: I0126 11:35:17.191499 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c491453c-4aa8-458a-8ee3-42475e7678f4-combined-ca-bundle\") pod \"swift-ring-rebalance-s8jqh\" (UID: \"c491453c-4aa8-458a-8ee3-42475e7678f4\") " pod="openstack/swift-ring-rebalance-s8jqh" Jan 26 11:35:17 crc kubenswrapper[4867]: I0126 11:35:17.197790 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/c491453c-4aa8-458a-8ee3-42475e7678f4-swiftconf\") pod \"swift-ring-rebalance-s8jqh\" (UID: \"c491453c-4aa8-458a-8ee3-42475e7678f4\") " pod="openstack/swift-ring-rebalance-s8jqh" Jan 26 11:35:17 crc kubenswrapper[4867]: I0126 11:35:17.199317 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dw7wr\" (UniqueName: \"kubernetes.io/projected/c491453c-4aa8-458a-8ee3-42475e7678f4-kube-api-access-dw7wr\") pod \"swift-ring-rebalance-s8jqh\" (UID: \"c491453c-4aa8-458a-8ee3-42475e7678f4\") " pod="openstack/swift-ring-rebalance-s8jqh" Jan 26 11:35:17 crc kubenswrapper[4867]: I0126 11:35:17.374826 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-s8jqh" Jan 26 11:35:17 crc kubenswrapper[4867]: I0126 11:35:17.816614 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-s8jqh"] Jan 26 11:35:18 crc kubenswrapper[4867]: I0126 11:35:18.098915 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-s8jqh" event={"ID":"c491453c-4aa8-458a-8ee3-42475e7678f4","Type":"ContainerStarted","Data":"5593d242d53a2b2d9e492b27f8907490c6788cbae28929e5cc4dda4f6cca0e1a"} Jan 26 11:35:18 crc kubenswrapper[4867]: I0126 11:35:18.412443 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-galera-0" Jan 26 11:35:18 crc kubenswrapper[4867]: I0126 11:35:18.412947 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-galera-0" Jan 26 11:35:19 crc kubenswrapper[4867]: I0126 11:35:19.559991 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-nb-0" Jan 26 11:35:19 crc kubenswrapper[4867]: I0126 11:35:19.570446 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-sb-0" Jan 26 11:35:19 crc kubenswrapper[4867]: I0126 11:35:19.570509 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-sb-0" Jan 26 11:35:19 crc kubenswrapper[4867]: I0126 11:35:19.609135 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-nb-0" Jan 26 11:35:19 crc kubenswrapper[4867]: I0126 11:35:19.609879 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-sb-0" Jan 26 11:35:19 crc kubenswrapper[4867]: I0126 11:35:19.686782 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-cell1-galera-0" Jan 26 11:35:19 crc kubenswrapper[4867]: I0126 11:35:19.687600 4867 kubelet.go:2542] 
"SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-cell1-galera-0" Jan 26 11:35:20 crc kubenswrapper[4867]: I0126 11:35:20.114584 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-nb-0" Jan 26 11:35:20 crc kubenswrapper[4867]: I0126 11:35:20.170254 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-sb-0" Jan 26 11:35:20 crc kubenswrapper[4867]: I0126 11:35:20.180523 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-nb-0" Jan 26 11:35:20 crc kubenswrapper[4867]: I0126 11:35:20.464057 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-wg9wq"] Jan 26 11:35:20 crc kubenswrapper[4867]: I0126 11:35:20.533277 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-8cc7fc4dc-4z4hp"] Jan 26 11:35:20 crc kubenswrapper[4867]: I0126 11:35:20.535844 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-8cc7fc4dc-4z4hp" Jan 26 11:35:20 crc kubenswrapper[4867]: I0126 11:35:20.540506 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-sb" Jan 26 11:35:20 crc kubenswrapper[4867]: I0126 11:35:20.557273 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-8cc7fc4dc-4z4hp"] Jan 26 11:35:20 crc kubenswrapper[4867]: I0126 11:35:20.671959 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/70cb058d-2165-416d-933a-6b4eeabf42fd-ovsdbserver-sb\") pod \"dnsmasq-dns-8cc7fc4dc-4z4hp\" (UID: \"70cb058d-2165-416d-933a-6b4eeabf42fd\") " pod="openstack/dnsmasq-dns-8cc7fc4dc-4z4hp" Jan 26 11:35:20 crc kubenswrapper[4867]: I0126 11:35:20.672162 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/70cb058d-2165-416d-933a-6b4eeabf42fd-config\") pod \"dnsmasq-dns-8cc7fc4dc-4z4hp\" (UID: \"70cb058d-2165-416d-933a-6b4eeabf42fd\") " pod="openstack/dnsmasq-dns-8cc7fc4dc-4z4hp" Jan 26 11:35:20 crc kubenswrapper[4867]: I0126 11:35:20.672192 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c6hxq\" (UniqueName: \"kubernetes.io/projected/70cb058d-2165-416d-933a-6b4eeabf42fd-kube-api-access-c6hxq\") pod \"dnsmasq-dns-8cc7fc4dc-4z4hp\" (UID: \"70cb058d-2165-416d-933a-6b4eeabf42fd\") " pod="openstack/dnsmasq-dns-8cc7fc4dc-4z4hp" Jan 26 11:35:20 crc kubenswrapper[4867]: I0126 11:35:20.672251 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/70cb058d-2165-416d-933a-6b4eeabf42fd-dns-svc\") pod \"dnsmasq-dns-8cc7fc4dc-4z4hp\" (UID: \"70cb058d-2165-416d-933a-6b4eeabf42fd\") " 
pod="openstack/dnsmasq-dns-8cc7fc4dc-4z4hp" Jan 26 11:35:20 crc kubenswrapper[4867]: I0126 11:35:20.712984 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-metrics-wsrcd"] Jan 26 11:35:20 crc kubenswrapper[4867]: I0126 11:35:20.721806 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-metrics-wsrcd" Jan 26 11:35:20 crc kubenswrapper[4867]: I0126 11:35:20.728607 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-metrics-config" Jan 26 11:35:20 crc kubenswrapper[4867]: I0126 11:35:20.761548 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-wsrcd"] Jan 26 11:35:20 crc kubenswrapper[4867]: I0126 11:35:20.776164 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/515623f1-c4bb-4522-ab0d-00138e1d0d0d-combined-ca-bundle\") pod \"ovn-controller-metrics-wsrcd\" (UID: \"515623f1-c4bb-4522-ab0d-00138e1d0d0d\") " pod="openstack/ovn-controller-metrics-wsrcd" Jan 26 11:35:20 crc kubenswrapper[4867]: I0126 11:35:20.776410 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/70cb058d-2165-416d-933a-6b4eeabf42fd-config\") pod \"dnsmasq-dns-8cc7fc4dc-4z4hp\" (UID: \"70cb058d-2165-416d-933a-6b4eeabf42fd\") " pod="openstack/dnsmasq-dns-8cc7fc4dc-4z4hp" Jan 26 11:35:20 crc kubenswrapper[4867]: I0126 11:35:20.776457 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c6hxq\" (UniqueName: \"kubernetes.io/projected/70cb058d-2165-416d-933a-6b4eeabf42fd-kube-api-access-c6hxq\") pod \"dnsmasq-dns-8cc7fc4dc-4z4hp\" (UID: \"70cb058d-2165-416d-933a-6b4eeabf42fd\") " pod="openstack/dnsmasq-dns-8cc7fc4dc-4z4hp" Jan 26 11:35:20 crc kubenswrapper[4867]: I0126 11:35:20.776482 4867 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9xz99\" (UniqueName: \"kubernetes.io/projected/515623f1-c4bb-4522-ab0d-00138e1d0d0d-kube-api-access-9xz99\") pod \"ovn-controller-metrics-wsrcd\" (UID: \"515623f1-c4bb-4522-ab0d-00138e1d0d0d\") " pod="openstack/ovn-controller-metrics-wsrcd" Jan 26 11:35:20 crc kubenswrapper[4867]: I0126 11:35:20.776522 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/70cb058d-2165-416d-933a-6b4eeabf42fd-dns-svc\") pod \"dnsmasq-dns-8cc7fc4dc-4z4hp\" (UID: \"70cb058d-2165-416d-933a-6b4eeabf42fd\") " pod="openstack/dnsmasq-dns-8cc7fc4dc-4z4hp" Jan 26 11:35:20 crc kubenswrapper[4867]: I0126 11:35:20.776548 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/515623f1-c4bb-4522-ab0d-00138e1d0d0d-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-wsrcd\" (UID: \"515623f1-c4bb-4522-ab0d-00138e1d0d0d\") " pod="openstack/ovn-controller-metrics-wsrcd" Jan 26 11:35:20 crc kubenswrapper[4867]: I0126 11:35:20.776589 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/70cb058d-2165-416d-933a-6b4eeabf42fd-ovsdbserver-sb\") pod \"dnsmasq-dns-8cc7fc4dc-4z4hp\" (UID: \"70cb058d-2165-416d-933a-6b4eeabf42fd\") " pod="openstack/dnsmasq-dns-8cc7fc4dc-4z4hp" Jan 26 11:35:20 crc kubenswrapper[4867]: I0126 11:35:20.776613 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/515623f1-c4bb-4522-ab0d-00138e1d0d0d-config\") pod \"ovn-controller-metrics-wsrcd\" (UID: \"515623f1-c4bb-4522-ab0d-00138e1d0d0d\") " pod="openstack/ovn-controller-metrics-wsrcd" Jan 26 11:35:20 crc kubenswrapper[4867]: I0126 11:35:20.776847 4867 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/515623f1-c4bb-4522-ab0d-00138e1d0d0d-ovn-rundir\") pod \"ovn-controller-metrics-wsrcd\" (UID: \"515623f1-c4bb-4522-ab0d-00138e1d0d0d\") " pod="openstack/ovn-controller-metrics-wsrcd" Jan 26 11:35:20 crc kubenswrapper[4867]: I0126 11:35:20.776915 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/515623f1-c4bb-4522-ab0d-00138e1d0d0d-ovs-rundir\") pod \"ovn-controller-metrics-wsrcd\" (UID: \"515623f1-c4bb-4522-ab0d-00138e1d0d0d\") " pod="openstack/ovn-controller-metrics-wsrcd" Jan 26 11:35:20 crc kubenswrapper[4867]: I0126 11:35:20.779267 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/70cb058d-2165-416d-933a-6b4eeabf42fd-dns-svc\") pod \"dnsmasq-dns-8cc7fc4dc-4z4hp\" (UID: \"70cb058d-2165-416d-933a-6b4eeabf42fd\") " pod="openstack/dnsmasq-dns-8cc7fc4dc-4z4hp" Jan 26 11:35:20 crc kubenswrapper[4867]: I0126 11:35:20.779542 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/70cb058d-2165-416d-933a-6b4eeabf42fd-ovsdbserver-sb\") pod \"dnsmasq-dns-8cc7fc4dc-4z4hp\" (UID: \"70cb058d-2165-416d-933a-6b4eeabf42fd\") " pod="openstack/dnsmasq-dns-8cc7fc4dc-4z4hp" Jan 26 11:35:20 crc kubenswrapper[4867]: I0126 11:35:20.799995 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/70cb058d-2165-416d-933a-6b4eeabf42fd-config\") pod \"dnsmasq-dns-8cc7fc4dc-4z4hp\" (UID: \"70cb058d-2165-416d-933a-6b4eeabf42fd\") " pod="openstack/dnsmasq-dns-8cc7fc4dc-4z4hp" Jan 26 11:35:20 crc kubenswrapper[4867]: I0126 11:35:20.829054 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-c6hxq\" (UniqueName: \"kubernetes.io/projected/70cb058d-2165-416d-933a-6b4eeabf42fd-kube-api-access-c6hxq\") pod \"dnsmasq-dns-8cc7fc4dc-4z4hp\" (UID: \"70cb058d-2165-416d-933a-6b4eeabf42fd\") " pod="openstack/dnsmasq-dns-8cc7fc4dc-4z4hp" Jan 26 11:35:20 crc kubenswrapper[4867]: I0126 11:35:20.865860 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-northd-0"] Jan 26 11:35:20 crc kubenswrapper[4867]: I0126 11:35:20.867426 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-northd-0" Jan 26 11:35:20 crc kubenswrapper[4867]: I0126 11:35:20.872824 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovnnorthd-ovndbs" Jan 26 11:35:20 crc kubenswrapper[4867]: I0126 11:35:20.873137 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-scripts" Jan 26 11:35:20 crc kubenswrapper[4867]: I0126 11:35:20.873275 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovnnorthd-ovnnorthd-dockercfg-5sdsh" Jan 26 11:35:20 crc kubenswrapper[4867]: I0126 11:35:20.873390 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-config" Jan 26 11:35:20 crc kubenswrapper[4867]: I0126 11:35:20.883288 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/515623f1-c4bb-4522-ab0d-00138e1d0d0d-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-wsrcd\" (UID: \"515623f1-c4bb-4522-ab0d-00138e1d0d0d\") " pod="openstack/ovn-controller-metrics-wsrcd" Jan 26 11:35:20 crc kubenswrapper[4867]: I0126 11:35:20.883361 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2b5a7e41-130f-46be-8c94-a5ecaf39bb2c-config\") pod \"ovn-northd-0\" (UID: \"2b5a7e41-130f-46be-8c94-a5ecaf39bb2c\") " 
pod="openstack/ovn-northd-0" Jan 26 11:35:20 crc kubenswrapper[4867]: I0126 11:35:20.883409 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/2b5a7e41-130f-46be-8c94-a5ecaf39bb2c-scripts\") pod \"ovn-northd-0\" (UID: \"2b5a7e41-130f-46be-8c94-a5ecaf39bb2c\") " pod="openstack/ovn-northd-0" Jan 26 11:35:20 crc kubenswrapper[4867]: I0126 11:35:20.883440 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/515623f1-c4bb-4522-ab0d-00138e1d0d0d-config\") pod \"ovn-controller-metrics-wsrcd\" (UID: \"515623f1-c4bb-4522-ab0d-00138e1d0d0d\") " pod="openstack/ovn-controller-metrics-wsrcd" Jan 26 11:35:20 crc kubenswrapper[4867]: I0126 11:35:20.883480 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/2b5a7e41-130f-46be-8c94-a5ecaf39bb2c-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"2b5a7e41-130f-46be-8c94-a5ecaf39bb2c\") " pod="openstack/ovn-northd-0" Jan 26 11:35:20 crc kubenswrapper[4867]: I0126 11:35:20.883510 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/2b5a7e41-130f-46be-8c94-a5ecaf39bb2c-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"2b5a7e41-130f-46be-8c94-a5ecaf39bb2c\") " pod="openstack/ovn-northd-0" Jan 26 11:35:20 crc kubenswrapper[4867]: I0126 11:35:20.883526 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rvksj\" (UniqueName: \"kubernetes.io/projected/2b5a7e41-130f-46be-8c94-a5ecaf39bb2c-kube-api-access-rvksj\") pod \"ovn-northd-0\" (UID: \"2b5a7e41-130f-46be-8c94-a5ecaf39bb2c\") " pod="openstack/ovn-northd-0" Jan 26 11:35:20 crc kubenswrapper[4867]: I0126 11:35:20.883563 4867 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/515623f1-c4bb-4522-ab0d-00138e1d0d0d-ovn-rundir\") pod \"ovn-controller-metrics-wsrcd\" (UID: \"515623f1-c4bb-4522-ab0d-00138e1d0d0d\") " pod="openstack/ovn-controller-metrics-wsrcd" Jan 26 11:35:20 crc kubenswrapper[4867]: I0126 11:35:20.883582 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/515623f1-c4bb-4522-ab0d-00138e1d0d0d-ovs-rundir\") pod \"ovn-controller-metrics-wsrcd\" (UID: \"515623f1-c4bb-4522-ab0d-00138e1d0d0d\") " pod="openstack/ovn-controller-metrics-wsrcd" Jan 26 11:35:20 crc kubenswrapper[4867]: I0126 11:35:20.883609 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/515623f1-c4bb-4522-ab0d-00138e1d0d0d-combined-ca-bundle\") pod \"ovn-controller-metrics-wsrcd\" (UID: \"515623f1-c4bb-4522-ab0d-00138e1d0d0d\") " pod="openstack/ovn-controller-metrics-wsrcd" Jan 26 11:35:20 crc kubenswrapper[4867]: I0126 11:35:20.883653 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/2b5a7e41-130f-46be-8c94-a5ecaf39bb2c-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"2b5a7e41-130f-46be-8c94-a5ecaf39bb2c\") " pod="openstack/ovn-northd-0" Jan 26 11:35:20 crc kubenswrapper[4867]: I0126 11:35:20.883682 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2b5a7e41-130f-46be-8c94-a5ecaf39bb2c-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"2b5a7e41-130f-46be-8c94-a5ecaf39bb2c\") " pod="openstack/ovn-northd-0" Jan 26 11:35:20 crc kubenswrapper[4867]: I0126 11:35:20.883704 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-9xz99\" (UniqueName: \"kubernetes.io/projected/515623f1-c4bb-4522-ab0d-00138e1d0d0d-kube-api-access-9xz99\") pod \"ovn-controller-metrics-wsrcd\" (UID: \"515623f1-c4bb-4522-ab0d-00138e1d0d0d\") " pod="openstack/ovn-controller-metrics-wsrcd" Jan 26 11:35:20 crc kubenswrapper[4867]: I0126 11:35:20.883730 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/3f128154-6619-4556-be1b-73e44d4f7df1-etc-swift\") pod \"swift-storage-0\" (UID: \"3f128154-6619-4556-be1b-73e44d4f7df1\") " pod="openstack/swift-storage-0" Jan 26 11:35:20 crc kubenswrapper[4867]: E0126 11:35:20.883922 4867 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Jan 26 11:35:20 crc kubenswrapper[4867]: E0126 11:35:20.883940 4867 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Jan 26 11:35:20 crc kubenswrapper[4867]: E0126 11:35:20.884001 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3f128154-6619-4556-be1b-73e44d4f7df1-etc-swift podName:3f128154-6619-4556-be1b-73e44d4f7df1 nodeName:}" failed. No retries permitted until 2026-01-26 11:35:28.883978549 +0000 UTC m=+1078.582553459 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/3f128154-6619-4556-be1b-73e44d4f7df1-etc-swift") pod "swift-storage-0" (UID: "3f128154-6619-4556-be1b-73e44d4f7df1") : configmap "swift-ring-files" not found Jan 26 11:35:20 crc kubenswrapper[4867]: I0126 11:35:20.888390 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/515623f1-c4bb-4522-ab0d-00138e1d0d0d-ovn-rundir\") pod \"ovn-controller-metrics-wsrcd\" (UID: \"515623f1-c4bb-4522-ab0d-00138e1d0d0d\") " pod="openstack/ovn-controller-metrics-wsrcd" Jan 26 11:35:20 crc kubenswrapper[4867]: I0126 11:35:20.889166 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/515623f1-c4bb-4522-ab0d-00138e1d0d0d-config\") pod \"ovn-controller-metrics-wsrcd\" (UID: \"515623f1-c4bb-4522-ab0d-00138e1d0d0d\") " pod="openstack/ovn-controller-metrics-wsrcd" Jan 26 11:35:20 crc kubenswrapper[4867]: I0126 11:35:20.889269 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/515623f1-c4bb-4522-ab0d-00138e1d0d0d-ovs-rundir\") pod \"ovn-controller-metrics-wsrcd\" (UID: \"515623f1-c4bb-4522-ab0d-00138e1d0d0d\") " pod="openstack/ovn-controller-metrics-wsrcd" Jan 26 11:35:20 crc kubenswrapper[4867]: I0126 11:35:20.892458 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Jan 26 11:35:20 crc kubenswrapper[4867]: I0126 11:35:20.899726 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/515623f1-c4bb-4522-ab0d-00138e1d0d0d-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-wsrcd\" (UID: \"515623f1-c4bb-4522-ab0d-00138e1d0d0d\") " pod="openstack/ovn-controller-metrics-wsrcd" Jan 26 11:35:20 crc kubenswrapper[4867]: I0126 11:35:20.900171 4867 util.go:30] "No sandbox 
for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8cc7fc4dc-4z4hp" Jan 26 11:35:20 crc kubenswrapper[4867]: I0126 11:35:20.921085 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/515623f1-c4bb-4522-ab0d-00138e1d0d0d-combined-ca-bundle\") pod \"ovn-controller-metrics-wsrcd\" (UID: \"515623f1-c4bb-4522-ab0d-00138e1d0d0d\") " pod="openstack/ovn-controller-metrics-wsrcd" Jan 26 11:35:20 crc kubenswrapper[4867]: I0126 11:35:20.926297 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7cb5889db5-6f5lk"] Jan 26 11:35:20 crc kubenswrapper[4867]: I0126 11:35:20.926636 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-7cb5889db5-6f5lk" podUID="646d1a9e-dc98-477f-853a-15ce192f0b52" containerName="dnsmasq-dns" containerID="cri-o://66cf900d35fcf2e135453ca05e21771aadb2452f418c1f334ea853a6eeba10cf" gracePeriod=10 Jan 26 11:35:20 crc kubenswrapper[4867]: I0126 11:35:20.972021 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9xz99\" (UniqueName: \"kubernetes.io/projected/515623f1-c4bb-4522-ab0d-00138e1d0d0d-kube-api-access-9xz99\") pod \"ovn-controller-metrics-wsrcd\" (UID: \"515623f1-c4bb-4522-ab0d-00138e1d0d0d\") " pod="openstack/ovn-controller-metrics-wsrcd" Jan 26 11:35:20 crc kubenswrapper[4867]: I0126 11:35:20.985083 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/2b5a7e41-130f-46be-8c94-a5ecaf39bb2c-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"2b5a7e41-130f-46be-8c94-a5ecaf39bb2c\") " pod="openstack/ovn-northd-0" Jan 26 11:35:20 crc kubenswrapper[4867]: I0126 11:35:20.985130 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rvksj\" (UniqueName: 
\"kubernetes.io/projected/2b5a7e41-130f-46be-8c94-a5ecaf39bb2c-kube-api-access-rvksj\") pod \"ovn-northd-0\" (UID: \"2b5a7e41-130f-46be-8c94-a5ecaf39bb2c\") " pod="openstack/ovn-northd-0" Jan 26 11:35:20 crc kubenswrapper[4867]: I0126 11:35:20.985205 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/2b5a7e41-130f-46be-8c94-a5ecaf39bb2c-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"2b5a7e41-130f-46be-8c94-a5ecaf39bb2c\") " pod="openstack/ovn-northd-0" Jan 26 11:35:20 crc kubenswrapper[4867]: I0126 11:35:20.985249 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2b5a7e41-130f-46be-8c94-a5ecaf39bb2c-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"2b5a7e41-130f-46be-8c94-a5ecaf39bb2c\") " pod="openstack/ovn-northd-0" Jan 26 11:35:20 crc kubenswrapper[4867]: I0126 11:35:20.985305 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2b5a7e41-130f-46be-8c94-a5ecaf39bb2c-config\") pod \"ovn-northd-0\" (UID: \"2b5a7e41-130f-46be-8c94-a5ecaf39bb2c\") " pod="openstack/ovn-northd-0" Jan 26 11:35:20 crc kubenswrapper[4867]: I0126 11:35:20.985335 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/2b5a7e41-130f-46be-8c94-a5ecaf39bb2c-scripts\") pod \"ovn-northd-0\" (UID: \"2b5a7e41-130f-46be-8c94-a5ecaf39bb2c\") " pod="openstack/ovn-northd-0" Jan 26 11:35:20 crc kubenswrapper[4867]: I0126 11:35:20.985361 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/2b5a7e41-130f-46be-8c94-a5ecaf39bb2c-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"2b5a7e41-130f-46be-8c94-a5ecaf39bb2c\") " pod="openstack/ovn-northd-0" Jan 26 11:35:20 crc kubenswrapper[4867]: 
I0126 11:35:20.987016 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2b5a7e41-130f-46be-8c94-a5ecaf39bb2c-config\") pod \"ovn-northd-0\" (UID: \"2b5a7e41-130f-46be-8c94-a5ecaf39bb2c\") " pod="openstack/ovn-northd-0" Jan 26 11:35:20 crc kubenswrapper[4867]: I0126 11:35:20.987762 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/2b5a7e41-130f-46be-8c94-a5ecaf39bb2c-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"2b5a7e41-130f-46be-8c94-a5ecaf39bb2c\") " pod="openstack/ovn-northd-0" Jan 26 11:35:20 crc kubenswrapper[4867]: I0126 11:35:20.990493 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/2b5a7e41-130f-46be-8c94-a5ecaf39bb2c-scripts\") pod \"ovn-northd-0\" (UID: \"2b5a7e41-130f-46be-8c94-a5ecaf39bb2c\") " pod="openstack/ovn-northd-0" Jan 26 11:35:20 crc kubenswrapper[4867]: I0126 11:35:20.993048 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/2b5a7e41-130f-46be-8c94-a5ecaf39bb2c-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"2b5a7e41-130f-46be-8c94-a5ecaf39bb2c\") " pod="openstack/ovn-northd-0" Jan 26 11:35:20 crc kubenswrapper[4867]: I0126 11:35:20.993272 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/2b5a7e41-130f-46be-8c94-a5ecaf39bb2c-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"2b5a7e41-130f-46be-8c94-a5ecaf39bb2c\") " pod="openstack/ovn-northd-0" Jan 26 11:35:20 crc kubenswrapper[4867]: I0126 11:35:20.996247 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2b5a7e41-130f-46be-8c94-a5ecaf39bb2c-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: 
\"2b5a7e41-130f-46be-8c94-a5ecaf39bb2c\") " pod="openstack/ovn-northd-0" Jan 26 11:35:21 crc kubenswrapper[4867]: I0126 11:35:21.012466 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rvksj\" (UniqueName: \"kubernetes.io/projected/2b5a7e41-130f-46be-8c94-a5ecaf39bb2c-kube-api-access-rvksj\") pod \"ovn-northd-0\" (UID: \"2b5a7e41-130f-46be-8c94-a5ecaf39bb2c\") " pod="openstack/ovn-northd-0" Jan 26 11:35:21 crc kubenswrapper[4867]: I0126 11:35:21.022314 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-b8fbc5445-zrbq4"] Jan 26 11:35:21 crc kubenswrapper[4867]: I0126 11:35:21.028875 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-b8fbc5445-zrbq4" Jan 26 11:35:21 crc kubenswrapper[4867]: I0126 11:35:21.035340 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-nb" Jan 26 11:35:21 crc kubenswrapper[4867]: I0126 11:35:21.059684 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-b8fbc5445-zrbq4"] Jan 26 11:35:21 crc kubenswrapper[4867]: I0126 11:35:21.075424 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-metrics-wsrcd" Jan 26 11:35:21 crc kubenswrapper[4867]: I0126 11:35:21.085991 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f69c8f7d-7b7b-476a-989a-aee2eec1e5db-dns-svc\") pod \"dnsmasq-dns-b8fbc5445-zrbq4\" (UID: \"f69c8f7d-7b7b-476a-989a-aee2eec1e5db\") " pod="openstack/dnsmasq-dns-b8fbc5445-zrbq4" Jan 26 11:35:21 crc kubenswrapper[4867]: I0126 11:35:21.086051 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f69c8f7d-7b7b-476a-989a-aee2eec1e5db-config\") pod \"dnsmasq-dns-b8fbc5445-zrbq4\" (UID: \"f69c8f7d-7b7b-476a-989a-aee2eec1e5db\") " pod="openstack/dnsmasq-dns-b8fbc5445-zrbq4" Jan 26 11:35:21 crc kubenswrapper[4867]: I0126 11:35:21.086080 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f69c8f7d-7b7b-476a-989a-aee2eec1e5db-ovsdbserver-sb\") pod \"dnsmasq-dns-b8fbc5445-zrbq4\" (UID: \"f69c8f7d-7b7b-476a-989a-aee2eec1e5db\") " pod="openstack/dnsmasq-dns-b8fbc5445-zrbq4" Jan 26 11:35:21 crc kubenswrapper[4867]: I0126 11:35:21.086149 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f69c8f7d-7b7b-476a-989a-aee2eec1e5db-ovsdbserver-nb\") pod \"dnsmasq-dns-b8fbc5445-zrbq4\" (UID: \"f69c8f7d-7b7b-476a-989a-aee2eec1e5db\") " pod="openstack/dnsmasq-dns-b8fbc5445-zrbq4" Jan 26 11:35:21 crc kubenswrapper[4867]: I0126 11:35:21.086188 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4f8kp\" (UniqueName: \"kubernetes.io/projected/f69c8f7d-7b7b-476a-989a-aee2eec1e5db-kube-api-access-4f8kp\") pod \"dnsmasq-dns-b8fbc5445-zrbq4\" (UID: 
\"f69c8f7d-7b7b-476a-989a-aee2eec1e5db\") " pod="openstack/dnsmasq-dns-b8fbc5445-zrbq4" Jan 26 11:35:21 crc kubenswrapper[4867]: I0126 11:35:21.112439 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-northd-0" Jan 26 11:35:21 crc kubenswrapper[4867]: I0126 11:35:21.131570 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7cb5889db5-6f5lk" event={"ID":"646d1a9e-dc98-477f-853a-15ce192f0b52","Type":"ContainerDied","Data":"66cf900d35fcf2e135453ca05e21771aadb2452f418c1f334ea853a6eeba10cf"} Jan 26 11:35:21 crc kubenswrapper[4867]: I0126 11:35:21.131530 4867 generic.go:334] "Generic (PLEG): container finished" podID="646d1a9e-dc98-477f-853a-15ce192f0b52" containerID="66cf900d35fcf2e135453ca05e21771aadb2452f418c1f334ea853a6eeba10cf" exitCode=0 Jan 26 11:35:21 crc kubenswrapper[4867]: I0126 11:35:21.187666 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f69c8f7d-7b7b-476a-989a-aee2eec1e5db-ovsdbserver-nb\") pod \"dnsmasq-dns-b8fbc5445-zrbq4\" (UID: \"f69c8f7d-7b7b-476a-989a-aee2eec1e5db\") " pod="openstack/dnsmasq-dns-b8fbc5445-zrbq4" Jan 26 11:35:21 crc kubenswrapper[4867]: I0126 11:35:21.188370 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4f8kp\" (UniqueName: \"kubernetes.io/projected/f69c8f7d-7b7b-476a-989a-aee2eec1e5db-kube-api-access-4f8kp\") pod \"dnsmasq-dns-b8fbc5445-zrbq4\" (UID: \"f69c8f7d-7b7b-476a-989a-aee2eec1e5db\") " pod="openstack/dnsmasq-dns-b8fbc5445-zrbq4" Jan 26 11:35:21 crc kubenswrapper[4867]: I0126 11:35:21.188566 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f69c8f7d-7b7b-476a-989a-aee2eec1e5db-ovsdbserver-nb\") pod \"dnsmasq-dns-b8fbc5445-zrbq4\" (UID: \"f69c8f7d-7b7b-476a-989a-aee2eec1e5db\") " pod="openstack/dnsmasq-dns-b8fbc5445-zrbq4" Jan 26 
11:35:21 crc kubenswrapper[4867]: I0126 11:35:21.188593 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f69c8f7d-7b7b-476a-989a-aee2eec1e5db-dns-svc\") pod \"dnsmasq-dns-b8fbc5445-zrbq4\" (UID: \"f69c8f7d-7b7b-476a-989a-aee2eec1e5db\") " pod="openstack/dnsmasq-dns-b8fbc5445-zrbq4" Jan 26 11:35:21 crc kubenswrapper[4867]: I0126 11:35:21.188660 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f69c8f7d-7b7b-476a-989a-aee2eec1e5db-config\") pod \"dnsmasq-dns-b8fbc5445-zrbq4\" (UID: \"f69c8f7d-7b7b-476a-989a-aee2eec1e5db\") " pod="openstack/dnsmasq-dns-b8fbc5445-zrbq4" Jan 26 11:35:21 crc kubenswrapper[4867]: I0126 11:35:21.188698 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f69c8f7d-7b7b-476a-989a-aee2eec1e5db-ovsdbserver-sb\") pod \"dnsmasq-dns-b8fbc5445-zrbq4\" (UID: \"f69c8f7d-7b7b-476a-989a-aee2eec1e5db\") " pod="openstack/dnsmasq-dns-b8fbc5445-zrbq4" Jan 26 11:35:21 crc kubenswrapper[4867]: I0126 11:35:21.190515 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f69c8f7d-7b7b-476a-989a-aee2eec1e5db-dns-svc\") pod \"dnsmasq-dns-b8fbc5445-zrbq4\" (UID: \"f69c8f7d-7b7b-476a-989a-aee2eec1e5db\") " pod="openstack/dnsmasq-dns-b8fbc5445-zrbq4" Jan 26 11:35:21 crc kubenswrapper[4867]: I0126 11:35:21.199245 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f69c8f7d-7b7b-476a-989a-aee2eec1e5db-config\") pod \"dnsmasq-dns-b8fbc5445-zrbq4\" (UID: \"f69c8f7d-7b7b-476a-989a-aee2eec1e5db\") " pod="openstack/dnsmasq-dns-b8fbc5445-zrbq4" Jan 26 11:35:21 crc kubenswrapper[4867]: I0126 11:35:21.199540 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f69c8f7d-7b7b-476a-989a-aee2eec1e5db-ovsdbserver-sb\") pod \"dnsmasq-dns-b8fbc5445-zrbq4\" (UID: \"f69c8f7d-7b7b-476a-989a-aee2eec1e5db\") " pod="openstack/dnsmasq-dns-b8fbc5445-zrbq4" Jan 26 11:35:21 crc kubenswrapper[4867]: I0126 11:35:21.208552 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4f8kp\" (UniqueName: \"kubernetes.io/projected/f69c8f7d-7b7b-476a-989a-aee2eec1e5db-kube-api-access-4f8kp\") pod \"dnsmasq-dns-b8fbc5445-zrbq4\" (UID: \"f69c8f7d-7b7b-476a-989a-aee2eec1e5db\") " pod="openstack/dnsmasq-dns-b8fbc5445-zrbq4" Jan 26 11:35:21 crc kubenswrapper[4867]: I0126 11:35:21.425004 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-b8fbc5445-zrbq4" Jan 26 11:35:22 crc kubenswrapper[4867]: I0126 11:35:22.248204 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-cell1-galera-0" Jan 26 11:35:22 crc kubenswrapper[4867]: I0126 11:35:22.333840 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-cell1-galera-0" Jan 26 11:35:23 crc kubenswrapper[4867]: I0126 11:35:23.404200 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-wg9wq" Jan 26 11:35:23 crc kubenswrapper[4867]: I0126 11:35:23.412585 4867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7cb5889db5-6f5lk" Jan 26 11:35:23 crc kubenswrapper[4867]: I0126 11:35:23.537623 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qb89q\" (UniqueName: \"kubernetes.io/projected/646d1a9e-dc98-477f-853a-15ce192f0b52-kube-api-access-qb89q\") pod \"646d1a9e-dc98-477f-853a-15ce192f0b52\" (UID: \"646d1a9e-dc98-477f-853a-15ce192f0b52\") " Jan 26 11:35:23 crc kubenswrapper[4867]: I0126 11:35:23.537737 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9b5wt\" (UniqueName: \"kubernetes.io/projected/e52b8a49-afec-4527-8728-f2b53c33cd94-kube-api-access-9b5wt\") pod \"e52b8a49-afec-4527-8728-f2b53c33cd94\" (UID: \"e52b8a49-afec-4527-8728-f2b53c33cd94\") " Jan 26 11:35:23 crc kubenswrapper[4867]: I0126 11:35:23.537835 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/646d1a9e-dc98-477f-853a-15ce192f0b52-config\") pod \"646d1a9e-dc98-477f-853a-15ce192f0b52\" (UID: \"646d1a9e-dc98-477f-853a-15ce192f0b52\") " Jan 26 11:35:23 crc kubenswrapper[4867]: I0126 11:35:23.537875 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e52b8a49-afec-4527-8728-f2b53c33cd94-config\") pod \"e52b8a49-afec-4527-8728-f2b53c33cd94\" (UID: \"e52b8a49-afec-4527-8728-f2b53c33cd94\") " Jan 26 11:35:23 crc kubenswrapper[4867]: I0126 11:35:23.538001 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e52b8a49-afec-4527-8728-f2b53c33cd94-dns-svc\") pod \"e52b8a49-afec-4527-8728-f2b53c33cd94\" (UID: \"e52b8a49-afec-4527-8728-f2b53c33cd94\") " Jan 26 11:35:23 crc kubenswrapper[4867]: I0126 11:35:23.538032 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: 
\"kubernetes.io/configmap/646d1a9e-dc98-477f-853a-15ce192f0b52-dns-svc\") pod \"646d1a9e-dc98-477f-853a-15ce192f0b52\" (UID: \"646d1a9e-dc98-477f-853a-15ce192f0b52\") " Jan 26 11:35:23 crc kubenswrapper[4867]: I0126 11:35:23.544942 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/646d1a9e-dc98-477f-853a-15ce192f0b52-kube-api-access-qb89q" (OuterVolumeSpecName: "kube-api-access-qb89q") pod "646d1a9e-dc98-477f-853a-15ce192f0b52" (UID: "646d1a9e-dc98-477f-853a-15ce192f0b52"). InnerVolumeSpecName "kube-api-access-qb89q". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 11:35:23 crc kubenswrapper[4867]: I0126 11:35:23.545018 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e52b8a49-afec-4527-8728-f2b53c33cd94-kube-api-access-9b5wt" (OuterVolumeSpecName: "kube-api-access-9b5wt") pod "e52b8a49-afec-4527-8728-f2b53c33cd94" (UID: "e52b8a49-afec-4527-8728-f2b53c33cd94"). InnerVolumeSpecName "kube-api-access-9b5wt". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 11:35:23 crc kubenswrapper[4867]: I0126 11:35:23.589641 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e52b8a49-afec-4527-8728-f2b53c33cd94-config" (OuterVolumeSpecName: "config") pod "e52b8a49-afec-4527-8728-f2b53c33cd94" (UID: "e52b8a49-afec-4527-8728-f2b53c33cd94"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 11:35:23 crc kubenswrapper[4867]: I0126 11:35:23.590136 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e52b8a49-afec-4527-8728-f2b53c33cd94-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "e52b8a49-afec-4527-8728-f2b53c33cd94" (UID: "e52b8a49-afec-4527-8728-f2b53c33cd94"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 11:35:23 crc kubenswrapper[4867]: I0126 11:35:23.615363 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/646d1a9e-dc98-477f-853a-15ce192f0b52-config" (OuterVolumeSpecName: "config") pod "646d1a9e-dc98-477f-853a-15ce192f0b52" (UID: "646d1a9e-dc98-477f-853a-15ce192f0b52"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 11:35:23 crc kubenswrapper[4867]: I0126 11:35:23.616712 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/646d1a9e-dc98-477f-853a-15ce192f0b52-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "646d1a9e-dc98-477f-853a-15ce192f0b52" (UID: "646d1a9e-dc98-477f-853a-15ce192f0b52"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 11:35:23 crc kubenswrapper[4867]: I0126 11:35:23.641556 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9b5wt\" (UniqueName: \"kubernetes.io/projected/e52b8a49-afec-4527-8728-f2b53c33cd94-kube-api-access-9b5wt\") on node \"crc\" DevicePath \"\"" Jan 26 11:35:23 crc kubenswrapper[4867]: I0126 11:35:23.641591 4867 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/646d1a9e-dc98-477f-853a-15ce192f0b52-config\") on node \"crc\" DevicePath \"\"" Jan 26 11:35:23 crc kubenswrapper[4867]: I0126 11:35:23.641609 4867 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e52b8a49-afec-4527-8728-f2b53c33cd94-config\") on node \"crc\" DevicePath \"\"" Jan 26 11:35:23 crc kubenswrapper[4867]: I0126 11:35:23.641618 4867 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e52b8a49-afec-4527-8728-f2b53c33cd94-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 26 11:35:23 crc kubenswrapper[4867]: I0126 11:35:23.641628 
4867 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/646d1a9e-dc98-477f-853a-15ce192f0b52-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 26 11:35:23 crc kubenswrapper[4867]: I0126 11:35:23.641637 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qb89q\" (UniqueName: \"kubernetes.io/projected/646d1a9e-dc98-477f-853a-15ce192f0b52-kube-api-access-qb89q\") on node \"crc\" DevicePath \"\"" Jan 26 11:35:23 crc kubenswrapper[4867]: I0126 11:35:23.778903 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-b8fbc5445-zrbq4"] Jan 26 11:35:23 crc kubenswrapper[4867]: I0126 11:35:23.791359 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-wsrcd"] Jan 26 11:35:23 crc kubenswrapper[4867]: W0126 11:35:23.793654 4867 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2b5a7e41_130f_46be_8c94_a5ecaf39bb2c.slice/crio-a2e6faf7a22d58355cc305527c2bb48c805b84e0a08a60952bd4b84b916c88fb WatchSource:0}: Error finding container a2e6faf7a22d58355cc305527c2bb48c805b84e0a08a60952bd4b84b916c88fb: Status 404 returned error can't find the container with id a2e6faf7a22d58355cc305527c2bb48c805b84e0a08a60952bd4b84b916c88fb Jan 26 11:35:23 crc kubenswrapper[4867]: I0126 11:35:23.798944 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Jan 26 11:35:23 crc kubenswrapper[4867]: I0126 11:35:23.962950 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-8cc7fc4dc-4z4hp"] Jan 26 11:35:24 crc kubenswrapper[4867]: I0126 11:35:24.159117 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8cc7fc4dc-4z4hp" event={"ID":"70cb058d-2165-416d-933a-6b4eeabf42fd","Type":"ContainerStarted","Data":"65627d7f3b884a577802af3fb661e51390f1f2d5929609ad836d6f358ffef9ab"} Jan 26 11:35:24 crc 
kubenswrapper[4867]: I0126 11:35:24.160370 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-s8jqh" event={"ID":"c491453c-4aa8-458a-8ee3-42475e7678f4","Type":"ContainerStarted","Data":"0b8a22863ccea531a3bb13cd37da122819fc47d06950bba2120f93f63600c55e"} Jan 26 11:35:24 crc kubenswrapper[4867]: I0126 11:35:24.164338 4867 generic.go:334] "Generic (PLEG): container finished" podID="f69c8f7d-7b7b-476a-989a-aee2eec1e5db" containerID="728cd5335638a80100234d8d588428eb897f11a8d6605fb408a28f3d61d15d8d" exitCode=0 Jan 26 11:35:24 crc kubenswrapper[4867]: I0126 11:35:24.164441 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-b8fbc5445-zrbq4" event={"ID":"f69c8f7d-7b7b-476a-989a-aee2eec1e5db","Type":"ContainerDied","Data":"728cd5335638a80100234d8d588428eb897f11a8d6605fb408a28f3d61d15d8d"} Jan 26 11:35:24 crc kubenswrapper[4867]: I0126 11:35:24.164485 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-b8fbc5445-zrbq4" event={"ID":"f69c8f7d-7b7b-476a-989a-aee2eec1e5db","Type":"ContainerStarted","Data":"91e57ae24f98e42889eda8b19a55e84dfed1ca601218fc3fa83c2c10cf0ccbdc"} Jan 26 11:35:24 crc kubenswrapper[4867]: I0126 11:35:24.167019 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-wg9wq" event={"ID":"e52b8a49-afec-4527-8728-f2b53c33cd94","Type":"ContainerDied","Data":"9bf3f9558f3c02766a654176f501a7b0a35675ba1cde288cc7d14da0a5f62abe"} Jan 26 11:35:24 crc kubenswrapper[4867]: I0126 11:35:24.167062 4867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-wg9wq" Jan 26 11:35:24 crc kubenswrapper[4867]: I0126 11:35:24.168753 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"2b5a7e41-130f-46be-8c94-a5ecaf39bb2c","Type":"ContainerStarted","Data":"a2e6faf7a22d58355cc305527c2bb48c805b84e0a08a60952bd4b84b916c88fb"} Jan 26 11:35:24 crc kubenswrapper[4867]: I0126 11:35:24.171744 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7cb5889db5-6f5lk" event={"ID":"646d1a9e-dc98-477f-853a-15ce192f0b52","Type":"ContainerDied","Data":"596957d660e3a075258728817c34c8442d0a94154f3fd1618a82506d7cbc15a0"} Jan 26 11:35:24 crc kubenswrapper[4867]: I0126 11:35:24.171829 4867 scope.go:117] "RemoveContainer" containerID="66cf900d35fcf2e135453ca05e21771aadb2452f418c1f334ea853a6eeba10cf" Jan 26 11:35:24 crc kubenswrapper[4867]: I0126 11:35:24.172062 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7cb5889db5-6f5lk" Jan 26 11:35:24 crc kubenswrapper[4867]: I0126 11:35:24.175118 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-wsrcd" event={"ID":"515623f1-c4bb-4522-ab0d-00138e1d0d0d","Type":"ContainerStarted","Data":"28664ee0c39a539c9c5778de27a95f759c4a59631b158d343388f8347c0050a3"} Jan 26 11:35:24 crc kubenswrapper[4867]: I0126 11:35:24.175143 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-wsrcd" event={"ID":"515623f1-c4bb-4522-ab0d-00138e1d0d0d","Type":"ContainerStarted","Data":"244d0dde5c1b2d70fa622e1a2ca25fa137f2795a98baa3b30b2820aa8aa4d3c6"} Jan 26 11:35:24 crc kubenswrapper[4867]: I0126 11:35:24.193628 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-ring-rebalance-s8jqh" podStartSLOduration=1.728582064 podStartE2EDuration="7.193598741s" podCreationTimestamp="2026-01-26 11:35:17 +0000 UTC" 
firstStartedPulling="2026-01-26 11:35:17.824103513 +0000 UTC m=+1067.522678423" lastFinishedPulling="2026-01-26 11:35:23.28912019 +0000 UTC m=+1072.987695100" observedRunningTime="2026-01-26 11:35:24.184258969 +0000 UTC m=+1073.882833909" watchObservedRunningTime="2026-01-26 11:35:24.193598741 +0000 UTC m=+1073.892173651" Jan 26 11:35:24 crc kubenswrapper[4867]: I0126 11:35:24.214533 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-metrics-wsrcd" podStartSLOduration=4.214502275 podStartE2EDuration="4.214502275s" podCreationTimestamp="2026-01-26 11:35:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 11:35:24.209733072 +0000 UTC m=+1073.908307982" watchObservedRunningTime="2026-01-26 11:35:24.214502275 +0000 UTC m=+1073.913077185" Jan 26 11:35:24 crc kubenswrapper[4867]: I0126 11:35:24.386145 4867 scope.go:117] "RemoveContainer" containerID="798b331fa8887a16e93ea881b66c42623ad04d0a66d97e46c7679f43e45176ad" Jan 26 11:35:24 crc kubenswrapper[4867]: I0126 11:35:24.442784 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7cb5889db5-6f5lk"] Jan 26 11:35:24 crc kubenswrapper[4867]: I0126 11:35:24.472456 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7cb5889db5-6f5lk"] Jan 26 11:35:24 crc kubenswrapper[4867]: I0126 11:35:24.486838 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-wg9wq"] Jan 26 11:35:24 crc kubenswrapper[4867]: I0126 11:35:24.492138 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-wg9wq"] Jan 26 11:35:24 crc kubenswrapper[4867]: I0126 11:35:24.574395 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="646d1a9e-dc98-477f-853a-15ce192f0b52" path="/var/lib/kubelet/pods/646d1a9e-dc98-477f-853a-15ce192f0b52/volumes" Jan 26 11:35:24 crc 
kubenswrapper[4867]: I0126 11:35:24.575779 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e52b8a49-afec-4527-8728-f2b53c33cd94" path="/var/lib/kubelet/pods/e52b8a49-afec-4527-8728-f2b53c33cd94/volumes" Jan 26 11:35:24 crc kubenswrapper[4867]: I0126 11:35:24.644980 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-galera-0" Jan 26 11:35:24 crc kubenswrapper[4867]: I0126 11:35:24.733286 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-galera-0" Jan 26 11:35:25 crc kubenswrapper[4867]: I0126 11:35:25.190847 4867 generic.go:334] "Generic (PLEG): container finished" podID="70cb058d-2165-416d-933a-6b4eeabf42fd" containerID="0312b2ee959eaeda7064c6618860ab72b58713089d8fbcd2480d377a4878c5f5" exitCode=0 Jan 26 11:35:25 crc kubenswrapper[4867]: I0126 11:35:25.190916 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8cc7fc4dc-4z4hp" event={"ID":"70cb058d-2165-416d-933a-6b4eeabf42fd","Type":"ContainerDied","Data":"0312b2ee959eaeda7064c6618860ab72b58713089d8fbcd2480d377a4878c5f5"} Jan 26 11:35:25 crc kubenswrapper[4867]: I0126 11:35:25.198822 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-b8fbc5445-zrbq4" event={"ID":"f69c8f7d-7b7b-476a-989a-aee2eec1e5db","Type":"ContainerStarted","Data":"3711234edf94027b98b9dfb5883f4924a14576bc7014ce66e5b4cfeacbe70b4d"} Jan 26 11:35:25 crc kubenswrapper[4867]: I0126 11:35:25.199144 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-b8fbc5445-zrbq4" Jan 26 11:35:25 crc kubenswrapper[4867]: I0126 11:35:25.245705 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-b8fbc5445-zrbq4" podStartSLOduration=5.245672131 podStartE2EDuration="5.245672131s" podCreationTimestamp="2026-01-26 11:35:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 11:35:25.232566845 +0000 UTC m=+1074.931141755" watchObservedRunningTime="2026-01-26 11:35:25.245672131 +0000 UTC m=+1074.944247041" Jan 26 11:35:26 crc kubenswrapper[4867]: I0126 11:35:26.209955 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8cc7fc4dc-4z4hp" event={"ID":"70cb058d-2165-416d-933a-6b4eeabf42fd","Type":"ContainerStarted","Data":"00dc51d8974591c2fd4c381997a0123e34091de59dc8c82f4f251fa76faf00cc"} Jan 26 11:35:26 crc kubenswrapper[4867]: I0126 11:35:26.210615 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-8cc7fc4dc-4z4hp" Jan 26 11:35:26 crc kubenswrapper[4867]: I0126 11:35:26.212345 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"2b5a7e41-130f-46be-8c94-a5ecaf39bb2c","Type":"ContainerStarted","Data":"b85724f23f4731a94020aa72afe692f634e269b1339ab7c4272104b334ce595d"} Jan 26 11:35:26 crc kubenswrapper[4867]: I0126 11:35:26.212365 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"2b5a7e41-130f-46be-8c94-a5ecaf39bb2c","Type":"ContainerStarted","Data":"4a6d899d0e204ff5dec365bed076c2c2ec7352522eb9c1a15405bd194fad5935"} Jan 26 11:35:26 crc kubenswrapper[4867]: I0126 11:35:26.212825 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-northd-0" Jan 26 11:35:26 crc kubenswrapper[4867]: I0126 11:35:26.233944 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-8cc7fc4dc-4z4hp" podStartSLOduration=6.233916006 podStartE2EDuration="6.233916006s" podCreationTimestamp="2026-01-26 11:35:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 11:35:26.229950935 +0000 UTC m=+1075.928525845" watchObservedRunningTime="2026-01-26 
11:35:26.233916006 +0000 UTC m=+1075.932490916" Jan 26 11:35:26 crc kubenswrapper[4867]: I0126 11:35:26.247792 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-northd-0" podStartSLOduration=5.000808022 podStartE2EDuration="6.247767813s" podCreationTimestamp="2026-01-26 11:35:20 +0000 UTC" firstStartedPulling="2026-01-26 11:35:23.798387216 +0000 UTC m=+1073.496962126" lastFinishedPulling="2026-01-26 11:35:25.045347007 +0000 UTC m=+1074.743921917" observedRunningTime="2026-01-26 11:35:26.247442964 +0000 UTC m=+1075.946017884" watchObservedRunningTime="2026-01-26 11:35:26.247767813 +0000 UTC m=+1075.946342723" Jan 26 11:35:27 crc kubenswrapper[4867]: I0126 11:35:27.096814 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-rk4v5"] Jan 26 11:35:27 crc kubenswrapper[4867]: E0126 11:35:27.097463 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="646d1a9e-dc98-477f-853a-15ce192f0b52" containerName="dnsmasq-dns" Jan 26 11:35:27 crc kubenswrapper[4867]: I0126 11:35:27.097487 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="646d1a9e-dc98-477f-853a-15ce192f0b52" containerName="dnsmasq-dns" Jan 26 11:35:27 crc kubenswrapper[4867]: E0126 11:35:27.097528 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="646d1a9e-dc98-477f-853a-15ce192f0b52" containerName="init" Jan 26 11:35:27 crc kubenswrapper[4867]: I0126 11:35:27.097535 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="646d1a9e-dc98-477f-853a-15ce192f0b52" containerName="init" Jan 26 11:35:27 crc kubenswrapper[4867]: I0126 11:35:27.097729 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="646d1a9e-dc98-477f-853a-15ce192f0b52" containerName="dnsmasq-dns" Jan 26 11:35:27 crc kubenswrapper[4867]: I0126 11:35:27.098404 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-rk4v5" Jan 26 11:35:27 crc kubenswrapper[4867]: I0126 11:35:27.100484 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-mariadb-root-db-secret" Jan 26 11:35:27 crc kubenswrapper[4867]: I0126 11:35:27.103651 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-rk4v5"] Jan 26 11:35:27 crc kubenswrapper[4867]: I0126 11:35:27.240010 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/190b3224-57c6-42d8-8ab0-e026065ff44c-operator-scripts\") pod \"root-account-create-update-rk4v5\" (UID: \"190b3224-57c6-42d8-8ab0-e026065ff44c\") " pod="openstack/root-account-create-update-rk4v5" Jan 26 11:35:27 crc kubenswrapper[4867]: I0126 11:35:27.240101 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mckn4\" (UniqueName: \"kubernetes.io/projected/190b3224-57c6-42d8-8ab0-e026065ff44c-kube-api-access-mckn4\") pod \"root-account-create-update-rk4v5\" (UID: \"190b3224-57c6-42d8-8ab0-e026065ff44c\") " pod="openstack/root-account-create-update-rk4v5" Jan 26 11:35:27 crc kubenswrapper[4867]: I0126 11:35:27.341659 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/190b3224-57c6-42d8-8ab0-e026065ff44c-operator-scripts\") pod \"root-account-create-update-rk4v5\" (UID: \"190b3224-57c6-42d8-8ab0-e026065ff44c\") " pod="openstack/root-account-create-update-rk4v5" Jan 26 11:35:27 crc kubenswrapper[4867]: I0126 11:35:27.341793 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mckn4\" (UniqueName: \"kubernetes.io/projected/190b3224-57c6-42d8-8ab0-e026065ff44c-kube-api-access-mckn4\") pod \"root-account-create-update-rk4v5\" (UID: 
\"190b3224-57c6-42d8-8ab0-e026065ff44c\") " pod="openstack/root-account-create-update-rk4v5" Jan 26 11:35:27 crc kubenswrapper[4867]: I0126 11:35:27.343017 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/190b3224-57c6-42d8-8ab0-e026065ff44c-operator-scripts\") pod \"root-account-create-update-rk4v5\" (UID: \"190b3224-57c6-42d8-8ab0-e026065ff44c\") " pod="openstack/root-account-create-update-rk4v5" Jan 26 11:35:27 crc kubenswrapper[4867]: I0126 11:35:27.368477 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mckn4\" (UniqueName: \"kubernetes.io/projected/190b3224-57c6-42d8-8ab0-e026065ff44c-kube-api-access-mckn4\") pod \"root-account-create-update-rk4v5\" (UID: \"190b3224-57c6-42d8-8ab0-e026065ff44c\") " pod="openstack/root-account-create-update-rk4v5" Jan 26 11:35:27 crc kubenswrapper[4867]: I0126 11:35:27.439145 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-rk4v5" Jan 26 11:35:27 crc kubenswrapper[4867]: I0126 11:35:27.595185 4867 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-7cb5889db5-6f5lk" podUID="646d1a9e-dc98-477f-853a-15ce192f0b52" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.109:5353: i/o timeout" Jan 26 11:35:27 crc kubenswrapper[4867]: I0126 11:35:27.871133 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-rk4v5"] Jan 26 11:35:27 crc kubenswrapper[4867]: W0126 11:35:27.871130 4867 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod190b3224_57c6_42d8_8ab0_e026065ff44c.slice/crio-d851a02d9cede75e439a22e7093a7140006970a929add0991d218c3e4329d46f WatchSource:0}: Error finding container d851a02d9cede75e439a22e7093a7140006970a929add0991d218c3e4329d46f: Status 404 returned error can't find the container with id d851a02d9cede75e439a22e7093a7140006970a929add0991d218c3e4329d46f Jan 26 11:35:28 crc kubenswrapper[4867]: I0126 11:35:28.230646 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-rk4v5" event={"ID":"190b3224-57c6-42d8-8ab0-e026065ff44c","Type":"ContainerStarted","Data":"d851a02d9cede75e439a22e7093a7140006970a929add0991d218c3e4329d46f"} Jan 26 11:35:28 crc kubenswrapper[4867]: I0126 11:35:28.973496 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/3f128154-6619-4556-be1b-73e44d4f7df1-etc-swift\") pod \"swift-storage-0\" (UID: \"3f128154-6619-4556-be1b-73e44d4f7df1\") " pod="openstack/swift-storage-0" Jan 26 11:35:28 crc kubenswrapper[4867]: E0126 11:35:28.973798 4867 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Jan 26 11:35:28 crc kubenswrapper[4867]: E0126 
11:35:28.973850 4867 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Jan 26 11:35:28 crc kubenswrapper[4867]: E0126 11:35:28.973916 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3f128154-6619-4556-be1b-73e44d4f7df1-etc-swift podName:3f128154-6619-4556-be1b-73e44d4f7df1 nodeName:}" failed. No retries permitted until 2026-01-26 11:35:44.973895723 +0000 UTC m=+1094.672470633 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/3f128154-6619-4556-be1b-73e44d4f7df1-etc-swift") pod "swift-storage-0" (UID: "3f128154-6619-4556-be1b-73e44d4f7df1") : configmap "swift-ring-files" not found Jan 26 11:35:29 crc kubenswrapper[4867]: I0126 11:35:29.622302 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-create-bs7ks"] Jan 26 11:35:29 crc kubenswrapper[4867]: I0126 11:35:29.624307 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-bs7ks" Jan 26 11:35:29 crc kubenswrapper[4867]: I0126 11:35:29.631679 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-bs7ks"] Jan 26 11:35:29 crc kubenswrapper[4867]: I0126 11:35:29.731921 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-14ec-account-create-update-2wrkd"] Jan 26 11:35:29 crc kubenswrapper[4867]: I0126 11:35:29.737609 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-14ec-account-create-update-2wrkd" Jan 26 11:35:29 crc kubenswrapper[4867]: I0126 11:35:29.741376 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-14ec-account-create-update-2wrkd"] Jan 26 11:35:29 crc kubenswrapper[4867]: I0126 11:35:29.742305 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-db-secret" Jan 26 11:35:29 crc kubenswrapper[4867]: I0126 11:35:29.787562 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6ee2993e-e4e2-4fda-8506-4af3ea92108f-operator-scripts\") pod \"keystone-db-create-bs7ks\" (UID: \"6ee2993e-e4e2-4fda-8506-4af3ea92108f\") " pod="openstack/keystone-db-create-bs7ks" Jan 26 11:35:29 crc kubenswrapper[4867]: I0126 11:35:29.787635 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cdv6n\" (UniqueName: \"kubernetes.io/projected/6ee2993e-e4e2-4fda-8506-4af3ea92108f-kube-api-access-cdv6n\") pod \"keystone-db-create-bs7ks\" (UID: \"6ee2993e-e4e2-4fda-8506-4af3ea92108f\") " pod="openstack/keystone-db-create-bs7ks" Jan 26 11:35:29 crc kubenswrapper[4867]: I0126 11:35:29.889721 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6ee2993e-e4e2-4fda-8506-4af3ea92108f-operator-scripts\") pod \"keystone-db-create-bs7ks\" (UID: \"6ee2993e-e4e2-4fda-8506-4af3ea92108f\") " pod="openstack/keystone-db-create-bs7ks" Jan 26 11:35:29 crc kubenswrapper[4867]: I0126 11:35:29.889795 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cdv6n\" (UniqueName: \"kubernetes.io/projected/6ee2993e-e4e2-4fda-8506-4af3ea92108f-kube-api-access-cdv6n\") pod \"keystone-db-create-bs7ks\" (UID: \"6ee2993e-e4e2-4fda-8506-4af3ea92108f\") " 
pod="openstack/keystone-db-create-bs7ks" Jan 26 11:35:29 crc kubenswrapper[4867]: I0126 11:35:29.889888 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-462n5\" (UniqueName: \"kubernetes.io/projected/ec4f0ae5-3541-4224-8693-6264be64156e-kube-api-access-462n5\") pod \"keystone-14ec-account-create-update-2wrkd\" (UID: \"ec4f0ae5-3541-4224-8693-6264be64156e\") " pod="openstack/keystone-14ec-account-create-update-2wrkd" Jan 26 11:35:29 crc kubenswrapper[4867]: I0126 11:35:29.889924 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ec4f0ae5-3541-4224-8693-6264be64156e-operator-scripts\") pod \"keystone-14ec-account-create-update-2wrkd\" (UID: \"ec4f0ae5-3541-4224-8693-6264be64156e\") " pod="openstack/keystone-14ec-account-create-update-2wrkd" Jan 26 11:35:29 crc kubenswrapper[4867]: I0126 11:35:29.890993 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6ee2993e-e4e2-4fda-8506-4af3ea92108f-operator-scripts\") pod \"keystone-db-create-bs7ks\" (UID: \"6ee2993e-e4e2-4fda-8506-4af3ea92108f\") " pod="openstack/keystone-db-create-bs7ks" Jan 26 11:35:29 crc kubenswrapper[4867]: I0126 11:35:29.915780 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cdv6n\" (UniqueName: \"kubernetes.io/projected/6ee2993e-e4e2-4fda-8506-4af3ea92108f-kube-api-access-cdv6n\") pod \"keystone-db-create-bs7ks\" (UID: \"6ee2993e-e4e2-4fda-8506-4af3ea92108f\") " pod="openstack/keystone-db-create-bs7ks" Jan 26 11:35:29 crc kubenswrapper[4867]: I0126 11:35:29.946584 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-create-2xl9x"] Jan 26 11:35:29 crc kubenswrapper[4867]: I0126 11:35:29.947780 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-create-2xl9x" Jan 26 11:35:29 crc kubenswrapper[4867]: I0126 11:35:29.957397 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-bs7ks" Jan 26 11:35:29 crc kubenswrapper[4867]: I0126 11:35:29.959318 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-2xl9x"] Jan 26 11:35:29 crc kubenswrapper[4867]: I0126 11:35:29.992633 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-462n5\" (UniqueName: \"kubernetes.io/projected/ec4f0ae5-3541-4224-8693-6264be64156e-kube-api-access-462n5\") pod \"keystone-14ec-account-create-update-2wrkd\" (UID: \"ec4f0ae5-3541-4224-8693-6264be64156e\") " pod="openstack/keystone-14ec-account-create-update-2wrkd" Jan 26 11:35:29 crc kubenswrapper[4867]: I0126 11:35:29.992764 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ec4f0ae5-3541-4224-8693-6264be64156e-operator-scripts\") pod \"keystone-14ec-account-create-update-2wrkd\" (UID: \"ec4f0ae5-3541-4224-8693-6264be64156e\") " pod="openstack/keystone-14ec-account-create-update-2wrkd" Jan 26 11:35:29 crc kubenswrapper[4867]: I0126 11:35:29.993946 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ec4f0ae5-3541-4224-8693-6264be64156e-operator-scripts\") pod \"keystone-14ec-account-create-update-2wrkd\" (UID: \"ec4f0ae5-3541-4224-8693-6264be64156e\") " pod="openstack/keystone-14ec-account-create-update-2wrkd" Jan 26 11:35:30 crc kubenswrapper[4867]: I0126 11:35:30.017825 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-462n5\" (UniqueName: \"kubernetes.io/projected/ec4f0ae5-3541-4224-8693-6264be64156e-kube-api-access-462n5\") pod \"keystone-14ec-account-create-update-2wrkd\" (UID: 
\"ec4f0ae5-3541-4224-8693-6264be64156e\") " pod="openstack/keystone-14ec-account-create-update-2wrkd" Jan 26 11:35:30 crc kubenswrapper[4867]: I0126 11:35:30.071589 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-14ec-account-create-update-2wrkd" Jan 26 11:35:30 crc kubenswrapper[4867]: I0126 11:35:30.093635 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-d8fb-account-create-update-fpsgc"] Jan 26 11:35:30 crc kubenswrapper[4867]: I0126 11:35:30.094815 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-shm56\" (UniqueName: \"kubernetes.io/projected/ede5a15e-c616-482a-8f65-dcc40b72bac9-kube-api-access-shm56\") pod \"placement-db-create-2xl9x\" (UID: \"ede5a15e-c616-482a-8f65-dcc40b72bac9\") " pod="openstack/placement-db-create-2xl9x" Jan 26 11:35:30 crc kubenswrapper[4867]: I0126 11:35:30.095005 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ede5a15e-c616-482a-8f65-dcc40b72bac9-operator-scripts\") pod \"placement-db-create-2xl9x\" (UID: \"ede5a15e-c616-482a-8f65-dcc40b72bac9\") " pod="openstack/placement-db-create-2xl9x" Jan 26 11:35:30 crc kubenswrapper[4867]: I0126 11:35:30.095765 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-d8fb-account-create-update-fpsgc" Jan 26 11:35:30 crc kubenswrapper[4867]: I0126 11:35:30.103803 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-db-secret" Jan 26 11:35:30 crc kubenswrapper[4867]: I0126 11:35:30.107942 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-d8fb-account-create-update-fpsgc"] Jan 26 11:35:30 crc kubenswrapper[4867]: I0126 11:35:30.196381 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ede5a15e-c616-482a-8f65-dcc40b72bac9-operator-scripts\") pod \"placement-db-create-2xl9x\" (UID: \"ede5a15e-c616-482a-8f65-dcc40b72bac9\") " pod="openstack/placement-db-create-2xl9x" Jan 26 11:35:30 crc kubenswrapper[4867]: I0126 11:35:30.196446 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-klwvx\" (UniqueName: \"kubernetes.io/projected/f0055f8a-079d-477c-9dab-f6e66fc7e0a0-kube-api-access-klwvx\") pod \"placement-d8fb-account-create-update-fpsgc\" (UID: \"f0055f8a-079d-477c-9dab-f6e66fc7e0a0\") " pod="openstack/placement-d8fb-account-create-update-fpsgc" Jan 26 11:35:30 crc kubenswrapper[4867]: I0126 11:35:30.196499 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-shm56\" (UniqueName: \"kubernetes.io/projected/ede5a15e-c616-482a-8f65-dcc40b72bac9-kube-api-access-shm56\") pod \"placement-db-create-2xl9x\" (UID: \"ede5a15e-c616-482a-8f65-dcc40b72bac9\") " pod="openstack/placement-db-create-2xl9x" Jan 26 11:35:30 crc kubenswrapper[4867]: I0126 11:35:30.196531 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f0055f8a-079d-477c-9dab-f6e66fc7e0a0-operator-scripts\") pod \"placement-d8fb-account-create-update-fpsgc\" (UID: 
\"f0055f8a-079d-477c-9dab-f6e66fc7e0a0\") " pod="openstack/placement-d8fb-account-create-update-fpsgc" Jan 26 11:35:30 crc kubenswrapper[4867]: I0126 11:35:30.197524 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ede5a15e-c616-482a-8f65-dcc40b72bac9-operator-scripts\") pod \"placement-db-create-2xl9x\" (UID: \"ede5a15e-c616-482a-8f65-dcc40b72bac9\") " pod="openstack/placement-db-create-2xl9x" Jan 26 11:35:30 crc kubenswrapper[4867]: I0126 11:35:30.214937 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-shm56\" (UniqueName: \"kubernetes.io/projected/ede5a15e-c616-482a-8f65-dcc40b72bac9-kube-api-access-shm56\") pod \"placement-db-create-2xl9x\" (UID: \"ede5a15e-c616-482a-8f65-dcc40b72bac9\") " pod="openstack/placement-db-create-2xl9x" Jan 26 11:35:30 crc kubenswrapper[4867]: I0126 11:35:30.256507 4867 generic.go:334] "Generic (PLEG): container finished" podID="190b3224-57c6-42d8-8ab0-e026065ff44c" containerID="bee0e58e7762c264210bc440513c9fb59e5720253688e63c3676620a5247488f" exitCode=0 Jan 26 11:35:30 crc kubenswrapper[4867]: I0126 11:35:30.256562 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-rk4v5" event={"ID":"190b3224-57c6-42d8-8ab0-e026065ff44c","Type":"ContainerDied","Data":"bee0e58e7762c264210bc440513c9fb59e5720253688e63c3676620a5247488f"} Jan 26 11:35:30 crc kubenswrapper[4867]: I0126 11:35:30.257893 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-create-z9jck"] Jan 26 11:35:30 crc kubenswrapper[4867]: I0126 11:35:30.262762 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-create-z9jck" Jan 26 11:35:30 crc kubenswrapper[4867]: I0126 11:35:30.268470 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-z9jck"] Jan 26 11:35:30 crc kubenswrapper[4867]: I0126 11:35:30.298598 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-klwvx\" (UniqueName: \"kubernetes.io/projected/f0055f8a-079d-477c-9dab-f6e66fc7e0a0-kube-api-access-klwvx\") pod \"placement-d8fb-account-create-update-fpsgc\" (UID: \"f0055f8a-079d-477c-9dab-f6e66fc7e0a0\") " pod="openstack/placement-d8fb-account-create-update-fpsgc" Jan 26 11:35:30 crc kubenswrapper[4867]: I0126 11:35:30.298688 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f0055f8a-079d-477c-9dab-f6e66fc7e0a0-operator-scripts\") pod \"placement-d8fb-account-create-update-fpsgc\" (UID: \"f0055f8a-079d-477c-9dab-f6e66fc7e0a0\") " pod="openstack/placement-d8fb-account-create-update-fpsgc" Jan 26 11:35:30 crc kubenswrapper[4867]: I0126 11:35:30.299473 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f0055f8a-079d-477c-9dab-f6e66fc7e0a0-operator-scripts\") pod \"placement-d8fb-account-create-update-fpsgc\" (UID: \"f0055f8a-079d-477c-9dab-f6e66fc7e0a0\") " pod="openstack/placement-d8fb-account-create-update-fpsgc" Jan 26 11:35:30 crc kubenswrapper[4867]: I0126 11:35:30.315076 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-klwvx\" (UniqueName: \"kubernetes.io/projected/f0055f8a-079d-477c-9dab-f6e66fc7e0a0-kube-api-access-klwvx\") pod \"placement-d8fb-account-create-update-fpsgc\" (UID: \"f0055f8a-079d-477c-9dab-f6e66fc7e0a0\") " pod="openstack/placement-d8fb-account-create-update-fpsgc" Jan 26 11:35:30 crc kubenswrapper[4867]: I0126 11:35:30.380586 4867 kubelet.go:2421] 
"SyncLoop ADD" source="api" pods=["openstack/glance-51a0-account-create-update-lcjf9"] Jan 26 11:35:30 crc kubenswrapper[4867]: I0126 11:35:30.381909 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-51a0-account-create-update-lcjf9" Jan 26 11:35:30 crc kubenswrapper[4867]: I0126 11:35:30.384620 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-db-secret" Jan 26 11:35:30 crc kubenswrapper[4867]: I0126 11:35:30.392925 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-51a0-account-create-update-lcjf9"] Jan 26 11:35:30 crc kubenswrapper[4867]: I0126 11:35:30.400180 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c90c2ed7-4485-455b-bba2-42014178d9be-operator-scripts\") pod \"glance-db-create-z9jck\" (UID: \"c90c2ed7-4485-455b-bba2-42014178d9be\") " pod="openstack/glance-db-create-z9jck" Jan 26 11:35:30 crc kubenswrapper[4867]: I0126 11:35:30.400294 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xbr9j\" (UniqueName: \"kubernetes.io/projected/c90c2ed7-4485-455b-bba2-42014178d9be-kube-api-access-xbr9j\") pod \"glance-db-create-z9jck\" (UID: \"c90c2ed7-4485-455b-bba2-42014178d9be\") " pod="openstack/glance-db-create-z9jck" Jan 26 11:35:30 crc kubenswrapper[4867]: I0126 11:35:30.419628 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-2xl9x" Jan 26 11:35:30 crc kubenswrapper[4867]: I0126 11:35:30.460795 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-d8fb-account-create-update-fpsgc" Jan 26 11:35:30 crc kubenswrapper[4867]: I0126 11:35:30.483536 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-bs7ks"] Jan 26 11:35:30 crc kubenswrapper[4867]: W0126 11:35:30.491660 4867 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6ee2993e_e4e2_4fda_8506_4af3ea92108f.slice/crio-b2dbcbf29cbe73e2b82a4873c2e1c28b6a9e8060740516378a91212a5dc19574 WatchSource:0}: Error finding container b2dbcbf29cbe73e2b82a4873c2e1c28b6a9e8060740516378a91212a5dc19574: Status 404 returned error can't find the container with id b2dbcbf29cbe73e2b82a4873c2e1c28b6a9e8060740516378a91212a5dc19574 Jan 26 11:35:30 crc kubenswrapper[4867]: I0126 11:35:30.507966 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7fk59\" (UniqueName: \"kubernetes.io/projected/4ad2b2c0-428a-4a2b-943d-91966c6f7403-kube-api-access-7fk59\") pod \"glance-51a0-account-create-update-lcjf9\" (UID: \"4ad2b2c0-428a-4a2b-943d-91966c6f7403\") " pod="openstack/glance-51a0-account-create-update-lcjf9" Jan 26 11:35:30 crc kubenswrapper[4867]: I0126 11:35:30.508077 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4ad2b2c0-428a-4a2b-943d-91966c6f7403-operator-scripts\") pod \"glance-51a0-account-create-update-lcjf9\" (UID: \"4ad2b2c0-428a-4a2b-943d-91966c6f7403\") " pod="openstack/glance-51a0-account-create-update-lcjf9" Jan 26 11:35:30 crc kubenswrapper[4867]: I0126 11:35:30.508147 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c90c2ed7-4485-455b-bba2-42014178d9be-operator-scripts\") pod \"glance-db-create-z9jck\" (UID: 
\"c90c2ed7-4485-455b-bba2-42014178d9be\") " pod="openstack/glance-db-create-z9jck" Jan 26 11:35:30 crc kubenswrapper[4867]: I0126 11:35:30.508204 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xbr9j\" (UniqueName: \"kubernetes.io/projected/c90c2ed7-4485-455b-bba2-42014178d9be-kube-api-access-xbr9j\") pod \"glance-db-create-z9jck\" (UID: \"c90c2ed7-4485-455b-bba2-42014178d9be\") " pod="openstack/glance-db-create-z9jck" Jan 26 11:35:30 crc kubenswrapper[4867]: I0126 11:35:30.509440 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c90c2ed7-4485-455b-bba2-42014178d9be-operator-scripts\") pod \"glance-db-create-z9jck\" (UID: \"c90c2ed7-4485-455b-bba2-42014178d9be\") " pod="openstack/glance-db-create-z9jck" Jan 26 11:35:30 crc kubenswrapper[4867]: I0126 11:35:30.536916 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xbr9j\" (UniqueName: \"kubernetes.io/projected/c90c2ed7-4485-455b-bba2-42014178d9be-kube-api-access-xbr9j\") pod \"glance-db-create-z9jck\" (UID: \"c90c2ed7-4485-455b-bba2-42014178d9be\") " pod="openstack/glance-db-create-z9jck" Jan 26 11:35:30 crc kubenswrapper[4867]: I0126 11:35:30.580776 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-create-z9jck" Jan 26 11:35:30 crc kubenswrapper[4867]: I0126 11:35:30.609533 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4ad2b2c0-428a-4a2b-943d-91966c6f7403-operator-scripts\") pod \"glance-51a0-account-create-update-lcjf9\" (UID: \"4ad2b2c0-428a-4a2b-943d-91966c6f7403\") " pod="openstack/glance-51a0-account-create-update-lcjf9" Jan 26 11:35:30 crc kubenswrapper[4867]: I0126 11:35:30.609662 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7fk59\" (UniqueName: \"kubernetes.io/projected/4ad2b2c0-428a-4a2b-943d-91966c6f7403-kube-api-access-7fk59\") pod \"glance-51a0-account-create-update-lcjf9\" (UID: \"4ad2b2c0-428a-4a2b-943d-91966c6f7403\") " pod="openstack/glance-51a0-account-create-update-lcjf9" Jan 26 11:35:30 crc kubenswrapper[4867]: I0126 11:35:30.610670 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4ad2b2c0-428a-4a2b-943d-91966c6f7403-operator-scripts\") pod \"glance-51a0-account-create-update-lcjf9\" (UID: \"4ad2b2c0-428a-4a2b-943d-91966c6f7403\") " pod="openstack/glance-51a0-account-create-update-lcjf9" Jan 26 11:35:30 crc kubenswrapper[4867]: I0126 11:35:30.627684 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-14ec-account-create-update-2wrkd"] Jan 26 11:35:30 crc kubenswrapper[4867]: I0126 11:35:30.635305 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7fk59\" (UniqueName: \"kubernetes.io/projected/4ad2b2c0-428a-4a2b-943d-91966c6f7403-kube-api-access-7fk59\") pod \"glance-51a0-account-create-update-lcjf9\" (UID: \"4ad2b2c0-428a-4a2b-943d-91966c6f7403\") " pod="openstack/glance-51a0-account-create-update-lcjf9" Jan 26 11:35:30 crc kubenswrapper[4867]: I0126 11:35:30.678880 4867 reflector.go:368] Caches 
populated for *v1.Secret from object-"openstack"/"keystone-db-secret" Jan 26 11:35:30 crc kubenswrapper[4867]: I0126 11:35:30.707334 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-51a0-account-create-update-lcjf9" Jan 26 11:35:30 crc kubenswrapper[4867]: I0126 11:35:30.889729 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-2xl9x"] Jan 26 11:35:30 crc kubenswrapper[4867]: I0126 11:35:30.902485 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-8cc7fc4dc-4z4hp" Jan 26 11:35:30 crc kubenswrapper[4867]: I0126 11:35:30.986247 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-d8fb-account-create-update-fpsgc"] Jan 26 11:35:31 crc kubenswrapper[4867]: W0126 11:35:31.032936 4867 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf0055f8a_079d_477c_9dab_f6e66fc7e0a0.slice/crio-0aab521bb1fc3b849bdb0d5b9f8e97c1ef8456e3cf94401b68aa1ced01e438f6 WatchSource:0}: Error finding container 0aab521bb1fc3b849bdb0d5b9f8e97c1ef8456e3cf94401b68aa1ced01e438f6: Status 404 returned error can't find the container with id 0aab521bb1fc3b849bdb0d5b9f8e97c1ef8456e3cf94401b68aa1ced01e438f6 Jan 26 11:35:31 crc kubenswrapper[4867]: I0126 11:35:31.045544 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-db-secret" Jan 26 11:35:31 crc kubenswrapper[4867]: I0126 11:35:31.093031 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-z9jck"] Jan 26 11:35:31 crc kubenswrapper[4867]: W0126 11:35:31.094807 4867 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc90c2ed7_4485_455b_bba2_42014178d9be.slice/crio-42b87d218d873c48e675fe68fb0b85d6a31ab506a130f966e0474807d57c487d WatchSource:0}: Error finding container 
42b87d218d873c48e675fe68fb0b85d6a31ab506a130f966e0474807d57c487d: Status 404 returned error can't find the container with id 42b87d218d873c48e675fe68fb0b85d6a31ab506a130f966e0474807d57c487d Jan 26 11:35:31 crc kubenswrapper[4867]: I0126 11:35:31.215245 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-51a0-account-create-update-lcjf9"] Jan 26 11:35:31 crc kubenswrapper[4867]: I0126 11:35:31.271747 4867 generic.go:334] "Generic (PLEG): container finished" podID="ec4f0ae5-3541-4224-8693-6264be64156e" containerID="3548d75bb02b2a13831b8d71faf98a958d9495ba26eb852b0f7a6b17f6e7b2b8" exitCode=0 Jan 26 11:35:31 crc kubenswrapper[4867]: I0126 11:35:31.272070 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-14ec-account-create-update-2wrkd" event={"ID":"ec4f0ae5-3541-4224-8693-6264be64156e","Type":"ContainerDied","Data":"3548d75bb02b2a13831b8d71faf98a958d9495ba26eb852b0f7a6b17f6e7b2b8"} Jan 26 11:35:31 crc kubenswrapper[4867]: I0126 11:35:31.272217 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-14ec-account-create-update-2wrkd" event={"ID":"ec4f0ae5-3541-4224-8693-6264be64156e","Type":"ContainerStarted","Data":"98221a13cb412b52fcd2cf7978f09a4a07dca23bfae5cff2639b67731df0e2dc"} Jan 26 11:35:31 crc kubenswrapper[4867]: I0126 11:35:31.276373 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-z9jck" event={"ID":"c90c2ed7-4485-455b-bba2-42014178d9be","Type":"ContainerStarted","Data":"42b87d218d873c48e675fe68fb0b85d6a31ab506a130f966e0474807d57c487d"} Jan 26 11:35:31 crc kubenswrapper[4867]: I0126 11:35:31.277895 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-db-secret" Jan 26 11:35:31 crc kubenswrapper[4867]: I0126 11:35:31.279213 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-2xl9x" 
event={"ID":"ede5a15e-c616-482a-8f65-dcc40b72bac9","Type":"ContainerStarted","Data":"1e81ff7533ca607742db210aec7eb45b8e33e5cab8356d9d345ebd5169122d0d"} Jan 26 11:35:31 crc kubenswrapper[4867]: I0126 11:35:31.279267 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-2xl9x" event={"ID":"ede5a15e-c616-482a-8f65-dcc40b72bac9","Type":"ContainerStarted","Data":"a7951ea48fd44934c0eb54a182a2a480a40def4309120eb0ef524e2e038223d2"} Jan 26 11:35:31 crc kubenswrapper[4867]: I0126 11:35:31.283662 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-d8fb-account-create-update-fpsgc" event={"ID":"f0055f8a-079d-477c-9dab-f6e66fc7e0a0","Type":"ContainerStarted","Data":"0aab521bb1fc3b849bdb0d5b9f8e97c1ef8456e3cf94401b68aa1ced01e438f6"} Jan 26 11:35:31 crc kubenswrapper[4867]: I0126 11:35:31.285581 4867 generic.go:334] "Generic (PLEG): container finished" podID="6ee2993e-e4e2-4fda-8506-4af3ea92108f" containerID="b19d29259d1895082d0636b1e7ad3f5bdd994ce4b61afc26719e6512963cf847" exitCode=0 Jan 26 11:35:31 crc kubenswrapper[4867]: I0126 11:35:31.285766 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-bs7ks" event={"ID":"6ee2993e-e4e2-4fda-8506-4af3ea92108f","Type":"ContainerDied","Data":"b19d29259d1895082d0636b1e7ad3f5bdd994ce4b61afc26719e6512963cf847"} Jan 26 11:35:31 crc kubenswrapper[4867]: I0126 11:35:31.285783 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-bs7ks" event={"ID":"6ee2993e-e4e2-4fda-8506-4af3ea92108f","Type":"ContainerStarted","Data":"b2dbcbf29cbe73e2b82a4873c2e1c28b6a9e8060740516378a91212a5dc19574"} Jan 26 11:35:31 crc kubenswrapper[4867]: I0126 11:35:31.330602 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-db-create-2xl9x" podStartSLOduration=2.330574087 podStartE2EDuration="2.330574087s" podCreationTimestamp="2026-01-26 11:35:29 +0000 UTC" firstStartedPulling="0001-01-01 
00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 11:35:31.318657804 +0000 UTC m=+1081.017232714" watchObservedRunningTime="2026-01-26 11:35:31.330574087 +0000 UTC m=+1081.029149007" Jan 26 11:35:31 crc kubenswrapper[4867]: I0126 11:35:31.433759 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-b8fbc5445-zrbq4" Jan 26 11:35:31 crc kubenswrapper[4867]: I0126 11:35:31.526028 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-8cc7fc4dc-4z4hp"] Jan 26 11:35:31 crc kubenswrapper[4867]: I0126 11:35:31.526358 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-8cc7fc4dc-4z4hp" podUID="70cb058d-2165-416d-933a-6b4eeabf42fd" containerName="dnsmasq-dns" containerID="cri-o://00dc51d8974591c2fd4c381997a0123e34091de59dc8c82f4f251fa76faf00cc" gracePeriod=10 Jan 26 11:35:31 crc kubenswrapper[4867]: I0126 11:35:31.677614 4867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-rk4v5" Jan 26 11:35:31 crc kubenswrapper[4867]: I0126 11:35:31.868503 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mckn4\" (UniqueName: \"kubernetes.io/projected/190b3224-57c6-42d8-8ab0-e026065ff44c-kube-api-access-mckn4\") pod \"190b3224-57c6-42d8-8ab0-e026065ff44c\" (UID: \"190b3224-57c6-42d8-8ab0-e026065ff44c\") " Jan 26 11:35:31 crc kubenswrapper[4867]: I0126 11:35:31.868700 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/190b3224-57c6-42d8-8ab0-e026065ff44c-operator-scripts\") pod \"190b3224-57c6-42d8-8ab0-e026065ff44c\" (UID: \"190b3224-57c6-42d8-8ab0-e026065ff44c\") " Jan 26 11:35:31 crc kubenswrapper[4867]: I0126 11:35:31.869624 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/190b3224-57c6-42d8-8ab0-e026065ff44c-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "190b3224-57c6-42d8-8ab0-e026065ff44c" (UID: "190b3224-57c6-42d8-8ab0-e026065ff44c"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 11:35:31 crc kubenswrapper[4867]: I0126 11:35:31.876821 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/190b3224-57c6-42d8-8ab0-e026065ff44c-kube-api-access-mckn4" (OuterVolumeSpecName: "kube-api-access-mckn4") pod "190b3224-57c6-42d8-8ab0-e026065ff44c" (UID: "190b3224-57c6-42d8-8ab0-e026065ff44c"). InnerVolumeSpecName "kube-api-access-mckn4". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 11:35:31 crc kubenswrapper[4867]: I0126 11:35:31.974827 4867 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/190b3224-57c6-42d8-8ab0-e026065ff44c-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 11:35:31 crc kubenswrapper[4867]: I0126 11:35:31.974873 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mckn4\" (UniqueName: \"kubernetes.io/projected/190b3224-57c6-42d8-8ab0-e026065ff44c-kube-api-access-mckn4\") on node \"crc\" DevicePath \"\"" Jan 26 11:35:32 crc kubenswrapper[4867]: I0126 11:35:32.302905 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-rk4v5" Jan 26 11:35:32 crc kubenswrapper[4867]: I0126 11:35:32.303820 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-rk4v5" event={"ID":"190b3224-57c6-42d8-8ab0-e026065ff44c","Type":"ContainerDied","Data":"d851a02d9cede75e439a22e7093a7140006970a929add0991d218c3e4329d46f"} Jan 26 11:35:32 crc kubenswrapper[4867]: I0126 11:35:32.303859 4867 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d851a02d9cede75e439a22e7093a7140006970a929add0991d218c3e4329d46f" Jan 26 11:35:32 crc kubenswrapper[4867]: I0126 11:35:32.305582 4867 generic.go:334] "Generic (PLEG): container finished" podID="70cb058d-2165-416d-933a-6b4eeabf42fd" containerID="00dc51d8974591c2fd4c381997a0123e34091de59dc8c82f4f251fa76faf00cc" exitCode=0 Jan 26 11:35:32 crc kubenswrapper[4867]: I0126 11:35:32.305630 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8cc7fc4dc-4z4hp" event={"ID":"70cb058d-2165-416d-933a-6b4eeabf42fd","Type":"ContainerDied","Data":"00dc51d8974591c2fd4c381997a0123e34091de59dc8c82f4f251fa76faf00cc"} Jan 26 11:35:32 crc kubenswrapper[4867]: I0126 11:35:32.305647 4867 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openstack/dnsmasq-dns-8cc7fc4dc-4z4hp" event={"ID":"70cb058d-2165-416d-933a-6b4eeabf42fd","Type":"ContainerDied","Data":"65627d7f3b884a577802af3fb661e51390f1f2d5929609ad836d6f358ffef9ab"} Jan 26 11:35:32 crc kubenswrapper[4867]: I0126 11:35:32.305657 4867 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="65627d7f3b884a577802af3fb661e51390f1f2d5929609ad836d6f358ffef9ab" Jan 26 11:35:32 crc kubenswrapper[4867]: I0126 11:35:32.306644 4867 generic.go:334] "Generic (PLEG): container finished" podID="ede5a15e-c616-482a-8f65-dcc40b72bac9" containerID="1e81ff7533ca607742db210aec7eb45b8e33e5cab8356d9d345ebd5169122d0d" exitCode=0 Jan 26 11:35:32 crc kubenswrapper[4867]: I0126 11:35:32.306679 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-2xl9x" event={"ID":"ede5a15e-c616-482a-8f65-dcc40b72bac9","Type":"ContainerDied","Data":"1e81ff7533ca607742db210aec7eb45b8e33e5cab8356d9d345ebd5169122d0d"} Jan 26 11:35:32 crc kubenswrapper[4867]: I0126 11:35:32.307959 4867 generic.go:334] "Generic (PLEG): container finished" podID="4ad2b2c0-428a-4a2b-943d-91966c6f7403" containerID="c588748677466f817d168dd03f898c35a8f725264c35c933aeaf9f4a99c81581" exitCode=0 Jan 26 11:35:32 crc kubenswrapper[4867]: I0126 11:35:32.308000 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-51a0-account-create-update-lcjf9" event={"ID":"4ad2b2c0-428a-4a2b-943d-91966c6f7403","Type":"ContainerDied","Data":"c588748677466f817d168dd03f898c35a8f725264c35c933aeaf9f4a99c81581"} Jan 26 11:35:32 crc kubenswrapper[4867]: I0126 11:35:32.308018 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-51a0-account-create-update-lcjf9" event={"ID":"4ad2b2c0-428a-4a2b-943d-91966c6f7403","Type":"ContainerStarted","Data":"f857fc9610e5dce1a288a29dc059555a74a0d8c8f9db6d11ebb884f847974ba0"} Jan 26 11:35:32 crc kubenswrapper[4867]: I0126 11:35:32.309900 4867 generic.go:334] "Generic 
(PLEG): container finished" podID="f0055f8a-079d-477c-9dab-f6e66fc7e0a0" containerID="c1ce25549a1890d533f5f84e2e14e250cf17bbb49b24069efc482c99cf8a8848" exitCode=0 Jan 26 11:35:32 crc kubenswrapper[4867]: I0126 11:35:32.309985 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-d8fb-account-create-update-fpsgc" event={"ID":"f0055f8a-079d-477c-9dab-f6e66fc7e0a0","Type":"ContainerDied","Data":"c1ce25549a1890d533f5f84e2e14e250cf17bbb49b24069efc482c99cf8a8848"} Jan 26 11:35:32 crc kubenswrapper[4867]: I0126 11:35:32.313096 4867 generic.go:334] "Generic (PLEG): container finished" podID="c90c2ed7-4485-455b-bba2-42014178d9be" containerID="d125445cf27017f0fa2ac1f123eba9569454b4081f0bfeffaadf5b7ac071388a" exitCode=0 Jan 26 11:35:32 crc kubenswrapper[4867]: I0126 11:35:32.313498 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-z9jck" event={"ID":"c90c2ed7-4485-455b-bba2-42014178d9be","Type":"ContainerDied","Data":"d125445cf27017f0fa2ac1f123eba9569454b4081f0bfeffaadf5b7ac071388a"} Jan 26 11:35:32 crc kubenswrapper[4867]: I0126 11:35:32.330416 4867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-8cc7fc4dc-4z4hp" Jan 26 11:35:32 crc kubenswrapper[4867]: I0126 11:35:32.486055 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/70cb058d-2165-416d-933a-6b4eeabf42fd-config\") pod \"70cb058d-2165-416d-933a-6b4eeabf42fd\" (UID: \"70cb058d-2165-416d-933a-6b4eeabf42fd\") " Jan 26 11:35:32 crc kubenswrapper[4867]: I0126 11:35:32.486144 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/70cb058d-2165-416d-933a-6b4eeabf42fd-ovsdbserver-sb\") pod \"70cb058d-2165-416d-933a-6b4eeabf42fd\" (UID: \"70cb058d-2165-416d-933a-6b4eeabf42fd\") " Jan 26 11:35:32 crc kubenswrapper[4867]: I0126 11:35:32.486201 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c6hxq\" (UniqueName: \"kubernetes.io/projected/70cb058d-2165-416d-933a-6b4eeabf42fd-kube-api-access-c6hxq\") pod \"70cb058d-2165-416d-933a-6b4eeabf42fd\" (UID: \"70cb058d-2165-416d-933a-6b4eeabf42fd\") " Jan 26 11:35:32 crc kubenswrapper[4867]: I0126 11:35:32.486282 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/70cb058d-2165-416d-933a-6b4eeabf42fd-dns-svc\") pod \"70cb058d-2165-416d-933a-6b4eeabf42fd\" (UID: \"70cb058d-2165-416d-933a-6b4eeabf42fd\") " Jan 26 11:35:32 crc kubenswrapper[4867]: I0126 11:35:32.490503 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/70cb058d-2165-416d-933a-6b4eeabf42fd-kube-api-access-c6hxq" (OuterVolumeSpecName: "kube-api-access-c6hxq") pod "70cb058d-2165-416d-933a-6b4eeabf42fd" (UID: "70cb058d-2165-416d-933a-6b4eeabf42fd"). InnerVolumeSpecName "kube-api-access-c6hxq". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 11:35:32 crc kubenswrapper[4867]: I0126 11:35:32.533154 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/70cb058d-2165-416d-933a-6b4eeabf42fd-config" (OuterVolumeSpecName: "config") pod "70cb058d-2165-416d-933a-6b4eeabf42fd" (UID: "70cb058d-2165-416d-933a-6b4eeabf42fd"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 11:35:32 crc kubenswrapper[4867]: I0126 11:35:32.553284 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/70cb058d-2165-416d-933a-6b4eeabf42fd-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "70cb058d-2165-416d-933a-6b4eeabf42fd" (UID: "70cb058d-2165-416d-933a-6b4eeabf42fd"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 11:35:32 crc kubenswrapper[4867]: I0126 11:35:32.572717 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/70cb058d-2165-416d-933a-6b4eeabf42fd-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "70cb058d-2165-416d-933a-6b4eeabf42fd" (UID: "70cb058d-2165-416d-933a-6b4eeabf42fd"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 11:35:32 crc kubenswrapper[4867]: I0126 11:35:32.588895 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c6hxq\" (UniqueName: \"kubernetes.io/projected/70cb058d-2165-416d-933a-6b4eeabf42fd-kube-api-access-c6hxq\") on node \"crc\" DevicePath \"\"" Jan 26 11:35:32 crc kubenswrapper[4867]: I0126 11:35:32.588963 4867 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/70cb058d-2165-416d-933a-6b4eeabf42fd-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 26 11:35:32 crc kubenswrapper[4867]: I0126 11:35:32.589132 4867 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/70cb058d-2165-416d-933a-6b4eeabf42fd-config\") on node \"crc\" DevicePath \"\"" Jan 26 11:35:32 crc kubenswrapper[4867]: I0126 11:35:32.589155 4867 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/70cb058d-2165-416d-933a-6b4eeabf42fd-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 26 11:35:32 crc kubenswrapper[4867]: I0126 11:35:32.709385 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-bs7ks" Jan 26 11:35:32 crc kubenswrapper[4867]: I0126 11:35:32.816646 4867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-14ec-account-create-update-2wrkd" Jan 26 11:35:32 crc kubenswrapper[4867]: I0126 11:35:32.896662 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6ee2993e-e4e2-4fda-8506-4af3ea92108f-operator-scripts\") pod \"6ee2993e-e4e2-4fda-8506-4af3ea92108f\" (UID: \"6ee2993e-e4e2-4fda-8506-4af3ea92108f\") " Jan 26 11:35:32 crc kubenswrapper[4867]: I0126 11:35:32.896862 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cdv6n\" (UniqueName: \"kubernetes.io/projected/6ee2993e-e4e2-4fda-8506-4af3ea92108f-kube-api-access-cdv6n\") pod \"6ee2993e-e4e2-4fda-8506-4af3ea92108f\" (UID: \"6ee2993e-e4e2-4fda-8506-4af3ea92108f\") " Jan 26 11:35:32 crc kubenswrapper[4867]: I0126 11:35:32.897985 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ee2993e-e4e2-4fda-8506-4af3ea92108f-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "6ee2993e-e4e2-4fda-8506-4af3ea92108f" (UID: "6ee2993e-e4e2-4fda-8506-4af3ea92108f"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 11:35:32 crc kubenswrapper[4867]: I0126 11:35:32.900828 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ee2993e-e4e2-4fda-8506-4af3ea92108f-kube-api-access-cdv6n" (OuterVolumeSpecName: "kube-api-access-cdv6n") pod "6ee2993e-e4e2-4fda-8506-4af3ea92108f" (UID: "6ee2993e-e4e2-4fda-8506-4af3ea92108f"). InnerVolumeSpecName "kube-api-access-cdv6n". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 11:35:32 crc kubenswrapper[4867]: I0126 11:35:32.998303 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ec4f0ae5-3541-4224-8693-6264be64156e-operator-scripts\") pod \"ec4f0ae5-3541-4224-8693-6264be64156e\" (UID: \"ec4f0ae5-3541-4224-8693-6264be64156e\") " Jan 26 11:35:32 crc kubenswrapper[4867]: I0126 11:35:32.998358 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-462n5\" (UniqueName: \"kubernetes.io/projected/ec4f0ae5-3541-4224-8693-6264be64156e-kube-api-access-462n5\") pod \"ec4f0ae5-3541-4224-8693-6264be64156e\" (UID: \"ec4f0ae5-3541-4224-8693-6264be64156e\") " Jan 26 11:35:32 crc kubenswrapper[4867]: I0126 11:35:32.998787 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ec4f0ae5-3541-4224-8693-6264be64156e-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "ec4f0ae5-3541-4224-8693-6264be64156e" (UID: "ec4f0ae5-3541-4224-8693-6264be64156e"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 11:35:32 crc kubenswrapper[4867]: I0126 11:35:32.998899 4867 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6ee2993e-e4e2-4fda-8506-4af3ea92108f-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 11:35:32 crc kubenswrapper[4867]: I0126 11:35:32.998913 4867 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ec4f0ae5-3541-4224-8693-6264be64156e-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 11:35:32 crc kubenswrapper[4867]: I0126 11:35:32.998922 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cdv6n\" (UniqueName: \"kubernetes.io/projected/6ee2993e-e4e2-4fda-8506-4af3ea92108f-kube-api-access-cdv6n\") on node \"crc\" DevicePath \"\"" Jan 26 11:35:33 crc kubenswrapper[4867]: I0126 11:35:33.005423 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ec4f0ae5-3541-4224-8693-6264be64156e-kube-api-access-462n5" (OuterVolumeSpecName: "kube-api-access-462n5") pod "ec4f0ae5-3541-4224-8693-6264be64156e" (UID: "ec4f0ae5-3541-4224-8693-6264be64156e"). InnerVolumeSpecName "kube-api-access-462n5". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 11:35:33 crc kubenswrapper[4867]: I0126 11:35:33.100954 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-462n5\" (UniqueName: \"kubernetes.io/projected/ec4f0ae5-3541-4224-8693-6264be64156e-kube-api-access-462n5\") on node \"crc\" DevicePath \"\"" Jan 26 11:35:33 crc kubenswrapper[4867]: I0126 11:35:33.326520 4867 generic.go:334] "Generic (PLEG): container finished" podID="2e582495-d650-404c-9a13-d28ea98ecbc5" containerID="226d763f25fed2ac285088e56181e339e08e1c391bcef7d09f830c76c2110df5" exitCode=0 Jan 26 11:35:33 crc kubenswrapper[4867]: I0126 11:35:33.326600 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"2e582495-d650-404c-9a13-d28ea98ecbc5","Type":"ContainerDied","Data":"226d763f25fed2ac285088e56181e339e08e1c391bcef7d09f830c76c2110df5"} Jan 26 11:35:33 crc kubenswrapper[4867]: I0126 11:35:33.333847 4867 generic.go:334] "Generic (PLEG): container finished" podID="c491453c-4aa8-458a-8ee3-42475e7678f4" containerID="0b8a22863ccea531a3bb13cd37da122819fc47d06950bba2120f93f63600c55e" exitCode=0 Jan 26 11:35:33 crc kubenswrapper[4867]: I0126 11:35:33.333949 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-s8jqh" event={"ID":"c491453c-4aa8-458a-8ee3-42475e7678f4","Type":"ContainerDied","Data":"0b8a22863ccea531a3bb13cd37da122819fc47d06950bba2120f93f63600c55e"} Jan 26 11:35:33 crc kubenswrapper[4867]: I0126 11:35:33.340278 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-bs7ks" event={"ID":"6ee2993e-e4e2-4fda-8506-4af3ea92108f","Type":"ContainerDied","Data":"b2dbcbf29cbe73e2b82a4873c2e1c28b6a9e8060740516378a91212a5dc19574"} Jan 26 11:35:33 crc kubenswrapper[4867]: I0126 11:35:33.340333 4867 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b2dbcbf29cbe73e2b82a4873c2e1c28b6a9e8060740516378a91212a5dc19574" Jan 26 
11:35:33 crc kubenswrapper[4867]: I0126 11:35:33.340400 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-bs7ks" Jan 26 11:35:33 crc kubenswrapper[4867]: I0126 11:35:33.354925 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-14ec-account-create-update-2wrkd" Jan 26 11:35:33 crc kubenswrapper[4867]: I0126 11:35:33.356838 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-14ec-account-create-update-2wrkd" event={"ID":"ec4f0ae5-3541-4224-8693-6264be64156e","Type":"ContainerDied","Data":"98221a13cb412b52fcd2cf7978f09a4a07dca23bfae5cff2639b67731df0e2dc"} Jan 26 11:35:33 crc kubenswrapper[4867]: I0126 11:35:33.356877 4867 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="98221a13cb412b52fcd2cf7978f09a4a07dca23bfae5cff2639b67731df0e2dc" Jan 26 11:35:33 crc kubenswrapper[4867]: I0126 11:35:33.356894 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-rk4v5"] Jan 26 11:35:33 crc kubenswrapper[4867]: I0126 11:35:33.357020 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8cc7fc4dc-4z4hp" Jan 26 11:35:33 crc kubenswrapper[4867]: I0126 11:35:33.366150 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-rk4v5"] Jan 26 11:35:33 crc kubenswrapper[4867]: I0126 11:35:33.475919 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-8cc7fc4dc-4z4hp"] Jan 26 11:35:33 crc kubenswrapper[4867]: I0126 11:35:33.484757 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-8cc7fc4dc-4z4hp"] Jan 26 11:35:33 crc kubenswrapper[4867]: I0126 11:35:33.638341 4867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-d8fb-account-create-update-fpsgc" Jan 26 11:35:33 crc kubenswrapper[4867]: I0126 11:35:33.818441 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-klwvx\" (UniqueName: \"kubernetes.io/projected/f0055f8a-079d-477c-9dab-f6e66fc7e0a0-kube-api-access-klwvx\") pod \"f0055f8a-079d-477c-9dab-f6e66fc7e0a0\" (UID: \"f0055f8a-079d-477c-9dab-f6e66fc7e0a0\") " Jan 26 11:35:33 crc kubenswrapper[4867]: I0126 11:35:33.819663 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f0055f8a-079d-477c-9dab-f6e66fc7e0a0-operator-scripts\") pod \"f0055f8a-079d-477c-9dab-f6e66fc7e0a0\" (UID: \"f0055f8a-079d-477c-9dab-f6e66fc7e0a0\") " Jan 26 11:35:33 crc kubenswrapper[4867]: I0126 11:35:33.820127 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f0055f8a-079d-477c-9dab-f6e66fc7e0a0-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "f0055f8a-079d-477c-9dab-f6e66fc7e0a0" (UID: "f0055f8a-079d-477c-9dab-f6e66fc7e0a0"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 11:35:33 crc kubenswrapper[4867]: I0126 11:35:33.824155 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-51a0-account-create-update-lcjf9" Jan 26 11:35:33 crc kubenswrapper[4867]: I0126 11:35:33.829688 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f0055f8a-079d-477c-9dab-f6e66fc7e0a0-kube-api-access-klwvx" (OuterVolumeSpecName: "kube-api-access-klwvx") pod "f0055f8a-079d-477c-9dab-f6e66fc7e0a0" (UID: "f0055f8a-079d-477c-9dab-f6e66fc7e0a0"). InnerVolumeSpecName "kube-api-access-klwvx". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 11:35:33 crc kubenswrapper[4867]: I0126 11:35:33.920948 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7fk59\" (UniqueName: \"kubernetes.io/projected/4ad2b2c0-428a-4a2b-943d-91966c6f7403-kube-api-access-7fk59\") pod \"4ad2b2c0-428a-4a2b-943d-91966c6f7403\" (UID: \"4ad2b2c0-428a-4a2b-943d-91966c6f7403\") " Jan 26 11:35:33 crc kubenswrapper[4867]: I0126 11:35:33.921002 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4ad2b2c0-428a-4a2b-943d-91966c6f7403-operator-scripts\") pod \"4ad2b2c0-428a-4a2b-943d-91966c6f7403\" (UID: \"4ad2b2c0-428a-4a2b-943d-91966c6f7403\") " Jan 26 11:35:33 crc kubenswrapper[4867]: I0126 11:35:33.921295 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-klwvx\" (UniqueName: \"kubernetes.io/projected/f0055f8a-079d-477c-9dab-f6e66fc7e0a0-kube-api-access-klwvx\") on node \"crc\" DevicePath \"\"" Jan 26 11:35:33 crc kubenswrapper[4867]: I0126 11:35:33.921316 4867 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f0055f8a-079d-477c-9dab-f6e66fc7e0a0-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 11:35:33 crc kubenswrapper[4867]: I0126 11:35:33.922500 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4ad2b2c0-428a-4a2b-943d-91966c6f7403-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "4ad2b2c0-428a-4a2b-943d-91966c6f7403" (UID: "4ad2b2c0-428a-4a2b-943d-91966c6f7403"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 11:35:33 crc kubenswrapper[4867]: I0126 11:35:33.927357 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4ad2b2c0-428a-4a2b-943d-91966c6f7403-kube-api-access-7fk59" (OuterVolumeSpecName: "kube-api-access-7fk59") pod "4ad2b2c0-428a-4a2b-943d-91966c6f7403" (UID: "4ad2b2c0-428a-4a2b-943d-91966c6f7403"). InnerVolumeSpecName "kube-api-access-7fk59". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 11:35:34 crc kubenswrapper[4867]: I0126 11:35:34.033632 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7fk59\" (UniqueName: \"kubernetes.io/projected/4ad2b2c0-428a-4a2b-943d-91966c6f7403-kube-api-access-7fk59\") on node \"crc\" DevicePath \"\"" Jan 26 11:35:34 crc kubenswrapper[4867]: I0126 11:35:34.033676 4867 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4ad2b2c0-428a-4a2b-943d-91966c6f7403-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 11:35:34 crc kubenswrapper[4867]: I0126 11:35:34.122835 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-2xl9x" Jan 26 11:35:34 crc kubenswrapper[4867]: I0126 11:35:34.129056 4867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-create-z9jck" Jan 26 11:35:34 crc kubenswrapper[4867]: I0126 11:35:34.235799 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ede5a15e-c616-482a-8f65-dcc40b72bac9-operator-scripts\") pod \"ede5a15e-c616-482a-8f65-dcc40b72bac9\" (UID: \"ede5a15e-c616-482a-8f65-dcc40b72bac9\") " Jan 26 11:35:34 crc kubenswrapper[4867]: I0126 11:35:34.236190 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-shm56\" (UniqueName: \"kubernetes.io/projected/ede5a15e-c616-482a-8f65-dcc40b72bac9-kube-api-access-shm56\") pod \"ede5a15e-c616-482a-8f65-dcc40b72bac9\" (UID: \"ede5a15e-c616-482a-8f65-dcc40b72bac9\") " Jan 26 11:35:34 crc kubenswrapper[4867]: I0126 11:35:34.236305 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ede5a15e-c616-482a-8f65-dcc40b72bac9-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "ede5a15e-c616-482a-8f65-dcc40b72bac9" (UID: "ede5a15e-c616-482a-8f65-dcc40b72bac9"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 11:35:34 crc kubenswrapper[4867]: I0126 11:35:34.236340 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c90c2ed7-4485-455b-bba2-42014178d9be-operator-scripts\") pod \"c90c2ed7-4485-455b-bba2-42014178d9be\" (UID: \"c90c2ed7-4485-455b-bba2-42014178d9be\") " Jan 26 11:35:34 crc kubenswrapper[4867]: I0126 11:35:34.236362 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xbr9j\" (UniqueName: \"kubernetes.io/projected/c90c2ed7-4485-455b-bba2-42014178d9be-kube-api-access-xbr9j\") pod \"c90c2ed7-4485-455b-bba2-42014178d9be\" (UID: \"c90c2ed7-4485-455b-bba2-42014178d9be\") " Jan 26 11:35:34 crc kubenswrapper[4867]: I0126 11:35:34.236931 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c90c2ed7-4485-455b-bba2-42014178d9be-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "c90c2ed7-4485-455b-bba2-42014178d9be" (UID: "c90c2ed7-4485-455b-bba2-42014178d9be"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 11:35:34 crc kubenswrapper[4867]: I0126 11:35:34.237174 4867 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c90c2ed7-4485-455b-bba2-42014178d9be-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 11:35:34 crc kubenswrapper[4867]: I0126 11:35:34.237191 4867 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ede5a15e-c616-482a-8f65-dcc40b72bac9-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 11:35:34 crc kubenswrapper[4867]: I0126 11:35:34.241354 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c90c2ed7-4485-455b-bba2-42014178d9be-kube-api-access-xbr9j" (OuterVolumeSpecName: "kube-api-access-xbr9j") pod "c90c2ed7-4485-455b-bba2-42014178d9be" (UID: "c90c2ed7-4485-455b-bba2-42014178d9be"). InnerVolumeSpecName "kube-api-access-xbr9j". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 11:35:34 crc kubenswrapper[4867]: I0126 11:35:34.242515 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ede5a15e-c616-482a-8f65-dcc40b72bac9-kube-api-access-shm56" (OuterVolumeSpecName: "kube-api-access-shm56") pod "ede5a15e-c616-482a-8f65-dcc40b72bac9" (UID: "ede5a15e-c616-482a-8f65-dcc40b72bac9"). InnerVolumeSpecName "kube-api-access-shm56". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 11:35:34 crc kubenswrapper[4867]: I0126 11:35:34.339023 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-shm56\" (UniqueName: \"kubernetes.io/projected/ede5a15e-c616-482a-8f65-dcc40b72bac9-kube-api-access-shm56\") on node \"crc\" DevicePath \"\"" Jan 26 11:35:34 crc kubenswrapper[4867]: I0126 11:35:34.339061 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xbr9j\" (UniqueName: \"kubernetes.io/projected/c90c2ed7-4485-455b-bba2-42014178d9be-kube-api-access-xbr9j\") on node \"crc\" DevicePath \"\"" Jan 26 11:35:34 crc kubenswrapper[4867]: I0126 11:35:34.362972 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"2e582495-d650-404c-9a13-d28ea98ecbc5","Type":"ContainerStarted","Data":"e533084a0585a61f55ac4afe544fd654512dc04984b3f63f54a5b924940e17b3"} Jan 26 11:35:34 crc kubenswrapper[4867]: I0126 11:35:34.364734 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Jan 26 11:35:34 crc kubenswrapper[4867]: I0126 11:35:34.366990 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-2xl9x" event={"ID":"ede5a15e-c616-482a-8f65-dcc40b72bac9","Type":"ContainerDied","Data":"a7951ea48fd44934c0eb54a182a2a480a40def4309120eb0ef524e2e038223d2"} Jan 26 11:35:34 crc kubenswrapper[4867]: I0126 11:35:34.367029 4867 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a7951ea48fd44934c0eb54a182a2a480a40def4309120eb0ef524e2e038223d2" Jan 26 11:35:34 crc kubenswrapper[4867]: I0126 11:35:34.367092 4867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-create-2xl9x" Jan 26 11:35:34 crc kubenswrapper[4867]: I0126 11:35:34.377998 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-51a0-account-create-update-lcjf9" event={"ID":"4ad2b2c0-428a-4a2b-943d-91966c6f7403","Type":"ContainerDied","Data":"f857fc9610e5dce1a288a29dc059555a74a0d8c8f9db6d11ebb884f847974ba0"} Jan 26 11:35:34 crc kubenswrapper[4867]: I0126 11:35:34.378049 4867 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f857fc9610e5dce1a288a29dc059555a74a0d8c8f9db6d11ebb884f847974ba0" Jan 26 11:35:34 crc kubenswrapper[4867]: I0126 11:35:34.378009 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-51a0-account-create-update-lcjf9" Jan 26 11:35:34 crc kubenswrapper[4867]: I0126 11:35:34.379872 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-d8fb-account-create-update-fpsgc" event={"ID":"f0055f8a-079d-477c-9dab-f6e66fc7e0a0","Type":"ContainerDied","Data":"0aab521bb1fc3b849bdb0d5b9f8e97c1ef8456e3cf94401b68aa1ced01e438f6"} Jan 26 11:35:34 crc kubenswrapper[4867]: I0126 11:35:34.379914 4867 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0aab521bb1fc3b849bdb0d5b9f8e97c1ef8456e3cf94401b68aa1ced01e438f6" Jan 26 11:35:34 crc kubenswrapper[4867]: I0126 11:35:34.379881 4867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-d8fb-account-create-update-fpsgc" Jan 26 11:35:34 crc kubenswrapper[4867]: I0126 11:35:34.381096 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-z9jck" event={"ID":"c90c2ed7-4485-455b-bba2-42014178d9be","Type":"ContainerDied","Data":"42b87d218d873c48e675fe68fb0b85d6a31ab506a130f966e0474807d57c487d"} Jan 26 11:35:34 crc kubenswrapper[4867]: I0126 11:35:34.381139 4867 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="42b87d218d873c48e675fe68fb0b85d6a31ab506a130f966e0474807d57c487d" Jan 26 11:35:34 crc kubenswrapper[4867]: I0126 11:35:34.381138 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-z9jck" Jan 26 11:35:34 crc kubenswrapper[4867]: I0126 11:35:34.392436 4867 generic.go:334] "Generic (PLEG): container finished" podID="4d2bfda4-48fc-4d87-94ae-3b53adc90a3a" containerID="b2cdafe3e00677646dd69530266947366f273aac1a046750a5e001a7513bbeda" exitCode=0 Jan 26 11:35:34 crc kubenswrapper[4867]: I0126 11:35:34.392888 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"4d2bfda4-48fc-4d87-94ae-3b53adc90a3a","Type":"ContainerDied","Data":"b2cdafe3e00677646dd69530266947366f273aac1a046750a5e001a7513bbeda"} Jan 26 11:35:34 crc kubenswrapper[4867]: I0126 11:35:34.422711 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=44.57954674 podStartE2EDuration="58.422690704s" podCreationTimestamp="2026-01-26 11:34:36 +0000 UTC" firstStartedPulling="2026-01-26 11:34:44.954935466 +0000 UTC m=+1034.653510376" lastFinishedPulling="2026-01-26 11:34:58.79807942 +0000 UTC m=+1048.496654340" observedRunningTime="2026-01-26 11:35:34.411122741 +0000 UTC m=+1084.109697661" watchObservedRunningTime="2026-01-26 11:35:34.422690704 +0000 UTC m=+1084.121265614" Jan 26 11:35:34 crc 
kubenswrapper[4867]: I0126 11:35:34.597535 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="190b3224-57c6-42d8-8ab0-e026065ff44c" path="/var/lib/kubelet/pods/190b3224-57c6-42d8-8ab0-e026065ff44c/volumes" Jan 26 11:35:34 crc kubenswrapper[4867]: I0126 11:35:34.598639 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="70cb058d-2165-416d-933a-6b4eeabf42fd" path="/var/lib/kubelet/pods/70cb058d-2165-416d-933a-6b4eeabf42fd/volumes" Jan 26 11:35:34 crc kubenswrapper[4867]: I0126 11:35:34.832053 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-s8jqh" Jan 26 11:35:34 crc kubenswrapper[4867]: I0126 11:35:34.954484 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c491453c-4aa8-458a-8ee3-42475e7678f4-scripts\") pod \"c491453c-4aa8-458a-8ee3-42475e7678f4\" (UID: \"c491453c-4aa8-458a-8ee3-42475e7678f4\") " Jan 26 11:35:34 crc kubenswrapper[4867]: I0126 11:35:34.954900 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/c491453c-4aa8-458a-8ee3-42475e7678f4-ring-data-devices\") pod \"c491453c-4aa8-458a-8ee3-42475e7678f4\" (UID: \"c491453c-4aa8-458a-8ee3-42475e7678f4\") " Jan 26 11:35:34 crc kubenswrapper[4867]: I0126 11:35:34.954937 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c491453c-4aa8-458a-8ee3-42475e7678f4-combined-ca-bundle\") pod \"c491453c-4aa8-458a-8ee3-42475e7678f4\" (UID: \"c491453c-4aa8-458a-8ee3-42475e7678f4\") " Jan 26 11:35:34 crc kubenswrapper[4867]: I0126 11:35:34.954961 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/c491453c-4aa8-458a-8ee3-42475e7678f4-swiftconf\") pod 
\"c491453c-4aa8-458a-8ee3-42475e7678f4\" (UID: \"c491453c-4aa8-458a-8ee3-42475e7678f4\") " Jan 26 11:35:34 crc kubenswrapper[4867]: I0126 11:35:34.954994 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/c491453c-4aa8-458a-8ee3-42475e7678f4-dispersionconf\") pod \"c491453c-4aa8-458a-8ee3-42475e7678f4\" (UID: \"c491453c-4aa8-458a-8ee3-42475e7678f4\") " Jan 26 11:35:34 crc kubenswrapper[4867]: I0126 11:35:34.955082 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/c491453c-4aa8-458a-8ee3-42475e7678f4-etc-swift\") pod \"c491453c-4aa8-458a-8ee3-42475e7678f4\" (UID: \"c491453c-4aa8-458a-8ee3-42475e7678f4\") " Jan 26 11:35:34 crc kubenswrapper[4867]: I0126 11:35:34.955117 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dw7wr\" (UniqueName: \"kubernetes.io/projected/c491453c-4aa8-458a-8ee3-42475e7678f4-kube-api-access-dw7wr\") pod \"c491453c-4aa8-458a-8ee3-42475e7678f4\" (UID: \"c491453c-4aa8-458a-8ee3-42475e7678f4\") " Jan 26 11:35:34 crc kubenswrapper[4867]: I0126 11:35:34.955844 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c491453c-4aa8-458a-8ee3-42475e7678f4-ring-data-devices" (OuterVolumeSpecName: "ring-data-devices") pod "c491453c-4aa8-458a-8ee3-42475e7678f4" (UID: "c491453c-4aa8-458a-8ee3-42475e7678f4"). InnerVolumeSpecName "ring-data-devices". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 11:35:34 crc kubenswrapper[4867]: I0126 11:35:34.957023 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c491453c-4aa8-458a-8ee3-42475e7678f4-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "c491453c-4aa8-458a-8ee3-42475e7678f4" (UID: "c491453c-4aa8-458a-8ee3-42475e7678f4"). InnerVolumeSpecName "etc-swift". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 11:35:34 crc kubenswrapper[4867]: I0126 11:35:34.964365 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c491453c-4aa8-458a-8ee3-42475e7678f4-kube-api-access-dw7wr" (OuterVolumeSpecName: "kube-api-access-dw7wr") pod "c491453c-4aa8-458a-8ee3-42475e7678f4" (UID: "c491453c-4aa8-458a-8ee3-42475e7678f4"). InnerVolumeSpecName "kube-api-access-dw7wr". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 11:35:34 crc kubenswrapper[4867]: I0126 11:35:34.966883 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c491453c-4aa8-458a-8ee3-42475e7678f4-dispersionconf" (OuterVolumeSpecName: "dispersionconf") pod "c491453c-4aa8-458a-8ee3-42475e7678f4" (UID: "c491453c-4aa8-458a-8ee3-42475e7678f4"). InnerVolumeSpecName "dispersionconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 11:35:34 crc kubenswrapper[4867]: I0126 11:35:34.985617 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c491453c-4aa8-458a-8ee3-42475e7678f4-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c491453c-4aa8-458a-8ee3-42475e7678f4" (UID: "c491453c-4aa8-458a-8ee3-42475e7678f4"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 11:35:34 crc kubenswrapper[4867]: I0126 11:35:34.986402 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c491453c-4aa8-458a-8ee3-42475e7678f4-scripts" (OuterVolumeSpecName: "scripts") pod "c491453c-4aa8-458a-8ee3-42475e7678f4" (UID: "c491453c-4aa8-458a-8ee3-42475e7678f4"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 11:35:34 crc kubenswrapper[4867]: I0126 11:35:34.992023 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c491453c-4aa8-458a-8ee3-42475e7678f4-swiftconf" (OuterVolumeSpecName: "swiftconf") pod "c491453c-4aa8-458a-8ee3-42475e7678f4" (UID: "c491453c-4aa8-458a-8ee3-42475e7678f4"). InnerVolumeSpecName "swiftconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 11:35:35 crc kubenswrapper[4867]: I0126 11:35:35.057799 4867 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/c491453c-4aa8-458a-8ee3-42475e7678f4-etc-swift\") on node \"crc\" DevicePath \"\"" Jan 26 11:35:35 crc kubenswrapper[4867]: I0126 11:35:35.057854 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dw7wr\" (UniqueName: \"kubernetes.io/projected/c491453c-4aa8-458a-8ee3-42475e7678f4-kube-api-access-dw7wr\") on node \"crc\" DevicePath \"\"" Jan 26 11:35:35 crc kubenswrapper[4867]: I0126 11:35:35.057872 4867 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c491453c-4aa8-458a-8ee3-42475e7678f4-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 11:35:35 crc kubenswrapper[4867]: I0126 11:35:35.057885 4867 reconciler_common.go:293] "Volume detached for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/c491453c-4aa8-458a-8ee3-42475e7678f4-ring-data-devices\") on node \"crc\" DevicePath \"\"" Jan 26 11:35:35 crc kubenswrapper[4867]: I0126 11:35:35.057898 4867 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c491453c-4aa8-458a-8ee3-42475e7678f4-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 11:35:35 crc kubenswrapper[4867]: I0126 11:35:35.057913 4867 reconciler_common.go:293] "Volume detached for volume \"swiftconf\" (UniqueName: 
\"kubernetes.io/secret/c491453c-4aa8-458a-8ee3-42475e7678f4-swiftconf\") on node \"crc\" DevicePath \"\"" Jan 26 11:35:35 crc kubenswrapper[4867]: I0126 11:35:35.057924 4867 reconciler_common.go:293] "Volume detached for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/c491453c-4aa8-458a-8ee3-42475e7678f4-dispersionconf\") on node \"crc\" DevicePath \"\"" Jan 26 11:35:35 crc kubenswrapper[4867]: I0126 11:35:35.403964 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"4d2bfda4-48fc-4d87-94ae-3b53adc90a3a","Type":"ContainerStarted","Data":"cdb1a2cffa8951dd7dbaede56a4870af51c7e47860e39482d1c2d0cd8e9f4606"} Jan 26 11:35:35 crc kubenswrapper[4867]: I0126 11:35:35.406636 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-s8jqh" Jan 26 11:35:35 crc kubenswrapper[4867]: I0126 11:35:35.406637 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-s8jqh" event={"ID":"c491453c-4aa8-458a-8ee3-42475e7678f4","Type":"ContainerDied","Data":"5593d242d53a2b2d9e492b27f8907490c6788cbae28929e5cc4dda4f6cca0e1a"} Jan 26 11:35:35 crc kubenswrapper[4867]: I0126 11:35:35.406893 4867 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5593d242d53a2b2d9e492b27f8907490c6788cbae28929e5cc4dda4f6cca0e1a" Jan 26 11:35:35 crc kubenswrapper[4867]: I0126 11:35:35.466696 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=39.352880051 podStartE2EDuration="1m0.466677999s" podCreationTimestamp="2026-01-26 11:34:35 +0000 UTC" firstStartedPulling="2026-01-26 11:34:37.694296182 +0000 UTC m=+1027.392871092" lastFinishedPulling="2026-01-26 11:34:58.80809414 +0000 UTC m=+1048.506669040" observedRunningTime="2026-01-26 11:35:35.45421415 +0000 UTC m=+1085.152789060" watchObservedRunningTime="2026-01-26 11:35:35.466677999 +0000 UTC 
m=+1085.165252909" Jan 26 11:35:35 crc kubenswrapper[4867]: I0126 11:35:35.565470 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-sync-mjdws"] Jan 26 11:35:35 crc kubenswrapper[4867]: E0126 11:35:35.566304 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="190b3224-57c6-42d8-8ab0-e026065ff44c" containerName="mariadb-account-create-update" Jan 26 11:35:35 crc kubenswrapper[4867]: I0126 11:35:35.566343 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="190b3224-57c6-42d8-8ab0-e026065ff44c" containerName="mariadb-account-create-update" Jan 26 11:35:35 crc kubenswrapper[4867]: E0126 11:35:35.566378 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="70cb058d-2165-416d-933a-6b4eeabf42fd" containerName="init" Jan 26 11:35:35 crc kubenswrapper[4867]: I0126 11:35:35.566387 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="70cb058d-2165-416d-933a-6b4eeabf42fd" containerName="init" Jan 26 11:35:35 crc kubenswrapper[4867]: E0126 11:35:35.566403 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4ad2b2c0-428a-4a2b-943d-91966c6f7403" containerName="mariadb-account-create-update" Jan 26 11:35:35 crc kubenswrapper[4867]: I0126 11:35:35.566411 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="4ad2b2c0-428a-4a2b-943d-91966c6f7403" containerName="mariadb-account-create-update" Jan 26 11:35:35 crc kubenswrapper[4867]: E0126 11:35:35.566425 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f0055f8a-079d-477c-9dab-f6e66fc7e0a0" containerName="mariadb-account-create-update" Jan 26 11:35:35 crc kubenswrapper[4867]: I0126 11:35:35.566433 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="f0055f8a-079d-477c-9dab-f6e66fc7e0a0" containerName="mariadb-account-create-update" Jan 26 11:35:35 crc kubenswrapper[4867]: E0126 11:35:35.566447 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ec4f0ae5-3541-4224-8693-6264be64156e" 
containerName="mariadb-account-create-update" Jan 26 11:35:35 crc kubenswrapper[4867]: I0126 11:35:35.566455 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="ec4f0ae5-3541-4224-8693-6264be64156e" containerName="mariadb-account-create-update" Jan 26 11:35:35 crc kubenswrapper[4867]: E0126 11:35:35.566473 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ede5a15e-c616-482a-8f65-dcc40b72bac9" containerName="mariadb-database-create" Jan 26 11:35:35 crc kubenswrapper[4867]: I0126 11:35:35.566482 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="ede5a15e-c616-482a-8f65-dcc40b72bac9" containerName="mariadb-database-create" Jan 26 11:35:35 crc kubenswrapper[4867]: E0126 11:35:35.566495 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="70cb058d-2165-416d-933a-6b4eeabf42fd" containerName="dnsmasq-dns" Jan 26 11:35:35 crc kubenswrapper[4867]: I0126 11:35:35.566503 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="70cb058d-2165-416d-933a-6b4eeabf42fd" containerName="dnsmasq-dns" Jan 26 11:35:35 crc kubenswrapper[4867]: E0126 11:35:35.566521 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c90c2ed7-4485-455b-bba2-42014178d9be" containerName="mariadb-database-create" Jan 26 11:35:35 crc kubenswrapper[4867]: I0126 11:35:35.566529 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="c90c2ed7-4485-455b-bba2-42014178d9be" containerName="mariadb-database-create" Jan 26 11:35:35 crc kubenswrapper[4867]: E0126 11:35:35.566544 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6ee2993e-e4e2-4fda-8506-4af3ea92108f" containerName="mariadb-database-create" Jan 26 11:35:35 crc kubenswrapper[4867]: I0126 11:35:35.566551 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="6ee2993e-e4e2-4fda-8506-4af3ea92108f" containerName="mariadb-database-create" Jan 26 11:35:35 crc kubenswrapper[4867]: E0126 11:35:35.566564 4867 cpu_manager.go:410] "RemoveStaleState: removing 
container" podUID="c491453c-4aa8-458a-8ee3-42475e7678f4" containerName="swift-ring-rebalance" Jan 26 11:35:35 crc kubenswrapper[4867]: I0126 11:35:35.566571 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="c491453c-4aa8-458a-8ee3-42475e7678f4" containerName="swift-ring-rebalance" Jan 26 11:35:35 crc kubenswrapper[4867]: I0126 11:35:35.566789 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="f0055f8a-079d-477c-9dab-f6e66fc7e0a0" containerName="mariadb-account-create-update" Jan 26 11:35:35 crc kubenswrapper[4867]: I0126 11:35:35.566811 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="ec4f0ae5-3541-4224-8693-6264be64156e" containerName="mariadb-account-create-update" Jan 26 11:35:35 crc kubenswrapper[4867]: I0126 11:35:35.566825 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="c90c2ed7-4485-455b-bba2-42014178d9be" containerName="mariadb-database-create" Jan 26 11:35:35 crc kubenswrapper[4867]: I0126 11:35:35.566844 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="c491453c-4aa8-458a-8ee3-42475e7678f4" containerName="swift-ring-rebalance" Jan 26 11:35:35 crc kubenswrapper[4867]: I0126 11:35:35.566852 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="190b3224-57c6-42d8-8ab0-e026065ff44c" containerName="mariadb-account-create-update" Jan 26 11:35:35 crc kubenswrapper[4867]: I0126 11:35:35.566864 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="70cb058d-2165-416d-933a-6b4eeabf42fd" containerName="dnsmasq-dns" Jan 26 11:35:35 crc kubenswrapper[4867]: I0126 11:35:35.566874 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="ede5a15e-c616-482a-8f65-dcc40b72bac9" containerName="mariadb-database-create" Jan 26 11:35:35 crc kubenswrapper[4867]: I0126 11:35:35.566889 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="6ee2993e-e4e2-4fda-8506-4af3ea92108f" containerName="mariadb-database-create" Jan 26 11:35:35 
crc kubenswrapper[4867]: I0126 11:35:35.566900 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="4ad2b2c0-428a-4a2b-943d-91966c6f7403" containerName="mariadb-account-create-update" Jan 26 11:35:35 crc kubenswrapper[4867]: I0126 11:35:35.567727 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-mjdws" Jan 26 11:35:35 crc kubenswrapper[4867]: I0126 11:35:35.573661 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-wqrz8" Jan 26 11:35:35 crc kubenswrapper[4867]: I0126 11:35:35.573809 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-config-data" Jan 26 11:35:35 crc kubenswrapper[4867]: I0126 11:35:35.577715 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-mjdws"] Jan 26 11:35:35 crc kubenswrapper[4867]: I0126 11:35:35.673675 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fa78acbb-8b93-4977-8ccf-fc79314b6f2e-combined-ca-bundle\") pod \"glance-db-sync-mjdws\" (UID: \"fa78acbb-8b93-4977-8ccf-fc79314b6f2e\") " pod="openstack/glance-db-sync-mjdws" Jan 26 11:35:35 crc kubenswrapper[4867]: I0126 11:35:35.674293 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/fa78acbb-8b93-4977-8ccf-fc79314b6f2e-db-sync-config-data\") pod \"glance-db-sync-mjdws\" (UID: \"fa78acbb-8b93-4977-8ccf-fc79314b6f2e\") " pod="openstack/glance-db-sync-mjdws" Jan 26 11:35:35 crc kubenswrapper[4867]: I0126 11:35:35.674406 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-957kf\" (UniqueName: \"kubernetes.io/projected/fa78acbb-8b93-4977-8ccf-fc79314b6f2e-kube-api-access-957kf\") pod \"glance-db-sync-mjdws\" (UID: 
\"fa78acbb-8b93-4977-8ccf-fc79314b6f2e\") " pod="openstack/glance-db-sync-mjdws" Jan 26 11:35:35 crc kubenswrapper[4867]: I0126 11:35:35.674516 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fa78acbb-8b93-4977-8ccf-fc79314b6f2e-config-data\") pod \"glance-db-sync-mjdws\" (UID: \"fa78acbb-8b93-4977-8ccf-fc79314b6f2e\") " pod="openstack/glance-db-sync-mjdws" Jan 26 11:35:35 crc kubenswrapper[4867]: I0126 11:35:35.775712 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fa78acbb-8b93-4977-8ccf-fc79314b6f2e-config-data\") pod \"glance-db-sync-mjdws\" (UID: \"fa78acbb-8b93-4977-8ccf-fc79314b6f2e\") " pod="openstack/glance-db-sync-mjdws" Jan 26 11:35:35 crc kubenswrapper[4867]: I0126 11:35:35.776193 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fa78acbb-8b93-4977-8ccf-fc79314b6f2e-combined-ca-bundle\") pod \"glance-db-sync-mjdws\" (UID: \"fa78acbb-8b93-4977-8ccf-fc79314b6f2e\") " pod="openstack/glance-db-sync-mjdws" Jan 26 11:35:35 crc kubenswrapper[4867]: I0126 11:35:35.776715 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/fa78acbb-8b93-4977-8ccf-fc79314b6f2e-db-sync-config-data\") pod \"glance-db-sync-mjdws\" (UID: \"fa78acbb-8b93-4977-8ccf-fc79314b6f2e\") " pod="openstack/glance-db-sync-mjdws" Jan 26 11:35:35 crc kubenswrapper[4867]: I0126 11:35:35.776851 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-957kf\" (UniqueName: \"kubernetes.io/projected/fa78acbb-8b93-4977-8ccf-fc79314b6f2e-kube-api-access-957kf\") pod \"glance-db-sync-mjdws\" (UID: \"fa78acbb-8b93-4977-8ccf-fc79314b6f2e\") " pod="openstack/glance-db-sync-mjdws" Jan 26 11:35:35 crc 
kubenswrapper[4867]: I0126 11:35:35.780427 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/fa78acbb-8b93-4977-8ccf-fc79314b6f2e-db-sync-config-data\") pod \"glance-db-sync-mjdws\" (UID: \"fa78acbb-8b93-4977-8ccf-fc79314b6f2e\") " pod="openstack/glance-db-sync-mjdws" Jan 26 11:35:35 crc kubenswrapper[4867]: I0126 11:35:35.780495 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fa78acbb-8b93-4977-8ccf-fc79314b6f2e-config-data\") pod \"glance-db-sync-mjdws\" (UID: \"fa78acbb-8b93-4977-8ccf-fc79314b6f2e\") " pod="openstack/glance-db-sync-mjdws" Jan 26 11:35:35 crc kubenswrapper[4867]: I0126 11:35:35.793101 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fa78acbb-8b93-4977-8ccf-fc79314b6f2e-combined-ca-bundle\") pod \"glance-db-sync-mjdws\" (UID: \"fa78acbb-8b93-4977-8ccf-fc79314b6f2e\") " pod="openstack/glance-db-sync-mjdws" Jan 26 11:35:35 crc kubenswrapper[4867]: I0126 11:35:35.793685 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-957kf\" (UniqueName: \"kubernetes.io/projected/fa78acbb-8b93-4977-8ccf-fc79314b6f2e-kube-api-access-957kf\") pod \"glance-db-sync-mjdws\" (UID: \"fa78acbb-8b93-4977-8ccf-fc79314b6f2e\") " pod="openstack/glance-db-sync-mjdws" Jan 26 11:35:35 crc kubenswrapper[4867]: I0126 11:35:35.885894 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-mjdws" Jan 26 11:35:36 crc kubenswrapper[4867]: I0126 11:35:36.427903 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-mjdws"] Jan 26 11:35:37 crc kubenswrapper[4867]: I0126 11:35:37.128301 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-zwmkv"] Jan 26 11:35:37 crc kubenswrapper[4867]: I0126 11:35:37.129778 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-zwmkv" Jan 26 11:35:37 crc kubenswrapper[4867]: I0126 11:35:37.132075 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-mariadb-root-db-secret" Jan 26 11:35:37 crc kubenswrapper[4867]: I0126 11:35:37.145710 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Jan 26 11:35:37 crc kubenswrapper[4867]: I0126 11:35:37.146645 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-zwmkv"] Jan 26 11:35:37 crc kubenswrapper[4867]: I0126 11:35:37.200143 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kc6bh\" (UniqueName: \"kubernetes.io/projected/23253c55-8557-40e6-9be5-6cebf6e5f412-kube-api-access-kc6bh\") pod \"root-account-create-update-zwmkv\" (UID: \"23253c55-8557-40e6-9be5-6cebf6e5f412\") " pod="openstack/root-account-create-update-zwmkv" Jan 26 11:35:37 crc kubenswrapper[4867]: I0126 11:35:37.200539 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/23253c55-8557-40e6-9be5-6cebf6e5f412-operator-scripts\") pod \"root-account-create-update-zwmkv\" (UID: \"23253c55-8557-40e6-9be5-6cebf6e5f412\") " pod="openstack/root-account-create-update-zwmkv" Jan 26 11:35:37 crc kubenswrapper[4867]: I0126 11:35:37.301955 4867 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/23253c55-8557-40e6-9be5-6cebf6e5f412-operator-scripts\") pod \"root-account-create-update-zwmkv\" (UID: \"23253c55-8557-40e6-9be5-6cebf6e5f412\") " pod="openstack/root-account-create-update-zwmkv" Jan 26 11:35:37 crc kubenswrapper[4867]: I0126 11:35:37.302096 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kc6bh\" (UniqueName: \"kubernetes.io/projected/23253c55-8557-40e6-9be5-6cebf6e5f412-kube-api-access-kc6bh\") pod \"root-account-create-update-zwmkv\" (UID: \"23253c55-8557-40e6-9be5-6cebf6e5f412\") " pod="openstack/root-account-create-update-zwmkv" Jan 26 11:35:37 crc kubenswrapper[4867]: I0126 11:35:37.308608 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/23253c55-8557-40e6-9be5-6cebf6e5f412-operator-scripts\") pod \"root-account-create-update-zwmkv\" (UID: \"23253c55-8557-40e6-9be5-6cebf6e5f412\") " pod="openstack/root-account-create-update-zwmkv" Jan 26 11:35:37 crc kubenswrapper[4867]: I0126 11:35:37.376734 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kc6bh\" (UniqueName: \"kubernetes.io/projected/23253c55-8557-40e6-9be5-6cebf6e5f412-kube-api-access-kc6bh\") pod \"root-account-create-update-zwmkv\" (UID: \"23253c55-8557-40e6-9be5-6cebf6e5f412\") " pod="openstack/root-account-create-update-zwmkv" Jan 26 11:35:37 crc kubenswrapper[4867]: I0126 11:35:37.434326 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-mjdws" event={"ID":"fa78acbb-8b93-4977-8ccf-fc79314b6f2e","Type":"ContainerStarted","Data":"231d68a193c100b8af44d3b707d49cc2567ebb0b3f5a4fca02453194e4f91810"} Jan 26 11:35:37 crc kubenswrapper[4867]: I0126 11:35:37.450583 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-zwmkv" Jan 26 11:35:38 crc kubenswrapper[4867]: I0126 11:35:38.040269 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-zwmkv"] Jan 26 11:35:38 crc kubenswrapper[4867]: I0126 11:35:38.445861 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-zwmkv" event={"ID":"23253c55-8557-40e6-9be5-6cebf6e5f412","Type":"ContainerStarted","Data":"86e2e54b98e4fa4dd7606192db0e6276fe28d138b45eb93d066c11dec8040c34"} Jan 26 11:35:38 crc kubenswrapper[4867]: I0126 11:35:38.446246 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-zwmkv" event={"ID":"23253c55-8557-40e6-9be5-6cebf6e5f412","Type":"ContainerStarted","Data":"bea45778f9efadea336747ddf90f5ec715a6a262969b5bbb0dfac4b1ba7d6a38"} Jan 26 11:35:38 crc kubenswrapper[4867]: I0126 11:35:38.469551 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/root-account-create-update-zwmkv" podStartSLOduration=1.469528419 podStartE2EDuration="1.469528419s" podCreationTimestamp="2026-01-26 11:35:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 11:35:38.461059842 +0000 UTC m=+1088.159634762" watchObservedRunningTime="2026-01-26 11:35:38.469528419 +0000 UTC m=+1088.168103359" Jan 26 11:35:40 crc kubenswrapper[4867]: I0126 11:35:40.723771 4867 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-hbpxr" podUID="db65f713-855b-4ca7-b989-ebde989474ce" containerName="ovn-controller" probeResult="failure" output=< Jan 26 11:35:40 crc kubenswrapper[4867]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status Jan 26 11:35:40 crc kubenswrapper[4867]: > Jan 26 11:35:41 crc kubenswrapper[4867]: I0126 11:35:41.203744 4867 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="ready" pod="openstack/ovn-northd-0" Jan 26 11:35:42 crc kubenswrapper[4867]: I0126 11:35:42.476835 4867 generic.go:334] "Generic (PLEG): container finished" podID="23253c55-8557-40e6-9be5-6cebf6e5f412" containerID="86e2e54b98e4fa4dd7606192db0e6276fe28d138b45eb93d066c11dec8040c34" exitCode=0 Jan 26 11:35:42 crc kubenswrapper[4867]: I0126 11:35:42.476899 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-zwmkv" event={"ID":"23253c55-8557-40e6-9be5-6cebf6e5f412","Type":"ContainerDied","Data":"86e2e54b98e4fa4dd7606192db0e6276fe28d138b45eb93d066c11dec8040c34"} Jan 26 11:35:45 crc kubenswrapper[4867]: I0126 11:35:45.131257 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/3f128154-6619-4556-be1b-73e44d4f7df1-etc-swift\") pod \"swift-storage-0\" (UID: \"3f128154-6619-4556-be1b-73e44d4f7df1\") " pod="openstack/swift-storage-0" Jan 26 11:35:45 crc kubenswrapper[4867]: I0126 11:35:45.143527 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/3f128154-6619-4556-be1b-73e44d4f7df1-etc-swift\") pod \"swift-storage-0\" (UID: \"3f128154-6619-4556-be1b-73e44d4f7df1\") " pod="openstack/swift-storage-0" Jan 26 11:35:45 crc kubenswrapper[4867]: I0126 11:35:45.274929 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-storage-0" Jan 26 11:35:45 crc kubenswrapper[4867]: I0126 11:35:45.733531 4867 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-hbpxr" podUID="db65f713-855b-4ca7-b989-ebde989474ce" containerName="ovn-controller" probeResult="failure" output=< Jan 26 11:35:45 crc kubenswrapper[4867]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status Jan 26 11:35:45 crc kubenswrapper[4867]: > Jan 26 11:35:45 crc kubenswrapper[4867]: I0126 11:35:45.739313 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-4f5h4" Jan 26 11:35:45 crc kubenswrapper[4867]: I0126 11:35:45.747813 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-4f5h4" Jan 26 11:35:46 crc kubenswrapper[4867]: I0126 11:35:46.115778 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-hbpxr-config-46f2c"] Jan 26 11:35:46 crc kubenswrapper[4867]: I0126 11:35:46.117436 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-hbpxr-config-46f2c" Jan 26 11:35:46 crc kubenswrapper[4867]: I0126 11:35:46.119919 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-extra-scripts" Jan 26 11:35:46 crc kubenswrapper[4867]: I0126 11:35:46.136244 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-hbpxr-config-46f2c"] Jan 26 11:35:46 crc kubenswrapper[4867]: I0126 11:35:46.151517 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/4aa44a3e-a4f9-4bdc-a0a9-eadf3a269b1e-var-run-ovn\") pod \"ovn-controller-hbpxr-config-46f2c\" (UID: \"4aa44a3e-a4f9-4bdc-a0a9-eadf3a269b1e\") " pod="openstack/ovn-controller-hbpxr-config-46f2c" Jan 26 11:35:46 crc kubenswrapper[4867]: I0126 11:35:46.151585 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/4aa44a3e-a4f9-4bdc-a0a9-eadf3a269b1e-scripts\") pod \"ovn-controller-hbpxr-config-46f2c\" (UID: \"4aa44a3e-a4f9-4bdc-a0a9-eadf3a269b1e\") " pod="openstack/ovn-controller-hbpxr-config-46f2c" Jan 26 11:35:46 crc kubenswrapper[4867]: I0126 11:35:46.151625 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/4aa44a3e-a4f9-4bdc-a0a9-eadf3a269b1e-var-log-ovn\") pod \"ovn-controller-hbpxr-config-46f2c\" (UID: \"4aa44a3e-a4f9-4bdc-a0a9-eadf3a269b1e\") " pod="openstack/ovn-controller-hbpxr-config-46f2c" Jan 26 11:35:46 crc kubenswrapper[4867]: I0126 11:35:46.151725 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5zh9w\" (UniqueName: \"kubernetes.io/projected/4aa44a3e-a4f9-4bdc-a0a9-eadf3a269b1e-kube-api-access-5zh9w\") pod \"ovn-controller-hbpxr-config-46f2c\" (UID: 
\"4aa44a3e-a4f9-4bdc-a0a9-eadf3a269b1e\") " pod="openstack/ovn-controller-hbpxr-config-46f2c" Jan 26 11:35:46 crc kubenswrapper[4867]: I0126 11:35:46.151761 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/4aa44a3e-a4f9-4bdc-a0a9-eadf3a269b1e-var-run\") pod \"ovn-controller-hbpxr-config-46f2c\" (UID: \"4aa44a3e-a4f9-4bdc-a0a9-eadf3a269b1e\") " pod="openstack/ovn-controller-hbpxr-config-46f2c" Jan 26 11:35:46 crc kubenswrapper[4867]: I0126 11:35:46.151797 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/4aa44a3e-a4f9-4bdc-a0a9-eadf3a269b1e-additional-scripts\") pod \"ovn-controller-hbpxr-config-46f2c\" (UID: \"4aa44a3e-a4f9-4bdc-a0a9-eadf3a269b1e\") " pod="openstack/ovn-controller-hbpxr-config-46f2c" Jan 26 11:35:46 crc kubenswrapper[4867]: I0126 11:35:46.253176 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5zh9w\" (UniqueName: \"kubernetes.io/projected/4aa44a3e-a4f9-4bdc-a0a9-eadf3a269b1e-kube-api-access-5zh9w\") pod \"ovn-controller-hbpxr-config-46f2c\" (UID: \"4aa44a3e-a4f9-4bdc-a0a9-eadf3a269b1e\") " pod="openstack/ovn-controller-hbpxr-config-46f2c" Jan 26 11:35:46 crc kubenswrapper[4867]: I0126 11:35:46.253275 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/4aa44a3e-a4f9-4bdc-a0a9-eadf3a269b1e-var-run\") pod \"ovn-controller-hbpxr-config-46f2c\" (UID: \"4aa44a3e-a4f9-4bdc-a0a9-eadf3a269b1e\") " pod="openstack/ovn-controller-hbpxr-config-46f2c" Jan 26 11:35:46 crc kubenswrapper[4867]: I0126 11:35:46.253306 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/4aa44a3e-a4f9-4bdc-a0a9-eadf3a269b1e-additional-scripts\") pod 
\"ovn-controller-hbpxr-config-46f2c\" (UID: \"4aa44a3e-a4f9-4bdc-a0a9-eadf3a269b1e\") " pod="openstack/ovn-controller-hbpxr-config-46f2c" Jan 26 11:35:46 crc kubenswrapper[4867]: I0126 11:35:46.253380 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/4aa44a3e-a4f9-4bdc-a0a9-eadf3a269b1e-var-run-ovn\") pod \"ovn-controller-hbpxr-config-46f2c\" (UID: \"4aa44a3e-a4f9-4bdc-a0a9-eadf3a269b1e\") " pod="openstack/ovn-controller-hbpxr-config-46f2c" Jan 26 11:35:46 crc kubenswrapper[4867]: I0126 11:35:46.253409 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/4aa44a3e-a4f9-4bdc-a0a9-eadf3a269b1e-scripts\") pod \"ovn-controller-hbpxr-config-46f2c\" (UID: \"4aa44a3e-a4f9-4bdc-a0a9-eadf3a269b1e\") " pod="openstack/ovn-controller-hbpxr-config-46f2c" Jan 26 11:35:46 crc kubenswrapper[4867]: I0126 11:35:46.253431 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/4aa44a3e-a4f9-4bdc-a0a9-eadf3a269b1e-var-log-ovn\") pod \"ovn-controller-hbpxr-config-46f2c\" (UID: \"4aa44a3e-a4f9-4bdc-a0a9-eadf3a269b1e\") " pod="openstack/ovn-controller-hbpxr-config-46f2c" Jan 26 11:35:46 crc kubenswrapper[4867]: I0126 11:35:46.253733 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/4aa44a3e-a4f9-4bdc-a0a9-eadf3a269b1e-var-log-ovn\") pod \"ovn-controller-hbpxr-config-46f2c\" (UID: \"4aa44a3e-a4f9-4bdc-a0a9-eadf3a269b1e\") " pod="openstack/ovn-controller-hbpxr-config-46f2c" Jan 26 11:35:46 crc kubenswrapper[4867]: I0126 11:35:46.253820 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/4aa44a3e-a4f9-4bdc-a0a9-eadf3a269b1e-var-run\") pod \"ovn-controller-hbpxr-config-46f2c\" (UID: 
\"4aa44a3e-a4f9-4bdc-a0a9-eadf3a269b1e\") " pod="openstack/ovn-controller-hbpxr-config-46f2c" Jan 26 11:35:46 crc kubenswrapper[4867]: I0126 11:35:46.254210 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/4aa44a3e-a4f9-4bdc-a0a9-eadf3a269b1e-additional-scripts\") pod \"ovn-controller-hbpxr-config-46f2c\" (UID: \"4aa44a3e-a4f9-4bdc-a0a9-eadf3a269b1e\") " pod="openstack/ovn-controller-hbpxr-config-46f2c" Jan 26 11:35:46 crc kubenswrapper[4867]: I0126 11:35:46.255436 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/4aa44a3e-a4f9-4bdc-a0a9-eadf3a269b1e-scripts\") pod \"ovn-controller-hbpxr-config-46f2c\" (UID: \"4aa44a3e-a4f9-4bdc-a0a9-eadf3a269b1e\") " pod="openstack/ovn-controller-hbpxr-config-46f2c" Jan 26 11:35:46 crc kubenswrapper[4867]: I0126 11:35:46.255553 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/4aa44a3e-a4f9-4bdc-a0a9-eadf3a269b1e-var-run-ovn\") pod \"ovn-controller-hbpxr-config-46f2c\" (UID: \"4aa44a3e-a4f9-4bdc-a0a9-eadf3a269b1e\") " pod="openstack/ovn-controller-hbpxr-config-46f2c" Jan 26 11:35:46 crc kubenswrapper[4867]: I0126 11:35:46.288820 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5zh9w\" (UniqueName: \"kubernetes.io/projected/4aa44a3e-a4f9-4bdc-a0a9-eadf3a269b1e-kube-api-access-5zh9w\") pod \"ovn-controller-hbpxr-config-46f2c\" (UID: \"4aa44a3e-a4f9-4bdc-a0a9-eadf3a269b1e\") " pod="openstack/ovn-controller-hbpxr-config-46f2c" Jan 26 11:35:46 crc kubenswrapper[4867]: I0126 11:35:46.444022 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-hbpxr-config-46f2c" Jan 26 11:35:47 crc kubenswrapper[4867]: I0126 11:35:47.148460 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Jan 26 11:35:47 crc kubenswrapper[4867]: I0126 11:35:47.462484 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-create-252qd"] Jan 26 11:35:47 crc kubenswrapper[4867]: I0126 11:35:47.464200 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-252qd" Jan 26 11:35:47 crc kubenswrapper[4867]: I0126 11:35:47.472100 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-252qd"] Jan 26 11:35:47 crc kubenswrapper[4867]: I0126 11:35:47.523253 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Jan 26 11:35:47 crc kubenswrapper[4867]: I0126 11:35:47.540041 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-create-zh4k7"] Jan 26 11:35:47 crc kubenswrapper[4867]: I0126 11:35:47.543971 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-create-zh4k7" Jan 26 11:35:47 crc kubenswrapper[4867]: I0126 11:35:47.568943 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-zh4k7"] Jan 26 11:35:47 crc kubenswrapper[4867]: I0126 11:35:47.582647 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0504e2f3-0d4b-46cc-847b-497423d48fcc-operator-scripts\") pod \"cinder-db-create-252qd\" (UID: \"0504e2f3-0d4b-46cc-847b-497423d48fcc\") " pod="openstack/cinder-db-create-252qd" Jan 26 11:35:47 crc kubenswrapper[4867]: I0126 11:35:47.582716 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cskwr\" (UniqueName: \"kubernetes.io/projected/0504e2f3-0d4b-46cc-847b-497423d48fcc-kube-api-access-cskwr\") pod \"cinder-db-create-252qd\" (UID: \"0504e2f3-0d4b-46cc-847b-497423d48fcc\") " pod="openstack/cinder-db-create-252qd" Jan 26 11:35:47 crc kubenswrapper[4867]: I0126 11:35:47.600573 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-0f8e-account-create-update-plnlx"] Jan 26 11:35:47 crc kubenswrapper[4867]: I0126 11:35:47.601939 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-0f8e-account-create-update-plnlx" Jan 26 11:35:47 crc kubenswrapper[4867]: I0126 11:35:47.605522 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-db-secret" Jan 26 11:35:47 crc kubenswrapper[4867]: I0126 11:35:47.617179 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-0f8e-account-create-update-plnlx"] Jan 26 11:35:47 crc kubenswrapper[4867]: I0126 11:35:47.686018 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0504e2f3-0d4b-46cc-847b-497423d48fcc-operator-scripts\") pod \"cinder-db-create-252qd\" (UID: \"0504e2f3-0d4b-46cc-847b-497423d48fcc\") " pod="openstack/cinder-db-create-252qd" Jan 26 11:35:47 crc kubenswrapper[4867]: I0126 11:35:47.686100 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t56rr\" (UniqueName: \"kubernetes.io/projected/5ad3fda5-db71-4cad-b88c-ca0665f64b9d-kube-api-access-t56rr\") pod \"barbican-db-create-zh4k7\" (UID: \"5ad3fda5-db71-4cad-b88c-ca0665f64b9d\") " pod="openstack/barbican-db-create-zh4k7" Jan 26 11:35:47 crc kubenswrapper[4867]: I0126 11:35:47.686127 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cskwr\" (UniqueName: \"kubernetes.io/projected/0504e2f3-0d4b-46cc-847b-497423d48fcc-kube-api-access-cskwr\") pod \"cinder-db-create-252qd\" (UID: \"0504e2f3-0d4b-46cc-847b-497423d48fcc\") " pod="openstack/cinder-db-create-252qd" Jan 26 11:35:47 crc kubenswrapper[4867]: I0126 11:35:47.686204 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5ad3fda5-db71-4cad-b88c-ca0665f64b9d-operator-scripts\") pod \"barbican-db-create-zh4k7\" (UID: \"5ad3fda5-db71-4cad-b88c-ca0665f64b9d\") " 
pod="openstack/barbican-db-create-zh4k7" Jan 26 11:35:47 crc kubenswrapper[4867]: I0126 11:35:47.688422 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0504e2f3-0d4b-46cc-847b-497423d48fcc-operator-scripts\") pod \"cinder-db-create-252qd\" (UID: \"0504e2f3-0d4b-46cc-847b-497423d48fcc\") " pod="openstack/cinder-db-create-252qd" Jan 26 11:35:47 crc kubenswrapper[4867]: I0126 11:35:47.721126 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cskwr\" (UniqueName: \"kubernetes.io/projected/0504e2f3-0d4b-46cc-847b-497423d48fcc-kube-api-access-cskwr\") pod \"cinder-db-create-252qd\" (UID: \"0504e2f3-0d4b-46cc-847b-497423d48fcc\") " pod="openstack/cinder-db-create-252qd" Jan 26 11:35:47 crc kubenswrapper[4867]: I0126 11:35:47.721499 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-ba1b-account-create-update-w22mk"] Jan 26 11:35:47 crc kubenswrapper[4867]: I0126 11:35:47.722835 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-ba1b-account-create-update-w22mk" Jan 26 11:35:47 crc kubenswrapper[4867]: I0126 11:35:47.725590 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-db-secret" Jan 26 11:35:47 crc kubenswrapper[4867]: I0126 11:35:47.726646 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-ba1b-account-create-update-w22mk"] Jan 26 11:35:47 crc kubenswrapper[4867]: I0126 11:35:47.787184 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-create-252qd" Jan 26 11:35:47 crc kubenswrapper[4867]: I0126 11:35:47.788268 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t56rr\" (UniqueName: \"kubernetes.io/projected/5ad3fda5-db71-4cad-b88c-ca0665f64b9d-kube-api-access-t56rr\") pod \"barbican-db-create-zh4k7\" (UID: \"5ad3fda5-db71-4cad-b88c-ca0665f64b9d\") " pod="openstack/barbican-db-create-zh4k7" Jan 26 11:35:47 crc kubenswrapper[4867]: I0126 11:35:47.788344 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5ad3fda5-db71-4cad-b88c-ca0665f64b9d-operator-scripts\") pod \"barbican-db-create-zh4k7\" (UID: \"5ad3fda5-db71-4cad-b88c-ca0665f64b9d\") " pod="openstack/barbican-db-create-zh4k7" Jan 26 11:35:47 crc kubenswrapper[4867]: I0126 11:35:47.788408 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dc696\" (UniqueName: \"kubernetes.io/projected/dd5d8576-e5a4-4afe-b859-3f199ca48359-kube-api-access-dc696\") pod \"barbican-0f8e-account-create-update-plnlx\" (UID: \"dd5d8576-e5a4-4afe-b859-3f199ca48359\") " pod="openstack/barbican-0f8e-account-create-update-plnlx" Jan 26 11:35:47 crc kubenswrapper[4867]: I0126 11:35:47.788433 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dd5d8576-e5a4-4afe-b859-3f199ca48359-operator-scripts\") pod \"barbican-0f8e-account-create-update-plnlx\" (UID: \"dd5d8576-e5a4-4afe-b859-3f199ca48359\") " pod="openstack/barbican-0f8e-account-create-update-plnlx" Jan 26 11:35:47 crc kubenswrapper[4867]: I0126 11:35:47.789400 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5ad3fda5-db71-4cad-b88c-ca0665f64b9d-operator-scripts\") pod 
\"barbican-db-create-zh4k7\" (UID: \"5ad3fda5-db71-4cad-b88c-ca0665f64b9d\") " pod="openstack/barbican-db-create-zh4k7" Jan 26 11:35:47 crc kubenswrapper[4867]: I0126 11:35:47.831314 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t56rr\" (UniqueName: \"kubernetes.io/projected/5ad3fda5-db71-4cad-b88c-ca0665f64b9d-kube-api-access-t56rr\") pod \"barbican-db-create-zh4k7\" (UID: \"5ad3fda5-db71-4cad-b88c-ca0665f64b9d\") " pod="openstack/barbican-db-create-zh4k7" Jan 26 11:35:47 crc kubenswrapper[4867]: I0126 11:35:47.845515 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-create-5d6gw"] Jan 26 11:35:47 crc kubenswrapper[4867]: I0126 11:35:47.846858 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-5d6gw" Jan 26 11:35:47 crc kubenswrapper[4867]: I0126 11:35:47.855372 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-c2b0-account-create-update-ljmf8"] Jan 26 11:35:47 crc kubenswrapper[4867]: I0126 11:35:47.856708 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-c2b0-account-create-update-ljmf8" Jan 26 11:35:47 crc kubenswrapper[4867]: I0126 11:35:47.860389 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-db-secret" Jan 26 11:35:47 crc kubenswrapper[4867]: I0126 11:35:47.865392 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-5d6gw"] Jan 26 11:35:47 crc kubenswrapper[4867]: I0126 11:35:47.867419 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-create-zh4k7" Jan 26 11:35:47 crc kubenswrapper[4867]: I0126 11:35:47.874461 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-c2b0-account-create-update-ljmf8"] Jan 26 11:35:47 crc kubenswrapper[4867]: I0126 11:35:47.890388 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dc696\" (UniqueName: \"kubernetes.io/projected/dd5d8576-e5a4-4afe-b859-3f199ca48359-kube-api-access-dc696\") pod \"barbican-0f8e-account-create-update-plnlx\" (UID: \"dd5d8576-e5a4-4afe-b859-3f199ca48359\") " pod="openstack/barbican-0f8e-account-create-update-plnlx" Jan 26 11:35:47 crc kubenswrapper[4867]: I0126 11:35:47.890434 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dd5d8576-e5a4-4afe-b859-3f199ca48359-operator-scripts\") pod \"barbican-0f8e-account-create-update-plnlx\" (UID: \"dd5d8576-e5a4-4afe-b859-3f199ca48359\") " pod="openstack/barbican-0f8e-account-create-update-plnlx" Jan 26 11:35:47 crc kubenswrapper[4867]: I0126 11:35:47.890470 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e2d7d8be-7aac-4f1c-95a7-25021c4d24ae-operator-scripts\") pod \"cinder-ba1b-account-create-update-w22mk\" (UID: \"e2d7d8be-7aac-4f1c-95a7-25021c4d24ae\") " pod="openstack/cinder-ba1b-account-create-update-w22mk" Jan 26 11:35:47 crc kubenswrapper[4867]: I0126 11:35:47.890492 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wppk4\" (UniqueName: \"kubernetes.io/projected/e2d7d8be-7aac-4f1c-95a7-25021c4d24ae-kube-api-access-wppk4\") pod \"cinder-ba1b-account-create-update-w22mk\" (UID: \"e2d7d8be-7aac-4f1c-95a7-25021c4d24ae\") " pod="openstack/cinder-ba1b-account-create-update-w22mk" Jan 26 11:35:47 crc 
kubenswrapper[4867]: I0126 11:35:47.891573 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dd5d8576-e5a4-4afe-b859-3f199ca48359-operator-scripts\") pod \"barbican-0f8e-account-create-update-plnlx\" (UID: \"dd5d8576-e5a4-4afe-b859-3f199ca48359\") " pod="openstack/barbican-0f8e-account-create-update-plnlx" Jan 26 11:35:47 crc kubenswrapper[4867]: I0126 11:35:47.912141 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dc696\" (UniqueName: \"kubernetes.io/projected/dd5d8576-e5a4-4afe-b859-3f199ca48359-kube-api-access-dc696\") pod \"barbican-0f8e-account-create-update-plnlx\" (UID: \"dd5d8576-e5a4-4afe-b859-3f199ca48359\") " pod="openstack/barbican-0f8e-account-create-update-plnlx" Jan 26 11:35:47 crc kubenswrapper[4867]: I0126 11:35:47.926727 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-0f8e-account-create-update-plnlx" Jan 26 11:35:47 crc kubenswrapper[4867]: I0126 11:35:47.993235 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/255d1723-a5b7-4030-b2a0-4b28ee758717-operator-scripts\") pod \"neutron-c2b0-account-create-update-ljmf8\" (UID: \"255d1723-a5b7-4030-b2a0-4b28ee758717\") " pod="openstack/neutron-c2b0-account-create-update-ljmf8" Jan 26 11:35:47 crc kubenswrapper[4867]: I0126 11:35:47.993428 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g6qzr\" (UniqueName: \"kubernetes.io/projected/255d1723-a5b7-4030-b2a0-4b28ee758717-kube-api-access-g6qzr\") pod \"neutron-c2b0-account-create-update-ljmf8\" (UID: \"255d1723-a5b7-4030-b2a0-4b28ee758717\") " pod="openstack/neutron-c2b0-account-create-update-ljmf8" Jan 26 11:35:47 crc kubenswrapper[4867]: I0126 11:35:47.993505 4867 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r4pxb\" (UniqueName: \"kubernetes.io/projected/3e1ea464-c670-4943-8788-7718c1ebffa2-kube-api-access-r4pxb\") pod \"neutron-db-create-5d6gw\" (UID: \"3e1ea464-c670-4943-8788-7718c1ebffa2\") " pod="openstack/neutron-db-create-5d6gw" Jan 26 11:35:47 crc kubenswrapper[4867]: I0126 11:35:47.993574 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e2d7d8be-7aac-4f1c-95a7-25021c4d24ae-operator-scripts\") pod \"cinder-ba1b-account-create-update-w22mk\" (UID: \"e2d7d8be-7aac-4f1c-95a7-25021c4d24ae\") " pod="openstack/cinder-ba1b-account-create-update-w22mk" Jan 26 11:35:47 crc kubenswrapper[4867]: I0126 11:35:47.993630 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wppk4\" (UniqueName: \"kubernetes.io/projected/e2d7d8be-7aac-4f1c-95a7-25021c4d24ae-kube-api-access-wppk4\") pod \"cinder-ba1b-account-create-update-w22mk\" (UID: \"e2d7d8be-7aac-4f1c-95a7-25021c4d24ae\") " pod="openstack/cinder-ba1b-account-create-update-w22mk" Jan 26 11:35:47 crc kubenswrapper[4867]: I0126 11:35:47.993666 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3e1ea464-c670-4943-8788-7718c1ebffa2-operator-scripts\") pod \"neutron-db-create-5d6gw\" (UID: \"3e1ea464-c670-4943-8788-7718c1ebffa2\") " pod="openstack/neutron-db-create-5d6gw" Jan 26 11:35:47 crc kubenswrapper[4867]: I0126 11:35:47.994395 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e2d7d8be-7aac-4f1c-95a7-25021c4d24ae-operator-scripts\") pod \"cinder-ba1b-account-create-update-w22mk\" (UID: \"e2d7d8be-7aac-4f1c-95a7-25021c4d24ae\") " pod="openstack/cinder-ba1b-account-create-update-w22mk" Jan 26 11:35:48 crc 
kubenswrapper[4867]: I0126 11:35:48.009796 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wppk4\" (UniqueName: \"kubernetes.io/projected/e2d7d8be-7aac-4f1c-95a7-25021c4d24ae-kube-api-access-wppk4\") pod \"cinder-ba1b-account-create-update-w22mk\" (UID: \"e2d7d8be-7aac-4f1c-95a7-25021c4d24ae\") " pod="openstack/cinder-ba1b-account-create-update-w22mk" Jan 26 11:35:48 crc kubenswrapper[4867]: I0126 11:35:48.063945 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-sync-z9wf6"] Jan 26 11:35:48 crc kubenswrapper[4867]: I0126 11:35:48.065004 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-z9wf6" Jan 26 11:35:48 crc kubenswrapper[4867]: I0126 11:35:48.067392 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Jan 26 11:35:48 crc kubenswrapper[4867]: I0126 11:35:48.067427 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-r6w6v" Jan 26 11:35:48 crc kubenswrapper[4867]: I0126 11:35:48.067493 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-ba1b-account-create-update-w22mk" Jan 26 11:35:48 crc kubenswrapper[4867]: I0126 11:35:48.067564 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Jan 26 11:35:48 crc kubenswrapper[4867]: I0126 11:35:48.068468 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Jan 26 11:35:48 crc kubenswrapper[4867]: I0126 11:35:48.080971 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-z9wf6"] Jan 26 11:35:48 crc kubenswrapper[4867]: I0126 11:35:48.101680 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g6qzr\" (UniqueName: \"kubernetes.io/projected/255d1723-a5b7-4030-b2a0-4b28ee758717-kube-api-access-g6qzr\") pod \"neutron-c2b0-account-create-update-ljmf8\" (UID: \"255d1723-a5b7-4030-b2a0-4b28ee758717\") " pod="openstack/neutron-c2b0-account-create-update-ljmf8" Jan 26 11:35:48 crc kubenswrapper[4867]: I0126 11:35:48.101727 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r4pxb\" (UniqueName: \"kubernetes.io/projected/3e1ea464-c670-4943-8788-7718c1ebffa2-kube-api-access-r4pxb\") pod \"neutron-db-create-5d6gw\" (UID: \"3e1ea464-c670-4943-8788-7718c1ebffa2\") " pod="openstack/neutron-db-create-5d6gw" Jan 26 11:35:48 crc kubenswrapper[4867]: I0126 11:35:48.101760 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3e1ea464-c670-4943-8788-7718c1ebffa2-operator-scripts\") pod \"neutron-db-create-5d6gw\" (UID: \"3e1ea464-c670-4943-8788-7718c1ebffa2\") " pod="openstack/neutron-db-create-5d6gw" Jan 26 11:35:48 crc kubenswrapper[4867]: I0126 11:35:48.101848 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/255d1723-a5b7-4030-b2a0-4b28ee758717-operator-scripts\") pod \"neutron-c2b0-account-create-update-ljmf8\" (UID: \"255d1723-a5b7-4030-b2a0-4b28ee758717\") " pod="openstack/neutron-c2b0-account-create-update-ljmf8" Jan 26 11:35:48 crc kubenswrapper[4867]: I0126 11:35:48.102984 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/255d1723-a5b7-4030-b2a0-4b28ee758717-operator-scripts\") pod \"neutron-c2b0-account-create-update-ljmf8\" (UID: \"255d1723-a5b7-4030-b2a0-4b28ee758717\") " pod="openstack/neutron-c2b0-account-create-update-ljmf8" Jan 26 11:35:48 crc kubenswrapper[4867]: I0126 11:35:48.103935 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3e1ea464-c670-4943-8788-7718c1ebffa2-operator-scripts\") pod \"neutron-db-create-5d6gw\" (UID: \"3e1ea464-c670-4943-8788-7718c1ebffa2\") " pod="openstack/neutron-db-create-5d6gw" Jan 26 11:35:48 crc kubenswrapper[4867]: I0126 11:35:48.121842 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g6qzr\" (UniqueName: \"kubernetes.io/projected/255d1723-a5b7-4030-b2a0-4b28ee758717-kube-api-access-g6qzr\") pod \"neutron-c2b0-account-create-update-ljmf8\" (UID: \"255d1723-a5b7-4030-b2a0-4b28ee758717\") " pod="openstack/neutron-c2b0-account-create-update-ljmf8" Jan 26 11:35:48 crc kubenswrapper[4867]: I0126 11:35:48.123275 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r4pxb\" (UniqueName: \"kubernetes.io/projected/3e1ea464-c670-4943-8788-7718c1ebffa2-kube-api-access-r4pxb\") pod \"neutron-db-create-5d6gw\" (UID: \"3e1ea464-c670-4943-8788-7718c1ebffa2\") " pod="openstack/neutron-db-create-5d6gw" Jan 26 11:35:48 crc kubenswrapper[4867]: I0126 11:35:48.186307 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-create-5d6gw" Jan 26 11:35:48 crc kubenswrapper[4867]: I0126 11:35:48.196733 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-c2b0-account-create-update-ljmf8" Jan 26 11:35:48 crc kubenswrapper[4867]: I0126 11:35:48.203358 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k5bjt\" (UniqueName: \"kubernetes.io/projected/054c5880-216a-4d98-bbc3-bc428d09bfe8-kube-api-access-k5bjt\") pod \"keystone-db-sync-z9wf6\" (UID: \"054c5880-216a-4d98-bbc3-bc428d09bfe8\") " pod="openstack/keystone-db-sync-z9wf6" Jan 26 11:35:48 crc kubenswrapper[4867]: I0126 11:35:48.203396 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/054c5880-216a-4d98-bbc3-bc428d09bfe8-config-data\") pod \"keystone-db-sync-z9wf6\" (UID: \"054c5880-216a-4d98-bbc3-bc428d09bfe8\") " pod="openstack/keystone-db-sync-z9wf6" Jan 26 11:35:48 crc kubenswrapper[4867]: I0126 11:35:48.203426 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/054c5880-216a-4d98-bbc3-bc428d09bfe8-combined-ca-bundle\") pod \"keystone-db-sync-z9wf6\" (UID: \"054c5880-216a-4d98-bbc3-bc428d09bfe8\") " pod="openstack/keystone-db-sync-z9wf6" Jan 26 11:35:48 crc kubenswrapper[4867]: I0126 11:35:48.306403 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k5bjt\" (UniqueName: \"kubernetes.io/projected/054c5880-216a-4d98-bbc3-bc428d09bfe8-kube-api-access-k5bjt\") pod \"keystone-db-sync-z9wf6\" (UID: \"054c5880-216a-4d98-bbc3-bc428d09bfe8\") " pod="openstack/keystone-db-sync-z9wf6" Jan 26 11:35:48 crc kubenswrapper[4867]: I0126 11:35:48.306453 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"config-data\" (UniqueName: \"kubernetes.io/secret/054c5880-216a-4d98-bbc3-bc428d09bfe8-config-data\") pod \"keystone-db-sync-z9wf6\" (UID: \"054c5880-216a-4d98-bbc3-bc428d09bfe8\") " pod="openstack/keystone-db-sync-z9wf6" Jan 26 11:35:48 crc kubenswrapper[4867]: I0126 11:35:48.306481 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/054c5880-216a-4d98-bbc3-bc428d09bfe8-combined-ca-bundle\") pod \"keystone-db-sync-z9wf6\" (UID: \"054c5880-216a-4d98-bbc3-bc428d09bfe8\") " pod="openstack/keystone-db-sync-z9wf6" Jan 26 11:35:48 crc kubenswrapper[4867]: I0126 11:35:48.313926 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/054c5880-216a-4d98-bbc3-bc428d09bfe8-combined-ca-bundle\") pod \"keystone-db-sync-z9wf6\" (UID: \"054c5880-216a-4d98-bbc3-bc428d09bfe8\") " pod="openstack/keystone-db-sync-z9wf6" Jan 26 11:35:48 crc kubenswrapper[4867]: I0126 11:35:48.314402 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/054c5880-216a-4d98-bbc3-bc428d09bfe8-config-data\") pod \"keystone-db-sync-z9wf6\" (UID: \"054c5880-216a-4d98-bbc3-bc428d09bfe8\") " pod="openstack/keystone-db-sync-z9wf6" Jan 26 11:35:48 crc kubenswrapper[4867]: I0126 11:35:48.333924 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k5bjt\" (UniqueName: \"kubernetes.io/projected/054c5880-216a-4d98-bbc3-bc428d09bfe8-kube-api-access-k5bjt\") pod \"keystone-db-sync-z9wf6\" (UID: \"054c5880-216a-4d98-bbc3-bc428d09bfe8\") " pod="openstack/keystone-db-sync-z9wf6" Jan 26 11:35:48 crc kubenswrapper[4867]: I0126 11:35:48.386763 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-z9wf6" Jan 26 11:35:50 crc kubenswrapper[4867]: I0126 11:35:50.732841 4867 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-hbpxr" podUID="db65f713-855b-4ca7-b989-ebde989474ce" containerName="ovn-controller" probeResult="failure" output=< Jan 26 11:35:50 crc kubenswrapper[4867]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status Jan 26 11:35:50 crc kubenswrapper[4867]: > Jan 26 11:35:52 crc kubenswrapper[4867]: E0126 11:35:52.013756 4867 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-glance-api:current-podified" Jan 26 11:35:52 crc kubenswrapper[4867]: E0126 11:35:52.014816 4867 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:glance-db-sync,Image:quay.io/podified-antelope-centos9/openstack-glance-api:current-podified,Command:[/bin/bash],Args:[-c 
/usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/glance/glance.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:db-sync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-957kf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42415,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:*42415,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod glance-db-sync-mjdws_openstack(fa78acbb-8b93-4977-8ccf-fc79314b6f2e): ErrImagePull: rpc error: code = Canceled desc = 
copying config: context canceled" logger="UnhandledError" Jan 26 11:35:52 crc kubenswrapper[4867]: E0126 11:35:52.016380 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"glance-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/glance-db-sync-mjdws" podUID="fa78acbb-8b93-4977-8ccf-fc79314b6f2e" Jan 26 11:35:52 crc kubenswrapper[4867]: I0126 11:35:52.070082 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-zwmkv" Jan 26 11:35:52 crc kubenswrapper[4867]: I0126 11:35:52.191885 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kc6bh\" (UniqueName: \"kubernetes.io/projected/23253c55-8557-40e6-9be5-6cebf6e5f412-kube-api-access-kc6bh\") pod \"23253c55-8557-40e6-9be5-6cebf6e5f412\" (UID: \"23253c55-8557-40e6-9be5-6cebf6e5f412\") " Jan 26 11:35:52 crc kubenswrapper[4867]: I0126 11:35:52.191983 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/23253c55-8557-40e6-9be5-6cebf6e5f412-operator-scripts\") pod \"23253c55-8557-40e6-9be5-6cebf6e5f412\" (UID: \"23253c55-8557-40e6-9be5-6cebf6e5f412\") " Jan 26 11:35:52 crc kubenswrapper[4867]: I0126 11:35:52.193349 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/23253c55-8557-40e6-9be5-6cebf6e5f412-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "23253c55-8557-40e6-9be5-6cebf6e5f412" (UID: "23253c55-8557-40e6-9be5-6cebf6e5f412"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 11:35:52 crc kubenswrapper[4867]: I0126 11:35:52.203123 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/23253c55-8557-40e6-9be5-6cebf6e5f412-kube-api-access-kc6bh" (OuterVolumeSpecName: "kube-api-access-kc6bh") pod "23253c55-8557-40e6-9be5-6cebf6e5f412" (UID: "23253c55-8557-40e6-9be5-6cebf6e5f412"). InnerVolumeSpecName "kube-api-access-kc6bh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 11:35:52 crc kubenswrapper[4867]: I0126 11:35:52.296920 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kc6bh\" (UniqueName: \"kubernetes.io/projected/23253c55-8557-40e6-9be5-6cebf6e5f412-kube-api-access-kc6bh\") on node \"crc\" DevicePath \"\"" Jan 26 11:35:52 crc kubenswrapper[4867]: I0126 11:35:52.296967 4867 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/23253c55-8557-40e6-9be5-6cebf6e5f412-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 11:35:52 crc kubenswrapper[4867]: I0126 11:35:52.485833 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-0f8e-account-create-update-plnlx"] Jan 26 11:35:52 crc kubenswrapper[4867]: I0126 11:35:52.585455 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-0f8e-account-create-update-plnlx" event={"ID":"dd5d8576-e5a4-4afe-b859-3f199ca48359","Type":"ContainerStarted","Data":"7ed4915a5686a77d97cc41b059dcd33a318659c6c3d1bbffc1b0e358d9a435f5"} Jan 26 11:35:52 crc kubenswrapper[4867]: I0126 11:35:52.585701 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-zwmkv" event={"ID":"23253c55-8557-40e6-9be5-6cebf6e5f412","Type":"ContainerDied","Data":"bea45778f9efadea336747ddf90f5ec715a6a262969b5bbb0dfac4b1ba7d6a38"} Jan 26 11:35:52 crc kubenswrapper[4867]: I0126 11:35:52.585778 4867 pod_container_deletor.go:80] 
"Container not found in pod's containers" containerID="bea45778f9efadea336747ddf90f5ec715a6a262969b5bbb0dfac4b1ba7d6a38" Jan 26 11:35:52 crc kubenswrapper[4867]: I0126 11:35:52.586337 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-zwmkv" Jan 26 11:35:52 crc kubenswrapper[4867]: E0126 11:35:52.589840 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"glance-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-glance-api:current-podified\\\"\"" pod="openstack/glance-db-sync-mjdws" podUID="fa78acbb-8b93-4977-8ccf-fc79314b6f2e" Jan 26 11:35:53 crc kubenswrapper[4867]: I0126 11:35:53.037417 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-zh4k7"] Jan 26 11:35:53 crc kubenswrapper[4867]: I0126 11:35:53.060398 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-252qd"] Jan 26 11:35:53 crc kubenswrapper[4867]: I0126 11:35:53.075893 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-z9wf6"] Jan 26 11:35:53 crc kubenswrapper[4867]: I0126 11:35:53.091364 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-c2b0-account-create-update-ljmf8"] Jan 26 11:35:53 crc kubenswrapper[4867]: I0126 11:35:53.102335 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-ba1b-account-create-update-w22mk"] Jan 26 11:35:53 crc kubenswrapper[4867]: W0126 11:35:53.109447 4867 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3e1ea464_c670_4943_8788_7718c1ebffa2.slice/crio-dc6d1cabe51a5cc28504440335c6bc9eba4055c481e5a82009356e55b376f047 WatchSource:0}: Error finding container dc6d1cabe51a5cc28504440335c6bc9eba4055c481e5a82009356e55b376f047: Status 404 returned error can't find the container 
with id dc6d1cabe51a5cc28504440335c6bc9eba4055c481e5a82009356e55b376f047 Jan 26 11:35:53 crc kubenswrapper[4867]: I0126 11:35:53.111652 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-5d6gw"] Jan 26 11:35:53 crc kubenswrapper[4867]: I0126 11:35:53.121414 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-hbpxr-config-46f2c"] Jan 26 11:35:53 crc kubenswrapper[4867]: I0126 11:35:53.203597 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"] Jan 26 11:35:53 crc kubenswrapper[4867]: W0126 11:35:53.234373 4867 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3f128154_6619_4556_be1b_73e44d4f7df1.slice/crio-b6de591a3b75e01789abd1859cbde01eed55cc5bbcebce275cb6ff286f472b41 WatchSource:0}: Error finding container b6de591a3b75e01789abd1859cbde01eed55cc5bbcebce275cb6ff286f472b41: Status 404 returned error can't find the container with id b6de591a3b75e01789abd1859cbde01eed55cc5bbcebce275cb6ff286f472b41 Jan 26 11:35:53 crc kubenswrapper[4867]: I0126 11:35:53.384731 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-zwmkv"] Jan 26 11:35:53 crc kubenswrapper[4867]: I0126 11:35:53.412727 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-zwmkv"] Jan 26 11:35:53 crc kubenswrapper[4867]: I0126 11:35:53.595535 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-hbpxr-config-46f2c" event={"ID":"4aa44a3e-a4f9-4bdc-a0a9-eadf3a269b1e","Type":"ContainerStarted","Data":"7cf8a07d48202d0972c6df4a8e95b1455695a9693610e03b02c2887a1bd7b381"} Jan 26 11:35:53 crc kubenswrapper[4867]: I0126 11:35:53.595592 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-hbpxr-config-46f2c" 
event={"ID":"4aa44a3e-a4f9-4bdc-a0a9-eadf3a269b1e","Type":"ContainerStarted","Data":"838229b4b59a748300a0930c7eca70e8d220416995d42d4a2a9839022d46640a"} Jan 26 11:35:53 crc kubenswrapper[4867]: I0126 11:35:53.598275 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"3f128154-6619-4556-be1b-73e44d4f7df1","Type":"ContainerStarted","Data":"b6de591a3b75e01789abd1859cbde01eed55cc5bbcebce275cb6ff286f472b41"} Jan 26 11:35:53 crc kubenswrapper[4867]: I0126 11:35:53.600736 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-5d6gw" event={"ID":"3e1ea464-c670-4943-8788-7718c1ebffa2","Type":"ContainerStarted","Data":"25d1627cd2a644c9c81616c499be120f39117bea26b0a7b01c0cad6271dbd577"} Jan 26 11:35:53 crc kubenswrapper[4867]: I0126 11:35:53.600774 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-5d6gw" event={"ID":"3e1ea464-c670-4943-8788-7718c1ebffa2","Type":"ContainerStarted","Data":"dc6d1cabe51a5cc28504440335c6bc9eba4055c481e5a82009356e55b376f047"} Jan 26 11:35:53 crc kubenswrapper[4867]: I0126 11:35:53.603890 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-z9wf6" event={"ID":"054c5880-216a-4d98-bbc3-bc428d09bfe8","Type":"ContainerStarted","Data":"8500214f1eddaf00317a0e1c018690cb6c363b79e318fe1f321abba08eb5f884"} Jan 26 11:35:53 crc kubenswrapper[4867]: I0126 11:35:53.606283 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-252qd" event={"ID":"0504e2f3-0d4b-46cc-847b-497423d48fcc","Type":"ContainerStarted","Data":"c896b039eb30d5f99f9be2d1a482f83cac315acf2ac8a2ce7d39bd928229b476"} Jan 26 11:35:53 crc kubenswrapper[4867]: I0126 11:35:53.606309 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-252qd" 
event={"ID":"0504e2f3-0d4b-46cc-847b-497423d48fcc","Type":"ContainerStarted","Data":"938804e2a1cece7b590082046da0666c3a24fe12a0ce50c9a156aa19a9785856"} Jan 26 11:35:53 crc kubenswrapper[4867]: I0126 11:35:53.607906 4867 generic.go:334] "Generic (PLEG): container finished" podID="dd5d8576-e5a4-4afe-b859-3f199ca48359" containerID="9b827a2c6ea65163706cdf8f6e73946db57fcbc1f06dc7a5e0eba5084e7fd1ae" exitCode=0 Jan 26 11:35:53 crc kubenswrapper[4867]: I0126 11:35:53.607955 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-0f8e-account-create-update-plnlx" event={"ID":"dd5d8576-e5a4-4afe-b859-3f199ca48359","Type":"ContainerDied","Data":"9b827a2c6ea65163706cdf8f6e73946db57fcbc1f06dc7a5e0eba5084e7fd1ae"} Jan 26 11:35:53 crc kubenswrapper[4867]: I0126 11:35:53.609648 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-c2b0-account-create-update-ljmf8" event={"ID":"255d1723-a5b7-4030-b2a0-4b28ee758717","Type":"ContainerStarted","Data":"fe0c26349fc460e9b1940b8eaf3b6046d26a2164ca191977a1cce6e1f42a7419"} Jan 26 11:35:53 crc kubenswrapper[4867]: I0126 11:35:53.609681 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-c2b0-account-create-update-ljmf8" event={"ID":"255d1723-a5b7-4030-b2a0-4b28ee758717","Type":"ContainerStarted","Data":"56f62eb5a5475b95a6667f4751177de696543956d1c478bae6cf46563de52730"} Jan 26 11:35:53 crc kubenswrapper[4867]: I0126 11:35:53.611524 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-zh4k7" event={"ID":"5ad3fda5-db71-4cad-b88c-ca0665f64b9d","Type":"ContainerStarted","Data":"a674678dfd1454a894bf5548d8d62962409fb2a59015d097a3d44c315fda04ef"} Jan 26 11:35:53 crc kubenswrapper[4867]: I0126 11:35:53.611550 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-zh4k7" 
event={"ID":"5ad3fda5-db71-4cad-b88c-ca0665f64b9d","Type":"ContainerStarted","Data":"7c13abb698f0e4641d08ab581a2d37b431991259799540f882da69a51772397c"} Jan 26 11:35:53 crc kubenswrapper[4867]: I0126 11:35:53.613360 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-ba1b-account-create-update-w22mk" event={"ID":"e2d7d8be-7aac-4f1c-95a7-25021c4d24ae","Type":"ContainerStarted","Data":"fdb2125bd3a23667e6cc7ce47ecc814f688ace17ef47a446b046e25908500e03"} Jan 26 11:35:53 crc kubenswrapper[4867]: I0126 11:35:53.613389 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-ba1b-account-create-update-w22mk" event={"ID":"e2d7d8be-7aac-4f1c-95a7-25021c4d24ae","Type":"ContainerStarted","Data":"02a6b083657c9e3220b9571c8a1a62542adf4273addc6bf7fc5a9202061eb040"} Jan 26 11:35:53 crc kubenswrapper[4867]: I0126 11:35:53.647411 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-hbpxr-config-46f2c" podStartSLOduration=7.647383239 podStartE2EDuration="7.647383239s" podCreationTimestamp="2026-01-26 11:35:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 11:35:53.622630708 +0000 UTC m=+1103.321205618" watchObservedRunningTime="2026-01-26 11:35:53.647383239 +0000 UTC m=+1103.345958169" Jan 26 11:35:53 crc kubenswrapper[4867]: I0126 11:35:53.659324 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-c2b0-account-create-update-ljmf8" podStartSLOduration=6.659302448 podStartE2EDuration="6.659302448s" podCreationTimestamp="2026-01-26 11:35:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 11:35:53.658365073 +0000 UTC m=+1103.356939983" watchObservedRunningTime="2026-01-26 11:35:53.659302448 +0000 UTC m=+1103.357877358" Jan 26 11:35:53 crc 
kubenswrapper[4867]: I0126 11:35:53.681633 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-db-create-5d6gw" podStartSLOduration=6.681617644 podStartE2EDuration="6.681617644s" podCreationTimestamp="2026-01-26 11:35:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 11:35:53.674844133 +0000 UTC m=+1103.373419043" watchObservedRunningTime="2026-01-26 11:35:53.681617644 +0000 UTC m=+1103.380192554" Jan 26 11:35:53 crc kubenswrapper[4867]: I0126 11:35:53.724056 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-db-create-zh4k7" podStartSLOduration=6.724037066 podStartE2EDuration="6.724037066s" podCreationTimestamp="2026-01-26 11:35:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 11:35:53.719626438 +0000 UTC m=+1103.418201348" watchObservedRunningTime="2026-01-26 11:35:53.724037066 +0000 UTC m=+1103.422611976" Jan 26 11:35:54 crc kubenswrapper[4867]: I0126 11:35:54.574913 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="23253c55-8557-40e6-9be5-6cebf6e5f412" path="/var/lib/kubelet/pods/23253c55-8557-40e6-9be5-6cebf6e5f412/volumes" Jan 26 11:35:54 crc kubenswrapper[4867]: I0126 11:35:54.624858 4867 generic.go:334] "Generic (PLEG): container finished" podID="3e1ea464-c670-4943-8788-7718c1ebffa2" containerID="25d1627cd2a644c9c81616c499be120f39117bea26b0a7b01c0cad6271dbd577" exitCode=0 Jan 26 11:35:54 crc kubenswrapper[4867]: I0126 11:35:54.624952 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-5d6gw" event={"ID":"3e1ea464-c670-4943-8788-7718c1ebffa2","Type":"ContainerDied","Data":"25d1627cd2a644c9c81616c499be120f39117bea26b0a7b01c0cad6271dbd577"} Jan 26 11:35:54 crc kubenswrapper[4867]: I0126 11:35:54.628707 4867 
generic.go:334] "Generic (PLEG): container finished" podID="4aa44a3e-a4f9-4bdc-a0a9-eadf3a269b1e" containerID="7cf8a07d48202d0972c6df4a8e95b1455695a9693610e03b02c2887a1bd7b381" exitCode=0 Jan 26 11:35:54 crc kubenswrapper[4867]: I0126 11:35:54.628751 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-hbpxr-config-46f2c" event={"ID":"4aa44a3e-a4f9-4bdc-a0a9-eadf3a269b1e","Type":"ContainerDied","Data":"7cf8a07d48202d0972c6df4a8e95b1455695a9693610e03b02c2887a1bd7b381"} Jan 26 11:35:54 crc kubenswrapper[4867]: I0126 11:35:54.630400 4867 generic.go:334] "Generic (PLEG): container finished" podID="0504e2f3-0d4b-46cc-847b-497423d48fcc" containerID="c896b039eb30d5f99f9be2d1a482f83cac315acf2ac8a2ce7d39bd928229b476" exitCode=0 Jan 26 11:35:54 crc kubenswrapper[4867]: I0126 11:35:54.630468 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-252qd" event={"ID":"0504e2f3-0d4b-46cc-847b-497423d48fcc","Type":"ContainerDied","Data":"c896b039eb30d5f99f9be2d1a482f83cac315acf2ac8a2ce7d39bd928229b476"} Jan 26 11:35:54 crc kubenswrapper[4867]: I0126 11:35:54.631834 4867 generic.go:334] "Generic (PLEG): container finished" podID="255d1723-a5b7-4030-b2a0-4b28ee758717" containerID="fe0c26349fc460e9b1940b8eaf3b6046d26a2164ca191977a1cce6e1f42a7419" exitCode=0 Jan 26 11:35:54 crc kubenswrapper[4867]: I0126 11:35:54.631907 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-c2b0-account-create-update-ljmf8" event={"ID":"255d1723-a5b7-4030-b2a0-4b28ee758717","Type":"ContainerDied","Data":"fe0c26349fc460e9b1940b8eaf3b6046d26a2164ca191977a1cce6e1f42a7419"} Jan 26 11:35:54 crc kubenswrapper[4867]: I0126 11:35:54.633123 4867 generic.go:334] "Generic (PLEG): container finished" podID="5ad3fda5-db71-4cad-b88c-ca0665f64b9d" containerID="a674678dfd1454a894bf5548d8d62962409fb2a59015d097a3d44c315fda04ef" exitCode=0 Jan 26 11:35:54 crc kubenswrapper[4867]: I0126 11:35:54.633167 4867 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openstack/barbican-db-create-zh4k7" event={"ID":"5ad3fda5-db71-4cad-b88c-ca0665f64b9d","Type":"ContainerDied","Data":"a674678dfd1454a894bf5548d8d62962409fb2a59015d097a3d44c315fda04ef"} Jan 26 11:35:54 crc kubenswrapper[4867]: I0126 11:35:54.636086 4867 generic.go:334] "Generic (PLEG): container finished" podID="e2d7d8be-7aac-4f1c-95a7-25021c4d24ae" containerID="fdb2125bd3a23667e6cc7ce47ecc814f688ace17ef47a446b046e25908500e03" exitCode=0 Jan 26 11:35:54 crc kubenswrapper[4867]: I0126 11:35:54.636322 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-ba1b-account-create-update-w22mk" event={"ID":"e2d7d8be-7aac-4f1c-95a7-25021c4d24ae","Type":"ContainerDied","Data":"fdb2125bd3a23667e6cc7ce47ecc814f688ace17ef47a446b046e25908500e03"} Jan 26 11:35:54 crc kubenswrapper[4867]: I0126 11:35:54.647967 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-ba1b-account-create-update-w22mk" podStartSLOduration=7.64794051 podStartE2EDuration="7.64794051s" podCreationTimestamp="2026-01-26 11:35:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 11:35:53.74178178 +0000 UTC m=+1103.440356690" watchObservedRunningTime="2026-01-26 11:35:54.64794051 +0000 UTC m=+1104.346515420" Jan 26 11:35:55 crc kubenswrapper[4867]: I0126 11:35:55.106390 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-252qd" Jan 26 11:35:55 crc kubenswrapper[4867]: I0126 11:35:55.115396 4867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-0f8e-account-create-update-plnlx" Jan 26 11:35:55 crc kubenswrapper[4867]: I0126 11:35:55.261797 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0504e2f3-0d4b-46cc-847b-497423d48fcc-operator-scripts\") pod \"0504e2f3-0d4b-46cc-847b-497423d48fcc\" (UID: \"0504e2f3-0d4b-46cc-847b-497423d48fcc\") " Jan 26 11:35:55 crc kubenswrapper[4867]: I0126 11:35:55.262038 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dc696\" (UniqueName: \"kubernetes.io/projected/dd5d8576-e5a4-4afe-b859-3f199ca48359-kube-api-access-dc696\") pod \"dd5d8576-e5a4-4afe-b859-3f199ca48359\" (UID: \"dd5d8576-e5a4-4afe-b859-3f199ca48359\") " Jan 26 11:35:55 crc kubenswrapper[4867]: I0126 11:35:55.262148 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cskwr\" (UniqueName: \"kubernetes.io/projected/0504e2f3-0d4b-46cc-847b-497423d48fcc-kube-api-access-cskwr\") pod \"0504e2f3-0d4b-46cc-847b-497423d48fcc\" (UID: \"0504e2f3-0d4b-46cc-847b-497423d48fcc\") " Jan 26 11:35:55 crc kubenswrapper[4867]: I0126 11:35:55.262178 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dd5d8576-e5a4-4afe-b859-3f199ca48359-operator-scripts\") pod \"dd5d8576-e5a4-4afe-b859-3f199ca48359\" (UID: \"dd5d8576-e5a4-4afe-b859-3f199ca48359\") " Jan 26 11:35:55 crc kubenswrapper[4867]: I0126 11:35:55.263423 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dd5d8576-e5a4-4afe-b859-3f199ca48359-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "dd5d8576-e5a4-4afe-b859-3f199ca48359" (UID: "dd5d8576-e5a4-4afe-b859-3f199ca48359"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 11:35:55 crc kubenswrapper[4867]: I0126 11:35:55.264175 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0504e2f3-0d4b-46cc-847b-497423d48fcc-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "0504e2f3-0d4b-46cc-847b-497423d48fcc" (UID: "0504e2f3-0d4b-46cc-847b-497423d48fcc"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 11:35:55 crc kubenswrapper[4867]: I0126 11:35:55.278522 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0504e2f3-0d4b-46cc-847b-497423d48fcc-kube-api-access-cskwr" (OuterVolumeSpecName: "kube-api-access-cskwr") pod "0504e2f3-0d4b-46cc-847b-497423d48fcc" (UID: "0504e2f3-0d4b-46cc-847b-497423d48fcc"). InnerVolumeSpecName "kube-api-access-cskwr". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 11:35:55 crc kubenswrapper[4867]: I0126 11:35:55.286053 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dd5d8576-e5a4-4afe-b859-3f199ca48359-kube-api-access-dc696" (OuterVolumeSpecName: "kube-api-access-dc696") pod "dd5d8576-e5a4-4afe-b859-3f199ca48359" (UID: "dd5d8576-e5a4-4afe-b859-3f199ca48359"). InnerVolumeSpecName "kube-api-access-dc696". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 11:35:55 crc kubenswrapper[4867]: I0126 11:35:55.364539 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cskwr\" (UniqueName: \"kubernetes.io/projected/0504e2f3-0d4b-46cc-847b-497423d48fcc-kube-api-access-cskwr\") on node \"crc\" DevicePath \"\"" Jan 26 11:35:55 crc kubenswrapper[4867]: I0126 11:35:55.364647 4867 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dd5d8576-e5a4-4afe-b859-3f199ca48359-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 11:35:55 crc kubenswrapper[4867]: I0126 11:35:55.364682 4867 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0504e2f3-0d4b-46cc-847b-497423d48fcc-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 11:35:55 crc kubenswrapper[4867]: I0126 11:35:55.364691 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dc696\" (UniqueName: \"kubernetes.io/projected/dd5d8576-e5a4-4afe-b859-3f199ca48359-kube-api-access-dc696\") on node \"crc\" DevicePath \"\"" Jan 26 11:35:55 crc kubenswrapper[4867]: I0126 11:35:55.644968 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-252qd" event={"ID":"0504e2f3-0d4b-46cc-847b-497423d48fcc","Type":"ContainerDied","Data":"938804e2a1cece7b590082046da0666c3a24fe12a0ce50c9a156aa19a9785856"} Jan 26 11:35:55 crc kubenswrapper[4867]: I0126 11:35:55.645060 4867 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="938804e2a1cece7b590082046da0666c3a24fe12a0ce50c9a156aa19a9785856" Jan 26 11:35:55 crc kubenswrapper[4867]: I0126 11:35:55.644989 4867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-create-252qd" Jan 26 11:35:55 crc kubenswrapper[4867]: I0126 11:35:55.647023 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-0f8e-account-create-update-plnlx" event={"ID":"dd5d8576-e5a4-4afe-b859-3f199ca48359","Type":"ContainerDied","Data":"7ed4915a5686a77d97cc41b059dcd33a318659c6c3d1bbffc1b0e358d9a435f5"} Jan 26 11:35:55 crc kubenswrapper[4867]: I0126 11:35:55.647071 4867 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7ed4915a5686a77d97cc41b059dcd33a318659c6c3d1bbffc1b0e358d9a435f5" Jan 26 11:35:55 crc kubenswrapper[4867]: I0126 11:35:55.647074 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-0f8e-account-create-update-plnlx" Jan 26 11:35:55 crc kubenswrapper[4867]: I0126 11:35:55.749666 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-hbpxr" Jan 26 11:35:57 crc kubenswrapper[4867]: I0126 11:35:57.225655 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-s9d6v"] Jan 26 11:35:57 crc kubenswrapper[4867]: E0126 11:35:57.226642 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0504e2f3-0d4b-46cc-847b-497423d48fcc" containerName="mariadb-database-create" Jan 26 11:35:57 crc kubenswrapper[4867]: I0126 11:35:57.226666 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="0504e2f3-0d4b-46cc-847b-497423d48fcc" containerName="mariadb-database-create" Jan 26 11:35:57 crc kubenswrapper[4867]: E0126 11:35:57.226695 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dd5d8576-e5a4-4afe-b859-3f199ca48359" containerName="mariadb-account-create-update" Jan 26 11:35:57 crc kubenswrapper[4867]: I0126 11:35:57.226703 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="dd5d8576-e5a4-4afe-b859-3f199ca48359" containerName="mariadb-account-create-update" Jan 26 
11:35:57 crc kubenswrapper[4867]: E0126 11:35:57.226752 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="23253c55-8557-40e6-9be5-6cebf6e5f412" containerName="mariadb-account-create-update" Jan 26 11:35:57 crc kubenswrapper[4867]: I0126 11:35:57.226764 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="23253c55-8557-40e6-9be5-6cebf6e5f412" containerName="mariadb-account-create-update" Jan 26 11:35:57 crc kubenswrapper[4867]: I0126 11:35:57.226985 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="23253c55-8557-40e6-9be5-6cebf6e5f412" containerName="mariadb-account-create-update" Jan 26 11:35:57 crc kubenswrapper[4867]: I0126 11:35:57.227001 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="0504e2f3-0d4b-46cc-847b-497423d48fcc" containerName="mariadb-database-create" Jan 26 11:35:57 crc kubenswrapper[4867]: I0126 11:35:57.227014 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="dd5d8576-e5a4-4afe-b859-3f199ca48359" containerName="mariadb-account-create-update" Jan 26 11:35:57 crc kubenswrapper[4867]: I0126 11:35:57.227803 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-s9d6v" Jan 26 11:35:57 crc kubenswrapper[4867]: I0126 11:35:57.228311 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-s9d6v"] Jan 26 11:35:57 crc kubenswrapper[4867]: I0126 11:35:57.231287 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-mariadb-root-db-secret" Jan 26 11:35:57 crc kubenswrapper[4867]: I0126 11:35:57.314310 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jlpqk\" (UniqueName: \"kubernetes.io/projected/0d501c78-55c7-4cda-b585-2b58737107aa-kube-api-access-jlpqk\") pod \"root-account-create-update-s9d6v\" (UID: \"0d501c78-55c7-4cda-b585-2b58737107aa\") " pod="openstack/root-account-create-update-s9d6v" Jan 26 11:35:57 crc kubenswrapper[4867]: I0126 11:35:57.314480 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0d501c78-55c7-4cda-b585-2b58737107aa-operator-scripts\") pod \"root-account-create-update-s9d6v\" (UID: \"0d501c78-55c7-4cda-b585-2b58737107aa\") " pod="openstack/root-account-create-update-s9d6v" Jan 26 11:35:57 crc kubenswrapper[4867]: I0126 11:35:57.416268 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0d501c78-55c7-4cda-b585-2b58737107aa-operator-scripts\") pod \"root-account-create-update-s9d6v\" (UID: \"0d501c78-55c7-4cda-b585-2b58737107aa\") " pod="openstack/root-account-create-update-s9d6v" Jan 26 11:35:57 crc kubenswrapper[4867]: I0126 11:35:57.416345 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jlpqk\" (UniqueName: \"kubernetes.io/projected/0d501c78-55c7-4cda-b585-2b58737107aa-kube-api-access-jlpqk\") pod \"root-account-create-update-s9d6v\" (UID: 
\"0d501c78-55c7-4cda-b585-2b58737107aa\") " pod="openstack/root-account-create-update-s9d6v" Jan 26 11:35:57 crc kubenswrapper[4867]: I0126 11:35:57.417109 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0d501c78-55c7-4cda-b585-2b58737107aa-operator-scripts\") pod \"root-account-create-update-s9d6v\" (UID: \"0d501c78-55c7-4cda-b585-2b58737107aa\") " pod="openstack/root-account-create-update-s9d6v" Jan 26 11:35:57 crc kubenswrapper[4867]: I0126 11:35:57.448195 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jlpqk\" (UniqueName: \"kubernetes.io/projected/0d501c78-55c7-4cda-b585-2b58737107aa-kube-api-access-jlpqk\") pod \"root-account-create-update-s9d6v\" (UID: \"0d501c78-55c7-4cda-b585-2b58737107aa\") " pod="openstack/root-account-create-update-s9d6v" Jan 26 11:35:57 crc kubenswrapper[4867]: I0126 11:35:57.552847 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-s9d6v" Jan 26 11:35:58 crc kubenswrapper[4867]: I0126 11:35:58.439326 4867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-c2b0-account-create-update-ljmf8" Jan 26 11:35:58 crc kubenswrapper[4867]: I0126 11:35:58.543828 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g6qzr\" (UniqueName: \"kubernetes.io/projected/255d1723-a5b7-4030-b2a0-4b28ee758717-kube-api-access-g6qzr\") pod \"255d1723-a5b7-4030-b2a0-4b28ee758717\" (UID: \"255d1723-a5b7-4030-b2a0-4b28ee758717\") " Jan 26 11:35:58 crc kubenswrapper[4867]: I0126 11:35:58.544113 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/255d1723-a5b7-4030-b2a0-4b28ee758717-operator-scripts\") pod \"255d1723-a5b7-4030-b2a0-4b28ee758717\" (UID: \"255d1723-a5b7-4030-b2a0-4b28ee758717\") " Jan 26 11:35:58 crc kubenswrapper[4867]: I0126 11:35:58.545011 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/255d1723-a5b7-4030-b2a0-4b28ee758717-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "255d1723-a5b7-4030-b2a0-4b28ee758717" (UID: "255d1723-a5b7-4030-b2a0-4b28ee758717"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 11:35:58 crc kubenswrapper[4867]: I0126 11:35:58.551808 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/255d1723-a5b7-4030-b2a0-4b28ee758717-kube-api-access-g6qzr" (OuterVolumeSpecName: "kube-api-access-g6qzr") pod "255d1723-a5b7-4030-b2a0-4b28ee758717" (UID: "255d1723-a5b7-4030-b2a0-4b28ee758717"). InnerVolumeSpecName "kube-api-access-g6qzr". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 11:35:58 crc kubenswrapper[4867]: I0126 11:35:58.627283 4867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-ba1b-account-create-update-w22mk" Jan 26 11:35:58 crc kubenswrapper[4867]: I0126 11:35:58.653714 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g6qzr\" (UniqueName: \"kubernetes.io/projected/255d1723-a5b7-4030-b2a0-4b28ee758717-kube-api-access-g6qzr\") on node \"crc\" DevicePath \"\"" Jan 26 11:35:58 crc kubenswrapper[4867]: I0126 11:35:58.654101 4867 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/255d1723-a5b7-4030-b2a0-4b28ee758717-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 11:35:58 crc kubenswrapper[4867]: I0126 11:35:58.679164 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-hbpxr-config-46f2c" event={"ID":"4aa44a3e-a4f9-4bdc-a0a9-eadf3a269b1e","Type":"ContainerDied","Data":"838229b4b59a748300a0930c7eca70e8d220416995d42d4a2a9839022d46640a"} Jan 26 11:35:58 crc kubenswrapper[4867]: I0126 11:35:58.679209 4867 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="838229b4b59a748300a0930c7eca70e8d220416995d42d4a2a9839022d46640a" Jan 26 11:35:58 crc kubenswrapper[4867]: I0126 11:35:58.680737 4867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-c2b0-account-create-update-ljmf8" Jan 26 11:35:58 crc kubenswrapper[4867]: I0126 11:35:58.684667 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-c2b0-account-create-update-ljmf8" event={"ID":"255d1723-a5b7-4030-b2a0-4b28ee758717","Type":"ContainerDied","Data":"56f62eb5a5475b95a6667f4751177de696543956d1c478bae6cf46563de52730"} Jan 26 11:35:58 crc kubenswrapper[4867]: I0126 11:35:58.684717 4867 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="56f62eb5a5475b95a6667f4751177de696543956d1c478bae6cf46563de52730" Jan 26 11:35:58 crc kubenswrapper[4867]: I0126 11:35:58.686789 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-zh4k7" event={"ID":"5ad3fda5-db71-4cad-b88c-ca0665f64b9d","Type":"ContainerDied","Data":"7c13abb698f0e4641d08ab581a2d37b431991259799540f882da69a51772397c"} Jan 26 11:35:58 crc kubenswrapper[4867]: I0126 11:35:58.686812 4867 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7c13abb698f0e4641d08ab581a2d37b431991259799540f882da69a51772397c" Jan 26 11:35:58 crc kubenswrapper[4867]: I0126 11:35:58.689114 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-ba1b-account-create-update-w22mk" event={"ID":"e2d7d8be-7aac-4f1c-95a7-25021c4d24ae","Type":"ContainerDied","Data":"02a6b083657c9e3220b9571c8a1a62542adf4273addc6bf7fc5a9202061eb040"} Jan 26 11:35:58 crc kubenswrapper[4867]: I0126 11:35:58.689145 4867 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="02a6b083657c9e3220b9571c8a1a62542adf4273addc6bf7fc5a9202061eb040" Jan 26 11:35:58 crc kubenswrapper[4867]: I0126 11:35:58.689185 4867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-ba1b-account-create-update-w22mk" Jan 26 11:35:58 crc kubenswrapper[4867]: I0126 11:35:58.690760 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-5d6gw" event={"ID":"3e1ea464-c670-4943-8788-7718c1ebffa2","Type":"ContainerDied","Data":"dc6d1cabe51a5cc28504440335c6bc9eba4055c481e5a82009356e55b376f047"} Jan 26 11:35:58 crc kubenswrapper[4867]: I0126 11:35:58.690801 4867 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="dc6d1cabe51a5cc28504440335c6bc9eba4055c481e5a82009356e55b376f047" Jan 26 11:35:58 crc kubenswrapper[4867]: I0126 11:35:58.701990 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-5d6gw" Jan 26 11:35:58 crc kubenswrapper[4867]: I0126 11:35:58.731509 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-hbpxr-config-46f2c" Jan 26 11:35:58 crc kubenswrapper[4867]: I0126 11:35:58.744878 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-zh4k7" Jan 26 11:35:58 crc kubenswrapper[4867]: I0126 11:35:58.755583 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e2d7d8be-7aac-4f1c-95a7-25021c4d24ae-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "e2d7d8be-7aac-4f1c-95a7-25021c4d24ae" (UID: "e2d7d8be-7aac-4f1c-95a7-25021c4d24ae"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 11:35:58 crc kubenswrapper[4867]: I0126 11:35:58.755105 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e2d7d8be-7aac-4f1c-95a7-25021c4d24ae-operator-scripts\") pod \"e2d7d8be-7aac-4f1c-95a7-25021c4d24ae\" (UID: \"e2d7d8be-7aac-4f1c-95a7-25021c4d24ae\") " Jan 26 11:35:58 crc kubenswrapper[4867]: I0126 11:35:58.756511 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wppk4\" (UniqueName: \"kubernetes.io/projected/e2d7d8be-7aac-4f1c-95a7-25021c4d24ae-kube-api-access-wppk4\") pod \"e2d7d8be-7aac-4f1c-95a7-25021c4d24ae\" (UID: \"e2d7d8be-7aac-4f1c-95a7-25021c4d24ae\") " Jan 26 11:35:58 crc kubenswrapper[4867]: I0126 11:35:58.758013 4867 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e2d7d8be-7aac-4f1c-95a7-25021c4d24ae-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 11:35:58 crc kubenswrapper[4867]: I0126 11:35:58.764364 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e2d7d8be-7aac-4f1c-95a7-25021c4d24ae-kube-api-access-wppk4" (OuterVolumeSpecName: "kube-api-access-wppk4") pod "e2d7d8be-7aac-4f1c-95a7-25021c4d24ae" (UID: "e2d7d8be-7aac-4f1c-95a7-25021c4d24ae"). InnerVolumeSpecName "kube-api-access-wppk4". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 11:35:58 crc kubenswrapper[4867]: I0126 11:35:58.865793 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/4aa44a3e-a4f9-4bdc-a0a9-eadf3a269b1e-var-run\") pod \"4aa44a3e-a4f9-4bdc-a0a9-eadf3a269b1e\" (UID: \"4aa44a3e-a4f9-4bdc-a0a9-eadf3a269b1e\") " Jan 26 11:35:58 crc kubenswrapper[4867]: I0126 11:35:58.866160 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5zh9w\" (UniqueName: \"kubernetes.io/projected/4aa44a3e-a4f9-4bdc-a0a9-eadf3a269b1e-kube-api-access-5zh9w\") pod \"4aa44a3e-a4f9-4bdc-a0a9-eadf3a269b1e\" (UID: \"4aa44a3e-a4f9-4bdc-a0a9-eadf3a269b1e\") " Jan 26 11:35:58 crc kubenswrapper[4867]: I0126 11:35:58.866462 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3e1ea464-c670-4943-8788-7718c1ebffa2-operator-scripts\") pod \"3e1ea464-c670-4943-8788-7718c1ebffa2\" (UID: \"3e1ea464-c670-4943-8788-7718c1ebffa2\") " Jan 26 11:35:58 crc kubenswrapper[4867]: I0126 11:35:58.866575 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/4aa44a3e-a4f9-4bdc-a0a9-eadf3a269b1e-additional-scripts\") pod \"4aa44a3e-a4f9-4bdc-a0a9-eadf3a269b1e\" (UID: \"4aa44a3e-a4f9-4bdc-a0a9-eadf3a269b1e\") " Jan 26 11:35:58 crc kubenswrapper[4867]: I0126 11:35:58.866652 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/4aa44a3e-a4f9-4bdc-a0a9-eadf3a269b1e-var-log-ovn\") pod \"4aa44a3e-a4f9-4bdc-a0a9-eadf3a269b1e\" (UID: \"4aa44a3e-a4f9-4bdc-a0a9-eadf3a269b1e\") " Jan 26 11:35:58 crc kubenswrapper[4867]: I0126 11:35:58.866777 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"kube-api-access-r4pxb\" (UniqueName: \"kubernetes.io/projected/3e1ea464-c670-4943-8788-7718c1ebffa2-kube-api-access-r4pxb\") pod \"3e1ea464-c670-4943-8788-7718c1ebffa2\" (UID: \"3e1ea464-c670-4943-8788-7718c1ebffa2\") " Jan 26 11:35:58 crc kubenswrapper[4867]: I0126 11:35:58.866886 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5ad3fda5-db71-4cad-b88c-ca0665f64b9d-operator-scripts\") pod \"5ad3fda5-db71-4cad-b88c-ca0665f64b9d\" (UID: \"5ad3fda5-db71-4cad-b88c-ca0665f64b9d\") " Jan 26 11:35:58 crc kubenswrapper[4867]: I0126 11:35:58.866979 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/4aa44a3e-a4f9-4bdc-a0a9-eadf3a269b1e-scripts\") pod \"4aa44a3e-a4f9-4bdc-a0a9-eadf3a269b1e\" (UID: \"4aa44a3e-a4f9-4bdc-a0a9-eadf3a269b1e\") " Jan 26 11:35:58 crc kubenswrapper[4867]: I0126 11:35:58.867102 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t56rr\" (UniqueName: \"kubernetes.io/projected/5ad3fda5-db71-4cad-b88c-ca0665f64b9d-kube-api-access-t56rr\") pod \"5ad3fda5-db71-4cad-b88c-ca0665f64b9d\" (UID: \"5ad3fda5-db71-4cad-b88c-ca0665f64b9d\") " Jan 26 11:35:58 crc kubenswrapper[4867]: I0126 11:35:58.867190 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/4aa44a3e-a4f9-4bdc-a0a9-eadf3a269b1e-var-run-ovn\") pod \"4aa44a3e-a4f9-4bdc-a0a9-eadf3a269b1e\" (UID: \"4aa44a3e-a4f9-4bdc-a0a9-eadf3a269b1e\") " Jan 26 11:35:58 crc kubenswrapper[4867]: I0126 11:35:58.868358 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wppk4\" (UniqueName: \"kubernetes.io/projected/e2d7d8be-7aac-4f1c-95a7-25021c4d24ae-kube-api-access-wppk4\") on node \"crc\" DevicePath \"\"" Jan 26 11:35:58 crc kubenswrapper[4867]: I0126 11:35:58.866056 4867 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4aa44a3e-a4f9-4bdc-a0a9-eadf3a269b1e-var-run" (OuterVolumeSpecName: "var-run") pod "4aa44a3e-a4f9-4bdc-a0a9-eadf3a269b1e" (UID: "4aa44a3e-a4f9-4bdc-a0a9-eadf3a269b1e"). InnerVolumeSpecName "var-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 11:35:58 crc kubenswrapper[4867]: I0126 11:35:58.868530 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4aa44a3e-a4f9-4bdc-a0a9-eadf3a269b1e-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "4aa44a3e-a4f9-4bdc-a0a9-eadf3a269b1e" (UID: "4aa44a3e-a4f9-4bdc-a0a9-eadf3a269b1e"). InnerVolumeSpecName "var-run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 11:35:58 crc kubenswrapper[4867]: I0126 11:35:58.869192 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3e1ea464-c670-4943-8788-7718c1ebffa2-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "3e1ea464-c670-4943-8788-7718c1ebffa2" (UID: "3e1ea464-c670-4943-8788-7718c1ebffa2"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 11:35:58 crc kubenswrapper[4867]: I0126 11:35:58.869238 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4aa44a3e-a4f9-4bdc-a0a9-eadf3a269b1e-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "4aa44a3e-a4f9-4bdc-a0a9-eadf3a269b1e" (UID: "4aa44a3e-a4f9-4bdc-a0a9-eadf3a269b1e"). InnerVolumeSpecName "var-log-ovn". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 11:35:58 crc kubenswrapper[4867]: I0126 11:35:58.870052 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5ad3fda5-db71-4cad-b88c-ca0665f64b9d-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "5ad3fda5-db71-4cad-b88c-ca0665f64b9d" (UID: "5ad3fda5-db71-4cad-b88c-ca0665f64b9d"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 11:35:58 crc kubenswrapper[4867]: I0126 11:35:58.871391 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4aa44a3e-a4f9-4bdc-a0a9-eadf3a269b1e-additional-scripts" (OuterVolumeSpecName: "additional-scripts") pod "4aa44a3e-a4f9-4bdc-a0a9-eadf3a269b1e" (UID: "4aa44a3e-a4f9-4bdc-a0a9-eadf3a269b1e"). InnerVolumeSpecName "additional-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 11:35:58 crc kubenswrapper[4867]: I0126 11:35:58.871669 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4aa44a3e-a4f9-4bdc-a0a9-eadf3a269b1e-scripts" (OuterVolumeSpecName: "scripts") pod "4aa44a3e-a4f9-4bdc-a0a9-eadf3a269b1e" (UID: "4aa44a3e-a4f9-4bdc-a0a9-eadf3a269b1e"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 11:35:58 crc kubenswrapper[4867]: I0126 11:35:58.874799 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4aa44a3e-a4f9-4bdc-a0a9-eadf3a269b1e-kube-api-access-5zh9w" (OuterVolumeSpecName: "kube-api-access-5zh9w") pod "4aa44a3e-a4f9-4bdc-a0a9-eadf3a269b1e" (UID: "4aa44a3e-a4f9-4bdc-a0a9-eadf3a269b1e"). InnerVolumeSpecName "kube-api-access-5zh9w". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 11:35:58 crc kubenswrapper[4867]: I0126 11:35:58.875572 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5ad3fda5-db71-4cad-b88c-ca0665f64b9d-kube-api-access-t56rr" (OuterVolumeSpecName: "kube-api-access-t56rr") pod "5ad3fda5-db71-4cad-b88c-ca0665f64b9d" (UID: "5ad3fda5-db71-4cad-b88c-ca0665f64b9d"). InnerVolumeSpecName "kube-api-access-t56rr". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 11:35:58 crc kubenswrapper[4867]: I0126 11:35:58.881101 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3e1ea464-c670-4943-8788-7718c1ebffa2-kube-api-access-r4pxb" (OuterVolumeSpecName: "kube-api-access-r4pxb") pod "3e1ea464-c670-4943-8788-7718c1ebffa2" (UID: "3e1ea464-c670-4943-8788-7718c1ebffa2"). InnerVolumeSpecName "kube-api-access-r4pxb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 11:35:58 crc kubenswrapper[4867]: I0126 11:35:58.893180 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-s9d6v"] Jan 26 11:35:58 crc kubenswrapper[4867]: I0126 11:35:58.969987 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t56rr\" (UniqueName: \"kubernetes.io/projected/5ad3fda5-db71-4cad-b88c-ca0665f64b9d-kube-api-access-t56rr\") on node \"crc\" DevicePath \"\"" Jan 26 11:35:58 crc kubenswrapper[4867]: I0126 11:35:58.970027 4867 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/4aa44a3e-a4f9-4bdc-a0a9-eadf3a269b1e-var-run-ovn\") on node \"crc\" DevicePath \"\"" Jan 26 11:35:58 crc kubenswrapper[4867]: I0126 11:35:58.970040 4867 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/4aa44a3e-a4f9-4bdc-a0a9-eadf3a269b1e-var-run\") on node \"crc\" DevicePath \"\"" Jan 26 11:35:58 crc kubenswrapper[4867]: I0126 
11:35:58.970050 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5zh9w\" (UniqueName: \"kubernetes.io/projected/4aa44a3e-a4f9-4bdc-a0a9-eadf3a269b1e-kube-api-access-5zh9w\") on node \"crc\" DevicePath \"\"" Jan 26 11:35:58 crc kubenswrapper[4867]: I0126 11:35:58.970058 4867 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3e1ea464-c670-4943-8788-7718c1ebffa2-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 11:35:58 crc kubenswrapper[4867]: I0126 11:35:58.970066 4867 reconciler_common.go:293] "Volume detached for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/4aa44a3e-a4f9-4bdc-a0a9-eadf3a269b1e-additional-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 11:35:58 crc kubenswrapper[4867]: I0126 11:35:58.970074 4867 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/4aa44a3e-a4f9-4bdc-a0a9-eadf3a269b1e-var-log-ovn\") on node \"crc\" DevicePath \"\"" Jan 26 11:35:58 crc kubenswrapper[4867]: I0126 11:35:58.970081 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r4pxb\" (UniqueName: \"kubernetes.io/projected/3e1ea464-c670-4943-8788-7718c1ebffa2-kube-api-access-r4pxb\") on node \"crc\" DevicePath \"\"" Jan 26 11:35:58 crc kubenswrapper[4867]: I0126 11:35:58.970089 4867 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5ad3fda5-db71-4cad-b88c-ca0665f64b9d-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 11:35:58 crc kubenswrapper[4867]: I0126 11:35:58.970097 4867 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/4aa44a3e-a4f9-4bdc-a0a9-eadf3a269b1e-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 11:35:59 crc kubenswrapper[4867]: I0126 11:35:59.700129 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/swift-storage-0" event={"ID":"3f128154-6619-4556-be1b-73e44d4f7df1","Type":"ContainerStarted","Data":"54f34e91ba6a3ec2d12733518ebcd3134b5bc15776f981603240785199af3314"} Jan 26 11:35:59 crc kubenswrapper[4867]: I0126 11:35:59.700550 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"3f128154-6619-4556-be1b-73e44d4f7df1","Type":"ContainerStarted","Data":"4190a81849d3792aeb60d0d65157993af6423c39892a29da852380b51a805b00"} Jan 26 11:35:59 crc kubenswrapper[4867]: I0126 11:35:59.700563 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"3f128154-6619-4556-be1b-73e44d4f7df1","Type":"ContainerStarted","Data":"7e3102e53ad7404ede7b86dbf60d000e948abd4931c80c2d39877ac3af0a0350"} Jan 26 11:35:59 crc kubenswrapper[4867]: I0126 11:35:59.700572 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"3f128154-6619-4556-be1b-73e44d4f7df1","Type":"ContainerStarted","Data":"ad18c546383d29675038b63826363276ec5d2534af95f68e2c5e846055d12f14"} Jan 26 11:35:59 crc kubenswrapper[4867]: I0126 11:35:59.703101 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-z9wf6" event={"ID":"054c5880-216a-4d98-bbc3-bc428d09bfe8","Type":"ContainerStarted","Data":"0252fce5ead19c0f5e16679f900c227b753cc720eab192b88836c3211860171c"} Jan 26 11:35:59 crc kubenswrapper[4867]: I0126 11:35:59.708969 4867 generic.go:334] "Generic (PLEG): container finished" podID="0d501c78-55c7-4cda-b585-2b58737107aa" containerID="a9c354102fc6d6247e89bfbae0426a7614397d890a101cbc42fa3d0240e344b0" exitCode=0 Jan 26 11:35:59 crc kubenswrapper[4867]: I0126 11:35:59.709070 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-s9d6v" event={"ID":"0d501c78-55c7-4cda-b585-2b58737107aa","Type":"ContainerDied","Data":"a9c354102fc6d6247e89bfbae0426a7614397d890a101cbc42fa3d0240e344b0"} Jan 26 11:35:59 crc 
kubenswrapper[4867]: I0126 11:35:59.709106 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-hbpxr-config-46f2c" Jan 26 11:35:59 crc kubenswrapper[4867]: I0126 11:35:59.709106 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-s9d6v" event={"ID":"0d501c78-55c7-4cda-b585-2b58737107aa","Type":"ContainerStarted","Data":"b69460539c93b74df9e32cb152e829b2de2093b531eeb13376c024a87397b9c5"} Jan 26 11:35:59 crc kubenswrapper[4867]: I0126 11:35:59.719480 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-db-sync-z9wf6" podStartSLOduration=6.3960987639999995 podStartE2EDuration="11.71946423s" podCreationTimestamp="2026-01-26 11:35:48 +0000 UTC" firstStartedPulling="2026-01-26 11:35:53.07637942 +0000 UTC m=+1102.774954330" lastFinishedPulling="2026-01-26 11:35:58.399744886 +0000 UTC m=+1108.098319796" observedRunningTime="2026-01-26 11:35:59.719353267 +0000 UTC m=+1109.417928177" watchObservedRunningTime="2026-01-26 11:35:59.71946423 +0000 UTC m=+1109.418039140" Jan 26 11:35:59 crc kubenswrapper[4867]: I0126 11:35:59.726325 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-zh4k7" Jan 26 11:35:59 crc kubenswrapper[4867]: I0126 11:35:59.726390 4867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-create-5d6gw" Jan 26 11:35:59 crc kubenswrapper[4867]: I0126 11:35:59.860782 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-hbpxr-config-46f2c"] Jan 26 11:35:59 crc kubenswrapper[4867]: I0126 11:35:59.870146 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-hbpxr-config-46f2c"] Jan 26 11:36:00 crc kubenswrapper[4867]: I0126 11:36:00.573245 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4aa44a3e-a4f9-4bdc-a0a9-eadf3a269b1e" path="/var/lib/kubelet/pods/4aa44a3e-a4f9-4bdc-a0a9-eadf3a269b1e/volumes" Jan 26 11:36:00 crc kubenswrapper[4867]: I0126 11:36:00.744519 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"3f128154-6619-4556-be1b-73e44d4f7df1","Type":"ContainerStarted","Data":"103bed65406c9f02f4f04cb2f8813fb503bad5eb6618ea627b3f1a6c7c0b2520"} Jan 26 11:36:00 crc kubenswrapper[4867]: I0126 11:36:00.970461 4867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-s9d6v" Jan 26 11:36:01 crc kubenswrapper[4867]: I0126 11:36:01.109596 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0d501c78-55c7-4cda-b585-2b58737107aa-operator-scripts\") pod \"0d501c78-55c7-4cda-b585-2b58737107aa\" (UID: \"0d501c78-55c7-4cda-b585-2b58737107aa\") " Jan 26 11:36:01 crc kubenswrapper[4867]: I0126 11:36:01.109785 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jlpqk\" (UniqueName: \"kubernetes.io/projected/0d501c78-55c7-4cda-b585-2b58737107aa-kube-api-access-jlpqk\") pod \"0d501c78-55c7-4cda-b585-2b58737107aa\" (UID: \"0d501c78-55c7-4cda-b585-2b58737107aa\") " Jan 26 11:36:01 crc kubenswrapper[4867]: I0126 11:36:01.111507 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0d501c78-55c7-4cda-b585-2b58737107aa-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "0d501c78-55c7-4cda-b585-2b58737107aa" (UID: "0d501c78-55c7-4cda-b585-2b58737107aa"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 11:36:01 crc kubenswrapper[4867]: I0126 11:36:01.114583 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0d501c78-55c7-4cda-b585-2b58737107aa-kube-api-access-jlpqk" (OuterVolumeSpecName: "kube-api-access-jlpqk") pod "0d501c78-55c7-4cda-b585-2b58737107aa" (UID: "0d501c78-55c7-4cda-b585-2b58737107aa"). InnerVolumeSpecName "kube-api-access-jlpqk". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 11:36:01 crc kubenswrapper[4867]: I0126 11:36:01.211774 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jlpqk\" (UniqueName: \"kubernetes.io/projected/0d501c78-55c7-4cda-b585-2b58737107aa-kube-api-access-jlpqk\") on node \"crc\" DevicePath \"\"" Jan 26 11:36:01 crc kubenswrapper[4867]: I0126 11:36:01.211822 4867 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0d501c78-55c7-4cda-b585-2b58737107aa-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 11:36:01 crc kubenswrapper[4867]: I0126 11:36:01.764512 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-s9d6v" event={"ID":"0d501c78-55c7-4cda-b585-2b58737107aa","Type":"ContainerDied","Data":"b69460539c93b74df9e32cb152e829b2de2093b531eeb13376c024a87397b9c5"} Jan 26 11:36:01 crc kubenswrapper[4867]: I0126 11:36:01.764875 4867 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b69460539c93b74df9e32cb152e829b2de2093b531eeb13376c024a87397b9c5" Jan 26 11:36:01 crc kubenswrapper[4867]: I0126 11:36:01.764809 4867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-s9d6v" Jan 26 11:36:01 crc kubenswrapper[4867]: I0126 11:36:01.770261 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"3f128154-6619-4556-be1b-73e44d4f7df1","Type":"ContainerStarted","Data":"4b31f3c01bb4a1987a1aabeb8d0116224daed520ea9d14b889358b714733513b"} Jan 26 11:36:01 crc kubenswrapper[4867]: I0126 11:36:01.770312 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"3f128154-6619-4556-be1b-73e44d4f7df1","Type":"ContainerStarted","Data":"a71581a2dde27e9a4005c91210d07b6a68e1828d0c812e87d192af809d9c08bc"} Jan 26 11:36:01 crc kubenswrapper[4867]: I0126 11:36:01.770324 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"3f128154-6619-4556-be1b-73e44d4f7df1","Type":"ContainerStarted","Data":"4dc3b5481fbda3357936d14a8f053f3b805528e728f2d68f88718e8ea935b674"} Jan 26 11:36:03 crc kubenswrapper[4867]: I0126 11:36:03.395581 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-s9d6v"] Jan 26 11:36:03 crc kubenswrapper[4867]: I0126 11:36:03.401580 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-s9d6v"] Jan 26 11:36:04 crc kubenswrapper[4867]: I0126 11:36:04.574832 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0d501c78-55c7-4cda-b585-2b58737107aa" path="/var/lib/kubelet/pods/0d501c78-55c7-4cda-b585-2b58737107aa/volumes" Jan 26 11:36:05 crc kubenswrapper[4867]: I0126 11:36:05.804542 4867 generic.go:334] "Generic (PLEG): container finished" podID="054c5880-216a-4d98-bbc3-bc428d09bfe8" containerID="0252fce5ead19c0f5e16679f900c227b753cc720eab192b88836c3211860171c" exitCode=0 Jan 26 11:36:05 crc kubenswrapper[4867]: I0126 11:36:05.804714 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-z9wf6" 
event={"ID":"054c5880-216a-4d98-bbc3-bc428d09bfe8","Type":"ContainerDied","Data":"0252fce5ead19c0f5e16679f900c227b753cc720eab192b88836c3211860171c"} Jan 26 11:36:06 crc kubenswrapper[4867]: I0126 11:36:06.294382 4867 patch_prober.go:28] interesting pod/machine-config-daemon-g6cth container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 11:36:06 crc kubenswrapper[4867]: I0126 11:36:06.294701 4867 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-g6cth" podUID="115cad9f-057f-4e63-b408-8fa7a358a191" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 11:36:06 crc kubenswrapper[4867]: I0126 11:36:06.826071 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-mjdws" event={"ID":"fa78acbb-8b93-4977-8ccf-fc79314b6f2e","Type":"ContainerStarted","Data":"1651081e3e71989b143ba29eb5321a628e4ecf50a4caa7578e4fa1cc3dd87ad3"} Jan 26 11:36:06 crc kubenswrapper[4867]: I0126 11:36:06.842633 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"3f128154-6619-4556-be1b-73e44d4f7df1","Type":"ContainerStarted","Data":"8d8b7c0d83fb6d174fff79b2f4b4598ea0aa00ba2ac29d4314ee0eab52cbc09d"} Jan 26 11:36:06 crc kubenswrapper[4867]: I0126 11:36:06.842689 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"3f128154-6619-4556-be1b-73e44d4f7df1","Type":"ContainerStarted","Data":"95e887c7507b3545aa01e8f8092d66aa0c04113c29d4c1f0a7fbd0dcf0d5e1ff"} Jan 26 11:36:06 crc kubenswrapper[4867]: I0126 11:36:06.842705 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" 
event={"ID":"3f128154-6619-4556-be1b-73e44d4f7df1","Type":"ContainerStarted","Data":"f230802f8102c18b7e9f006e4e324926739f175205aa73cf667999be10b3f698"} Jan 26 11:36:06 crc kubenswrapper[4867]: I0126 11:36:06.856866 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-db-sync-mjdws" podStartSLOduration=2.151589777 podStartE2EDuration="31.856841792s" podCreationTimestamp="2026-01-26 11:35:35 +0000 UTC" firstStartedPulling="2026-01-26 11:35:36.437883556 +0000 UTC m=+1086.136458466" lastFinishedPulling="2026-01-26 11:36:06.143135571 +0000 UTC m=+1115.841710481" observedRunningTime="2026-01-26 11:36:06.851716255 +0000 UTC m=+1116.550291195" watchObservedRunningTime="2026-01-26 11:36:06.856841792 +0000 UTC m=+1116.555416702" Jan 26 11:36:07 crc kubenswrapper[4867]: I0126 11:36:07.121478 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-z9wf6" Jan 26 11:36:07 crc kubenswrapper[4867]: I0126 11:36:07.219692 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/054c5880-216a-4d98-bbc3-bc428d09bfe8-config-data\") pod \"054c5880-216a-4d98-bbc3-bc428d09bfe8\" (UID: \"054c5880-216a-4d98-bbc3-bc428d09bfe8\") " Jan 26 11:36:07 crc kubenswrapper[4867]: I0126 11:36:07.219987 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k5bjt\" (UniqueName: \"kubernetes.io/projected/054c5880-216a-4d98-bbc3-bc428d09bfe8-kube-api-access-k5bjt\") pod \"054c5880-216a-4d98-bbc3-bc428d09bfe8\" (UID: \"054c5880-216a-4d98-bbc3-bc428d09bfe8\") " Jan 26 11:36:07 crc kubenswrapper[4867]: I0126 11:36:07.220086 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/054c5880-216a-4d98-bbc3-bc428d09bfe8-combined-ca-bundle\") pod \"054c5880-216a-4d98-bbc3-bc428d09bfe8\" (UID: 
\"054c5880-216a-4d98-bbc3-bc428d09bfe8\") " Jan 26 11:36:07 crc kubenswrapper[4867]: I0126 11:36:07.230446 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/054c5880-216a-4d98-bbc3-bc428d09bfe8-kube-api-access-k5bjt" (OuterVolumeSpecName: "kube-api-access-k5bjt") pod "054c5880-216a-4d98-bbc3-bc428d09bfe8" (UID: "054c5880-216a-4d98-bbc3-bc428d09bfe8"). InnerVolumeSpecName "kube-api-access-k5bjt". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 11:36:07 crc kubenswrapper[4867]: I0126 11:36:07.246022 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/054c5880-216a-4d98-bbc3-bc428d09bfe8-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "054c5880-216a-4d98-bbc3-bc428d09bfe8" (UID: "054c5880-216a-4d98-bbc3-bc428d09bfe8"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 11:36:07 crc kubenswrapper[4867]: I0126 11:36:07.266359 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/054c5880-216a-4d98-bbc3-bc428d09bfe8-config-data" (OuterVolumeSpecName: "config-data") pod "054c5880-216a-4d98-bbc3-bc428d09bfe8" (UID: "054c5880-216a-4d98-bbc3-bc428d09bfe8"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 11:36:07 crc kubenswrapper[4867]: I0126 11:36:07.321709 4867 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/054c5880-216a-4d98-bbc3-bc428d09bfe8-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 11:36:07 crc kubenswrapper[4867]: I0126 11:36:07.321747 4867 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/054c5880-216a-4d98-bbc3-bc428d09bfe8-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 11:36:07 crc kubenswrapper[4867]: I0126 11:36:07.321756 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k5bjt\" (UniqueName: \"kubernetes.io/projected/054c5880-216a-4d98-bbc3-bc428d09bfe8-kube-api-access-k5bjt\") on node \"crc\" DevicePath \"\"" Jan 26 11:36:07 crc kubenswrapper[4867]: I0126 11:36:07.869454 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"3f128154-6619-4556-be1b-73e44d4f7df1","Type":"ContainerStarted","Data":"5178c0e0048dc29635fa39454f5cf99036e4be4f84deec83e362b891deaf63cd"} Jan 26 11:36:07 crc kubenswrapper[4867]: I0126 11:36:07.871008 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-z9wf6" event={"ID":"054c5880-216a-4d98-bbc3-bc428d09bfe8","Type":"ContainerDied","Data":"8500214f1eddaf00317a0e1c018690cb6c363b79e318fe1f321abba08eb5f884"} Jan 26 11:36:07 crc kubenswrapper[4867]: I0126 11:36:07.871031 4867 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8500214f1eddaf00317a0e1c018690cb6c363b79e318fe1f321abba08eb5f884" Jan 26 11:36:07 crc kubenswrapper[4867]: I0126 11:36:07.871089 4867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-z9wf6" Jan 26 11:36:08 crc kubenswrapper[4867]: I0126 11:36:08.099005 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5c9d85d47c-wmtrd"] Jan 26 11:36:08 crc kubenswrapper[4867]: E0126 11:36:08.099469 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e2d7d8be-7aac-4f1c-95a7-25021c4d24ae" containerName="mariadb-account-create-update" Jan 26 11:36:08 crc kubenswrapper[4867]: I0126 11:36:08.099490 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="e2d7d8be-7aac-4f1c-95a7-25021c4d24ae" containerName="mariadb-account-create-update" Jan 26 11:36:08 crc kubenswrapper[4867]: E0126 11:36:08.099509 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5ad3fda5-db71-4cad-b88c-ca0665f64b9d" containerName="mariadb-database-create" Jan 26 11:36:08 crc kubenswrapper[4867]: I0126 11:36:08.099518 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="5ad3fda5-db71-4cad-b88c-ca0665f64b9d" containerName="mariadb-database-create" Jan 26 11:36:08 crc kubenswrapper[4867]: E0126 11:36:08.099537 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4aa44a3e-a4f9-4bdc-a0a9-eadf3a269b1e" containerName="ovn-config" Jan 26 11:36:08 crc kubenswrapper[4867]: I0126 11:36:08.099545 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="4aa44a3e-a4f9-4bdc-a0a9-eadf3a269b1e" containerName="ovn-config" Jan 26 11:36:08 crc kubenswrapper[4867]: E0126 11:36:08.099567 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="054c5880-216a-4d98-bbc3-bc428d09bfe8" containerName="keystone-db-sync" Jan 26 11:36:08 crc kubenswrapper[4867]: I0126 11:36:08.099577 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="054c5880-216a-4d98-bbc3-bc428d09bfe8" containerName="keystone-db-sync" Jan 26 11:36:08 crc kubenswrapper[4867]: E0126 11:36:08.099587 4867 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="255d1723-a5b7-4030-b2a0-4b28ee758717" containerName="mariadb-account-create-update" Jan 26 11:36:08 crc kubenswrapper[4867]: I0126 11:36:08.099596 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="255d1723-a5b7-4030-b2a0-4b28ee758717" containerName="mariadb-account-create-update" Jan 26 11:36:08 crc kubenswrapper[4867]: E0126 11:36:08.099608 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0d501c78-55c7-4cda-b585-2b58737107aa" containerName="mariadb-account-create-update" Jan 26 11:36:08 crc kubenswrapper[4867]: I0126 11:36:08.099616 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="0d501c78-55c7-4cda-b585-2b58737107aa" containerName="mariadb-account-create-update" Jan 26 11:36:08 crc kubenswrapper[4867]: E0126 11:36:08.099630 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3e1ea464-c670-4943-8788-7718c1ebffa2" containerName="mariadb-database-create" Jan 26 11:36:08 crc kubenswrapper[4867]: I0126 11:36:08.099638 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="3e1ea464-c670-4943-8788-7718c1ebffa2" containerName="mariadb-database-create" Jan 26 11:36:08 crc kubenswrapper[4867]: I0126 11:36:08.099827 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="5ad3fda5-db71-4cad-b88c-ca0665f64b9d" containerName="mariadb-database-create" Jan 26 11:36:08 crc kubenswrapper[4867]: I0126 11:36:08.099845 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="e2d7d8be-7aac-4f1c-95a7-25021c4d24ae" containerName="mariadb-account-create-update" Jan 26 11:36:08 crc kubenswrapper[4867]: I0126 11:36:08.099864 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="054c5880-216a-4d98-bbc3-bc428d09bfe8" containerName="keystone-db-sync" Jan 26 11:36:08 crc kubenswrapper[4867]: I0126 11:36:08.099879 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="0d501c78-55c7-4cda-b585-2b58737107aa" containerName="mariadb-account-create-update" Jan 26 
11:36:08 crc kubenswrapper[4867]: I0126 11:36:08.099891 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="255d1723-a5b7-4030-b2a0-4b28ee758717" containerName="mariadb-account-create-update" Jan 26 11:36:08 crc kubenswrapper[4867]: I0126 11:36:08.099904 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="3e1ea464-c670-4943-8788-7718c1ebffa2" containerName="mariadb-database-create" Jan 26 11:36:08 crc kubenswrapper[4867]: I0126 11:36:08.099916 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="4aa44a3e-a4f9-4bdc-a0a9-eadf3a269b1e" containerName="ovn-config" Jan 26 11:36:08 crc kubenswrapper[4867]: I0126 11:36:08.100966 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5c9d85d47c-wmtrd" Jan 26 11:36:08 crc kubenswrapper[4867]: I0126 11:36:08.118056 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5c9d85d47c-wmtrd"] Jan 26 11:36:08 crc kubenswrapper[4867]: I0126 11:36:08.144716 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-qqwcf"] Jan 26 11:36:08 crc kubenswrapper[4867]: I0126 11:36:08.146133 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-qqwcf" Jan 26 11:36:08 crc kubenswrapper[4867]: I0126 11:36:08.159239 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Jan 26 11:36:08 crc kubenswrapper[4867]: I0126 11:36:08.159701 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Jan 26 11:36:08 crc kubenswrapper[4867]: I0126 11:36:08.159814 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Jan 26 11:36:08 crc kubenswrapper[4867]: I0126 11:36:08.159485 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Jan 26 11:36:08 crc kubenswrapper[4867]: I0126 11:36:08.160544 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-r6w6v" Jan 26 11:36:08 crc kubenswrapper[4867]: I0126 11:36:08.182757 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-qqwcf"] Jan 26 11:36:08 crc kubenswrapper[4867]: I0126 11:36:08.236271 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/71f33dd3-8154-43e2-8047-fdb18dce9330-config\") pod \"dnsmasq-dns-5c9d85d47c-wmtrd\" (UID: \"71f33dd3-8154-43e2-8047-fdb18dce9330\") " pod="openstack/dnsmasq-dns-5c9d85d47c-wmtrd" Jan 26 11:36:08 crc kubenswrapper[4867]: I0126 11:36:08.236329 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/71f33dd3-8154-43e2-8047-fdb18dce9330-ovsdbserver-sb\") pod \"dnsmasq-dns-5c9d85d47c-wmtrd\" (UID: \"71f33dd3-8154-43e2-8047-fdb18dce9330\") " pod="openstack/dnsmasq-dns-5c9d85d47c-wmtrd" Jan 26 11:36:08 crc kubenswrapper[4867]: I0126 11:36:08.236402 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"kube-api-access-bmlg7\" (UniqueName: \"kubernetes.io/projected/71f33dd3-8154-43e2-8047-fdb18dce9330-kube-api-access-bmlg7\") pod \"dnsmasq-dns-5c9d85d47c-wmtrd\" (UID: \"71f33dd3-8154-43e2-8047-fdb18dce9330\") " pod="openstack/dnsmasq-dns-5c9d85d47c-wmtrd" Jan 26 11:36:08 crc kubenswrapper[4867]: I0126 11:36:08.236451 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/71f33dd3-8154-43e2-8047-fdb18dce9330-dns-svc\") pod \"dnsmasq-dns-5c9d85d47c-wmtrd\" (UID: \"71f33dd3-8154-43e2-8047-fdb18dce9330\") " pod="openstack/dnsmasq-dns-5c9d85d47c-wmtrd" Jan 26 11:36:08 crc kubenswrapper[4867]: I0126 11:36:08.236497 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/71f33dd3-8154-43e2-8047-fdb18dce9330-ovsdbserver-nb\") pod \"dnsmasq-dns-5c9d85d47c-wmtrd\" (UID: \"71f33dd3-8154-43e2-8047-fdb18dce9330\") " pod="openstack/dnsmasq-dns-5c9d85d47c-wmtrd" Jan 26 11:36:08 crc kubenswrapper[4867]: I0126 11:36:08.300778 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ironic-db-create-n8k7h"] Jan 26 11:36:08 crc kubenswrapper[4867]: I0126 11:36:08.301857 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ironic-db-create-n8k7h" Jan 26 11:36:08 crc kubenswrapper[4867]: I0126 11:36:08.310125 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ironic-db-create-n8k7h"] Jan 26 11:36:08 crc kubenswrapper[4867]: I0126 11:36:08.338206 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/71f33dd3-8154-43e2-8047-fdb18dce9330-config\") pod \"dnsmasq-dns-5c9d85d47c-wmtrd\" (UID: \"71f33dd3-8154-43e2-8047-fdb18dce9330\") " pod="openstack/dnsmasq-dns-5c9d85d47c-wmtrd" Jan 26 11:36:08 crc kubenswrapper[4867]: I0126 11:36:08.338259 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/71f33dd3-8154-43e2-8047-fdb18dce9330-ovsdbserver-sb\") pod \"dnsmasq-dns-5c9d85d47c-wmtrd\" (UID: \"71f33dd3-8154-43e2-8047-fdb18dce9330\") " pod="openstack/dnsmasq-dns-5c9d85d47c-wmtrd" Jan 26 11:36:08 crc kubenswrapper[4867]: I0126 11:36:08.338293 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bmlg7\" (UniqueName: \"kubernetes.io/projected/71f33dd3-8154-43e2-8047-fdb18dce9330-kube-api-access-bmlg7\") pod \"dnsmasq-dns-5c9d85d47c-wmtrd\" (UID: \"71f33dd3-8154-43e2-8047-fdb18dce9330\") " pod="openstack/dnsmasq-dns-5c9d85d47c-wmtrd" Jan 26 11:36:08 crc kubenswrapper[4867]: I0126 11:36:08.338335 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/493bdea3-8931-4319-a465-16c9a4329881-scripts\") pod \"keystone-bootstrap-qqwcf\" (UID: \"493bdea3-8931-4319-a465-16c9a4329881\") " pod="openstack/keystone-bootstrap-qqwcf" Jan 26 11:36:08 crc kubenswrapper[4867]: I0126 11:36:08.338354 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/493bdea3-8931-4319-a465-16c9a4329881-config-data\") pod \"keystone-bootstrap-qqwcf\" (UID: \"493bdea3-8931-4319-a465-16c9a4329881\") " pod="openstack/keystone-bootstrap-qqwcf" Jan 26 11:36:08 crc kubenswrapper[4867]: I0126 11:36:08.338380 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/71f33dd3-8154-43e2-8047-fdb18dce9330-dns-svc\") pod \"dnsmasq-dns-5c9d85d47c-wmtrd\" (UID: \"71f33dd3-8154-43e2-8047-fdb18dce9330\") " pod="openstack/dnsmasq-dns-5c9d85d47c-wmtrd" Jan 26 11:36:08 crc kubenswrapper[4867]: I0126 11:36:08.338407 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7d82r\" (UniqueName: \"kubernetes.io/projected/493bdea3-8931-4319-a465-16c9a4329881-kube-api-access-7d82r\") pod \"keystone-bootstrap-qqwcf\" (UID: \"493bdea3-8931-4319-a465-16c9a4329881\") " pod="openstack/keystone-bootstrap-qqwcf" Jan 26 11:36:08 crc kubenswrapper[4867]: I0126 11:36:08.338436 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/71f33dd3-8154-43e2-8047-fdb18dce9330-ovsdbserver-nb\") pod \"dnsmasq-dns-5c9d85d47c-wmtrd\" (UID: \"71f33dd3-8154-43e2-8047-fdb18dce9330\") " pod="openstack/dnsmasq-dns-5c9d85d47c-wmtrd" Jan 26 11:36:08 crc kubenswrapper[4867]: I0126 11:36:08.338497 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/493bdea3-8931-4319-a465-16c9a4329881-credential-keys\") pod \"keystone-bootstrap-qqwcf\" (UID: \"493bdea3-8931-4319-a465-16c9a4329881\") " pod="openstack/keystone-bootstrap-qqwcf" Jan 26 11:36:08 crc kubenswrapper[4867]: I0126 11:36:08.338520 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/493bdea3-8931-4319-a465-16c9a4329881-combined-ca-bundle\") pod \"keystone-bootstrap-qqwcf\" (UID: \"493bdea3-8931-4319-a465-16c9a4329881\") " pod="openstack/keystone-bootstrap-qqwcf" Jan 26 11:36:08 crc kubenswrapper[4867]: I0126 11:36:08.338541 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/493bdea3-8931-4319-a465-16c9a4329881-fernet-keys\") pod \"keystone-bootstrap-qqwcf\" (UID: \"493bdea3-8931-4319-a465-16c9a4329881\") " pod="openstack/keystone-bootstrap-qqwcf" Jan 26 11:36:08 crc kubenswrapper[4867]: I0126 11:36:08.339399 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/71f33dd3-8154-43e2-8047-fdb18dce9330-config\") pod \"dnsmasq-dns-5c9d85d47c-wmtrd\" (UID: \"71f33dd3-8154-43e2-8047-fdb18dce9330\") " pod="openstack/dnsmasq-dns-5c9d85d47c-wmtrd" Jan 26 11:36:08 crc kubenswrapper[4867]: I0126 11:36:08.339908 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/71f33dd3-8154-43e2-8047-fdb18dce9330-ovsdbserver-sb\") pod \"dnsmasq-dns-5c9d85d47c-wmtrd\" (UID: \"71f33dd3-8154-43e2-8047-fdb18dce9330\") " pod="openstack/dnsmasq-dns-5c9d85d47c-wmtrd" Jan 26 11:36:08 crc kubenswrapper[4867]: I0126 11:36:08.340673 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/71f33dd3-8154-43e2-8047-fdb18dce9330-dns-svc\") pod \"dnsmasq-dns-5c9d85d47c-wmtrd\" (UID: \"71f33dd3-8154-43e2-8047-fdb18dce9330\") " pod="openstack/dnsmasq-dns-5c9d85d47c-wmtrd" Jan 26 11:36:08 crc kubenswrapper[4867]: I0126 11:36:08.341164 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/71f33dd3-8154-43e2-8047-fdb18dce9330-ovsdbserver-nb\") pod \"dnsmasq-dns-5c9d85d47c-wmtrd\" 
(UID: \"71f33dd3-8154-43e2-8047-fdb18dce9330\") " pod="openstack/dnsmasq-dns-5c9d85d47c-wmtrd" Jan 26 11:36:08 crc kubenswrapper[4867]: I0126 11:36:08.372124 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 26 11:36:08 crc kubenswrapper[4867]: I0126 11:36:08.377376 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 26 11:36:08 crc kubenswrapper[4867]: I0126 11:36:08.393362 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 26 11:36:08 crc kubenswrapper[4867]: I0126 11:36:08.438797 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bmlg7\" (UniqueName: \"kubernetes.io/projected/71f33dd3-8154-43e2-8047-fdb18dce9330-kube-api-access-bmlg7\") pod \"dnsmasq-dns-5c9d85d47c-wmtrd\" (UID: \"71f33dd3-8154-43e2-8047-fdb18dce9330\") " pod="openstack/dnsmasq-dns-5c9d85d47c-wmtrd" Jan 26 11:36:08 crc kubenswrapper[4867]: I0126 11:36:08.441653 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 26 11:36:08 crc kubenswrapper[4867]: I0126 11:36:08.442588 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 26 11:36:08 crc kubenswrapper[4867]: I0126 11:36:08.444012 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5c9d85d47c-wmtrd" Jan 26 11:36:08 crc kubenswrapper[4867]: I0126 11:36:08.444329 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/493bdea3-8931-4319-a465-16c9a4329881-scripts\") pod \"keystone-bootstrap-qqwcf\" (UID: \"493bdea3-8931-4319-a465-16c9a4329881\") " pod="openstack/keystone-bootstrap-qqwcf" Jan 26 11:36:08 crc kubenswrapper[4867]: I0126 11:36:08.444446 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/493bdea3-8931-4319-a465-16c9a4329881-config-data\") pod \"keystone-bootstrap-qqwcf\" (UID: \"493bdea3-8931-4319-a465-16c9a4329881\") " pod="openstack/keystone-bootstrap-qqwcf" Jan 26 11:36:08 crc kubenswrapper[4867]: I0126 11:36:08.444539 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e68c9cf2-da73-41d8-bc9a-5fd8df2c1ceb-operator-scripts\") pod \"ironic-db-create-n8k7h\" (UID: \"e68c9cf2-da73-41d8-bc9a-5fd8df2c1ceb\") " pod="openstack/ironic-db-create-n8k7h" Jan 26 11:36:08 crc kubenswrapper[4867]: I0126 11:36:08.444643 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7d82r\" (UniqueName: \"kubernetes.io/projected/493bdea3-8931-4319-a465-16c9a4329881-kube-api-access-7d82r\") pod \"keystone-bootstrap-qqwcf\" (UID: \"493bdea3-8931-4319-a465-16c9a4329881\") " pod="openstack/keystone-bootstrap-qqwcf" Jan 26 11:36:08 crc kubenswrapper[4867]: I0126 11:36:08.445410 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/493bdea3-8931-4319-a465-16c9a4329881-credential-keys\") pod \"keystone-bootstrap-qqwcf\" (UID: \"493bdea3-8931-4319-a465-16c9a4329881\") " pod="openstack/keystone-bootstrap-qqwcf" Jan 26 11:36:08 
crc kubenswrapper[4867]: I0126 11:36:08.445464 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/493bdea3-8931-4319-a465-16c9a4329881-combined-ca-bundle\") pod \"keystone-bootstrap-qqwcf\" (UID: \"493bdea3-8931-4319-a465-16c9a4329881\") " pod="openstack/keystone-bootstrap-qqwcf" Jan 26 11:36:08 crc kubenswrapper[4867]: I0126 11:36:08.445489 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jfdnd\" (UniqueName: \"kubernetes.io/projected/e68c9cf2-da73-41d8-bc9a-5fd8df2c1ceb-kube-api-access-jfdnd\") pod \"ironic-db-create-n8k7h\" (UID: \"e68c9cf2-da73-41d8-bc9a-5fd8df2c1ceb\") " pod="openstack/ironic-db-create-n8k7h" Jan 26 11:36:08 crc kubenswrapper[4867]: I0126 11:36:08.445519 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/493bdea3-8931-4319-a465-16c9a4329881-fernet-keys\") pod \"keystone-bootstrap-qqwcf\" (UID: \"493bdea3-8931-4319-a465-16c9a4329881\") " pod="openstack/keystone-bootstrap-qqwcf" Jan 26 11:36:08 crc kubenswrapper[4867]: I0126 11:36:08.461759 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/493bdea3-8931-4319-a465-16c9a4329881-fernet-keys\") pod \"keystone-bootstrap-qqwcf\" (UID: \"493bdea3-8931-4319-a465-16c9a4329881\") " pod="openstack/keystone-bootstrap-qqwcf" Jan 26 11:36:08 crc kubenswrapper[4867]: I0126 11:36:08.462995 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/493bdea3-8931-4319-a465-16c9a4329881-config-data\") pod \"keystone-bootstrap-qqwcf\" (UID: \"493bdea3-8931-4319-a465-16c9a4329881\") " pod="openstack/keystone-bootstrap-qqwcf" Jan 26 11:36:08 crc kubenswrapper[4867]: I0126 11:36:08.476126 4867 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/493bdea3-8931-4319-a465-16c9a4329881-scripts\") pod \"keystone-bootstrap-qqwcf\" (UID: \"493bdea3-8931-4319-a465-16c9a4329881\") " pod="openstack/keystone-bootstrap-qqwcf" Jan 26 11:36:08 crc kubenswrapper[4867]: I0126 11:36:08.477935 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/493bdea3-8931-4319-a465-16c9a4329881-combined-ca-bundle\") pod \"keystone-bootstrap-qqwcf\" (UID: \"493bdea3-8931-4319-a465-16c9a4329881\") " pod="openstack/keystone-bootstrap-qqwcf" Jan 26 11:36:08 crc kubenswrapper[4867]: I0126 11:36:08.492832 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/493bdea3-8931-4319-a465-16c9a4329881-credential-keys\") pod \"keystone-bootstrap-qqwcf\" (UID: \"493bdea3-8931-4319-a465-16c9a4329881\") " pod="openstack/keystone-bootstrap-qqwcf" Jan 26 11:36:08 crc kubenswrapper[4867]: I0126 11:36:08.513497 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7d82r\" (UniqueName: \"kubernetes.io/projected/493bdea3-8931-4319-a465-16c9a4329881-kube-api-access-7d82r\") pod \"keystone-bootstrap-qqwcf\" (UID: \"493bdea3-8931-4319-a465-16c9a4329881\") " pod="openstack/keystone-bootstrap-qqwcf" Jan 26 11:36:08 crc kubenswrapper[4867]: I0126 11:36:08.575354 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b588da78-7e07-438f-9612-e600ca38ab04-config-data\") pod \"ceilometer-0\" (UID: \"b588da78-7e07-438f-9612-e600ca38ab04\") " pod="openstack/ceilometer-0" Jan 26 11:36:08 crc kubenswrapper[4867]: I0126 11:36:08.575417 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/b588da78-7e07-438f-9612-e600ca38ab04-run-httpd\") pod \"ceilometer-0\" (UID: \"b588da78-7e07-438f-9612-e600ca38ab04\") " pod="openstack/ceilometer-0" Jan 26 11:36:08 crc kubenswrapper[4867]: I0126 11:36:08.575448 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jfdnd\" (UniqueName: \"kubernetes.io/projected/e68c9cf2-da73-41d8-bc9a-5fd8df2c1ceb-kube-api-access-jfdnd\") pod \"ironic-db-create-n8k7h\" (UID: \"e68c9cf2-da73-41d8-bc9a-5fd8df2c1ceb\") " pod="openstack/ironic-db-create-n8k7h" Jan 26 11:36:08 crc kubenswrapper[4867]: I0126 11:36:08.575478 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/b588da78-7e07-438f-9612-e600ca38ab04-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"b588da78-7e07-438f-9612-e600ca38ab04\") " pod="openstack/ceilometer-0" Jan 26 11:36:08 crc kubenswrapper[4867]: I0126 11:36:08.575516 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pjzkb\" (UniqueName: \"kubernetes.io/projected/b588da78-7e07-438f-9612-e600ca38ab04-kube-api-access-pjzkb\") pod \"ceilometer-0\" (UID: \"b588da78-7e07-438f-9612-e600ca38ab04\") " pod="openstack/ceilometer-0" Jan 26 11:36:08 crc kubenswrapper[4867]: I0126 11:36:08.575588 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e68c9cf2-da73-41d8-bc9a-5fd8df2c1ceb-operator-scripts\") pod \"ironic-db-create-n8k7h\" (UID: \"e68c9cf2-da73-41d8-bc9a-5fd8df2c1ceb\") " pod="openstack/ironic-db-create-n8k7h" Jan 26 11:36:08 crc kubenswrapper[4867]: I0126 11:36:08.575650 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b588da78-7e07-438f-9612-e600ca38ab04-log-httpd\") pod 
\"ceilometer-0\" (UID: \"b588da78-7e07-438f-9612-e600ca38ab04\") " pod="openstack/ceilometer-0" Jan 26 11:36:08 crc kubenswrapper[4867]: I0126 11:36:08.575685 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b588da78-7e07-438f-9612-e600ca38ab04-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"b588da78-7e07-438f-9612-e600ca38ab04\") " pod="openstack/ceilometer-0" Jan 26 11:36:08 crc kubenswrapper[4867]: I0126 11:36:08.575701 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b588da78-7e07-438f-9612-e600ca38ab04-scripts\") pod \"ceilometer-0\" (UID: \"b588da78-7e07-438f-9612-e600ca38ab04\") " pod="openstack/ceilometer-0" Jan 26 11:36:08 crc kubenswrapper[4867]: I0126 11:36:08.576795 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e68c9cf2-da73-41d8-bc9a-5fd8df2c1ceb-operator-scripts\") pod \"ironic-db-create-n8k7h\" (UID: \"e68c9cf2-da73-41d8-bc9a-5fd8df2c1ceb\") " pod="openstack/ironic-db-create-n8k7h" Jan 26 11:36:08 crc kubenswrapper[4867]: I0126 11:36:08.627295 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jfdnd\" (UniqueName: \"kubernetes.io/projected/e68c9cf2-da73-41d8-bc9a-5fd8df2c1ceb-kube-api-access-jfdnd\") pod \"ironic-db-create-n8k7h\" (UID: \"e68c9cf2-da73-41d8-bc9a-5fd8df2c1ceb\") " pod="openstack/ironic-db-create-n8k7h" Jan 26 11:36:08 crc kubenswrapper[4867]: I0126 11:36:08.632090 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ironic-de23-account-create-update-tsmbv"] Jan 26 11:36:08 crc kubenswrapper[4867]: I0126 11:36:08.633880 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ironic-de23-account-create-update-tsmbv" Jan 26 11:36:08 crc kubenswrapper[4867]: I0126 11:36:08.636504 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ironic-db-create-n8k7h" Jan 26 11:36:08 crc kubenswrapper[4867]: I0126 11:36:08.637496 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ironic-db-secret" Jan 26 11:36:08 crc kubenswrapper[4867]: I0126 11:36:08.650329 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-sync-2kgmw"] Jan 26 11:36:08 crc kubenswrapper[4867]: I0126 11:36:08.652518 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-2kgmw" Jan 26 11:36:08 crc kubenswrapper[4867]: I0126 11:36:08.667860 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-6csr9" Jan 26 11:36:08 crc kubenswrapper[4867]: I0126 11:36:08.668101 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Jan 26 11:36:08 crc kubenswrapper[4867]: I0126 11:36:08.668157 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Jan 26 11:36:08 crc kubenswrapper[4867]: I0126 11:36:08.676480 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b588da78-7e07-438f-9612-e600ca38ab04-log-httpd\") pod \"ceilometer-0\" (UID: \"b588da78-7e07-438f-9612-e600ca38ab04\") " pod="openstack/ceilometer-0" Jan 26 11:36:08 crc kubenswrapper[4867]: I0126 11:36:08.676539 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b588da78-7e07-438f-9612-e600ca38ab04-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"b588da78-7e07-438f-9612-e600ca38ab04\") " pod="openstack/ceilometer-0" Jan 26 11:36:08 crc kubenswrapper[4867]: 
I0126 11:36:08.676563 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b588da78-7e07-438f-9612-e600ca38ab04-scripts\") pod \"ceilometer-0\" (UID: \"b588da78-7e07-438f-9612-e600ca38ab04\") " pod="openstack/ceilometer-0" Jan 26 11:36:08 crc kubenswrapper[4867]: I0126 11:36:08.676604 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b588da78-7e07-438f-9612-e600ca38ab04-config-data\") pod \"ceilometer-0\" (UID: \"b588da78-7e07-438f-9612-e600ca38ab04\") " pod="openstack/ceilometer-0" Jan 26 11:36:08 crc kubenswrapper[4867]: I0126 11:36:08.676635 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b588da78-7e07-438f-9612-e600ca38ab04-run-httpd\") pod \"ceilometer-0\" (UID: \"b588da78-7e07-438f-9612-e600ca38ab04\") " pod="openstack/ceilometer-0" Jan 26 11:36:08 crc kubenswrapper[4867]: I0126 11:36:08.676662 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/b588da78-7e07-438f-9612-e600ca38ab04-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"b588da78-7e07-438f-9612-e600ca38ab04\") " pod="openstack/ceilometer-0" Jan 26 11:36:08 crc kubenswrapper[4867]: I0126 11:36:08.676699 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pjzkb\" (UniqueName: \"kubernetes.io/projected/b588da78-7e07-438f-9612-e600ca38ab04-kube-api-access-pjzkb\") pod \"ceilometer-0\" (UID: \"b588da78-7e07-438f-9612-e600ca38ab04\") " pod="openstack/ceilometer-0" Jan 26 11:36:08 crc kubenswrapper[4867]: I0126 11:36:08.677981 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b588da78-7e07-438f-9612-e600ca38ab04-run-httpd\") pod \"ceilometer-0\" (UID: 
\"b588da78-7e07-438f-9612-e600ca38ab04\") " pod="openstack/ceilometer-0" Jan 26 11:36:08 crc kubenswrapper[4867]: I0126 11:36:08.679043 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b588da78-7e07-438f-9612-e600ca38ab04-log-httpd\") pod \"ceilometer-0\" (UID: \"b588da78-7e07-438f-9612-e600ca38ab04\") " pod="openstack/ceilometer-0" Jan 26 11:36:08 crc kubenswrapper[4867]: I0126 11:36:08.683025 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-l4jjx"] Jan 26 11:36:08 crc kubenswrapper[4867]: I0126 11:36:08.683441 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b588da78-7e07-438f-9612-e600ca38ab04-scripts\") pod \"ceilometer-0\" (UID: \"b588da78-7e07-438f-9612-e600ca38ab04\") " pod="openstack/ceilometer-0" Jan 26 11:36:08 crc kubenswrapper[4867]: I0126 11:36:08.683953 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/b588da78-7e07-438f-9612-e600ca38ab04-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"b588da78-7e07-438f-9612-e600ca38ab04\") " pod="openstack/ceilometer-0" Jan 26 11:36:08 crc kubenswrapper[4867]: I0126 11:36:08.684242 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-l4jjx" Jan 26 11:36:08 crc kubenswrapper[4867]: I0126 11:36:08.685862 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-mariadb-root-db-secret" Jan 26 11:36:08 crc kubenswrapper[4867]: I0126 11:36:08.686331 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b588da78-7e07-438f-9612-e600ca38ab04-config-data\") pod \"ceilometer-0\" (UID: \"b588da78-7e07-438f-9612-e600ca38ab04\") " pod="openstack/ceilometer-0" Jan 26 11:36:08 crc kubenswrapper[4867]: I0126 11:36:08.699908 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b588da78-7e07-438f-9612-e600ca38ab04-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"b588da78-7e07-438f-9612-e600ca38ab04\") " pod="openstack/ceilometer-0" Jan 26 11:36:08 crc kubenswrapper[4867]: I0126 11:36:08.702400 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ironic-de23-account-create-update-tsmbv"] Jan 26 11:36:08 crc kubenswrapper[4867]: I0126 11:36:08.702617 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pjzkb\" (UniqueName: \"kubernetes.io/projected/b588da78-7e07-438f-9612-e600ca38ab04-kube-api-access-pjzkb\") pod \"ceilometer-0\" (UID: \"b588da78-7e07-438f-9612-e600ca38ab04\") " pod="openstack/ceilometer-0" Jan 26 11:36:08 crc kubenswrapper[4867]: I0126 11:36:08.731282 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-2kgmw"] Jan 26 11:36:08 crc kubenswrapper[4867]: I0126 11:36:08.772759 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-qqwcf" Jan 26 11:36:08 crc kubenswrapper[4867]: I0126 11:36:08.773996 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-l4jjx"] Jan 26 11:36:08 crc kubenswrapper[4867]: I0126 11:36:08.782717 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-sync-v7xht"] Jan 26 11:36:08 crc kubenswrapper[4867]: I0126 11:36:08.784174 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-v7xht" Jan 26 11:36:08 crc kubenswrapper[4867]: I0126 11:36:08.786619 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5jqsc\" (UniqueName: \"kubernetes.io/projected/d28fe2ce-f40e-4f37-9d27-57d14376fc5d-kube-api-access-5jqsc\") pod \"cinder-db-sync-2kgmw\" (UID: \"d28fe2ce-f40e-4f37-9d27-57d14376fc5d\") " pod="openstack/cinder-db-sync-2kgmw" Jan 26 11:36:08 crc kubenswrapper[4867]: I0126 11:36:08.786675 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-drgjd\" (UniqueName: \"kubernetes.io/projected/933e31ea-ff1a-4883-a82a-92893ca7d7b0-kube-api-access-drgjd\") pod \"ironic-de23-account-create-update-tsmbv\" (UID: \"933e31ea-ff1a-4883-a82a-92893ca7d7b0\") " pod="openstack/ironic-de23-account-create-update-tsmbv" Jan 26 11:36:08 crc kubenswrapper[4867]: I0126 11:36:08.786760 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/d28fe2ce-f40e-4f37-9d27-57d14376fc5d-db-sync-config-data\") pod \"cinder-db-sync-2kgmw\" (UID: \"d28fe2ce-f40e-4f37-9d27-57d14376fc5d\") " pod="openstack/cinder-db-sync-2kgmw" Jan 26 11:36:08 crc kubenswrapper[4867]: I0126 11:36:08.786853 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" 
(UniqueName: \"kubernetes.io/secret/d28fe2ce-f40e-4f37-9d27-57d14376fc5d-config-data\") pod \"cinder-db-sync-2kgmw\" (UID: \"d28fe2ce-f40e-4f37-9d27-57d14376fc5d\") " pod="openstack/cinder-db-sync-2kgmw" Jan 26 11:36:08 crc kubenswrapper[4867]: I0126 11:36:08.786915 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d28fe2ce-f40e-4f37-9d27-57d14376fc5d-combined-ca-bundle\") pod \"cinder-db-sync-2kgmw\" (UID: \"d28fe2ce-f40e-4f37-9d27-57d14376fc5d\") " pod="openstack/cinder-db-sync-2kgmw" Jan 26 11:36:08 crc kubenswrapper[4867]: I0126 11:36:08.786986 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/d28fe2ce-f40e-4f37-9d27-57d14376fc5d-etc-machine-id\") pod \"cinder-db-sync-2kgmw\" (UID: \"d28fe2ce-f40e-4f37-9d27-57d14376fc5d\") " pod="openstack/cinder-db-sync-2kgmw" Jan 26 11:36:08 crc kubenswrapper[4867]: I0126 11:36:08.787082 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/933e31ea-ff1a-4883-a82a-92893ca7d7b0-operator-scripts\") pod \"ironic-de23-account-create-update-tsmbv\" (UID: \"933e31ea-ff1a-4883-a82a-92893ca7d7b0\") " pod="openstack/ironic-de23-account-create-update-tsmbv" Jan 26 11:36:08 crc kubenswrapper[4867]: I0126 11:36:08.787117 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d28fe2ce-f40e-4f37-9d27-57d14376fc5d-scripts\") pod \"cinder-db-sync-2kgmw\" (UID: \"d28fe2ce-f40e-4f37-9d27-57d14376fc5d\") " pod="openstack/cinder-db-sync-2kgmw" Jan 26 11:36:08 crc kubenswrapper[4867]: I0126 11:36:08.790994 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-sync-m72k2"] Jan 26 11:36:08 crc kubenswrapper[4867]: 
I0126 11:36:08.791548 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-cb22d" Jan 26 11:36:08 crc kubenswrapper[4867]: I0126 11:36:08.791935 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Jan 26 11:36:08 crc kubenswrapper[4867]: I0126 11:36:08.792434 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-m72k2" Jan 26 11:36:08 crc kubenswrapper[4867]: I0126 11:36:08.795928 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Jan 26 11:36:08 crc kubenswrapper[4867]: I0126 11:36:08.796084 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-v7xht"] Jan 26 11:36:08 crc kubenswrapper[4867]: I0126 11:36:08.797055 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Jan 26 11:36:08 crc kubenswrapper[4867]: I0126 11:36:08.797055 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-ndxbr" Jan 26 11:36:08 crc kubenswrapper[4867]: I0126 11:36:08.803985 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-m72k2"] Jan 26 11:36:08 crc kubenswrapper[4867]: I0126 11:36:08.821062 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-sync-4whss"] Jan 26 11:36:08 crc kubenswrapper[4867]: I0126 11:36:08.824914 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-sync-4whss" Jan 26 11:36:08 crc kubenswrapper[4867]: I0126 11:36:08.829559 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-9spjg" Jan 26 11:36:08 crc kubenswrapper[4867]: I0126 11:36:08.829817 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Jan 26 11:36:08 crc kubenswrapper[4867]: I0126 11:36:08.829949 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Jan 26 11:36:08 crc kubenswrapper[4867]: I0126 11:36:08.836266 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5c9d85d47c-wmtrd"] Jan 26 11:36:08 crc kubenswrapper[4867]: I0126 11:36:08.847021 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-4whss"] Jan 26 11:36:08 crc kubenswrapper[4867]: I0126 11:36:08.856161 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6ffb94d8ff-9qkhh"] Jan 26 11:36:08 crc kubenswrapper[4867]: I0126 11:36:08.858620 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6ffb94d8ff-9qkhh" Jan 26 11:36:08 crc kubenswrapper[4867]: I0126 11:36:08.872344 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6ffb94d8ff-9qkhh"] Jan 26 11:36:08 crc kubenswrapper[4867]: I0126 11:36:08.878347 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 26 11:36:08 crc kubenswrapper[4867]: I0126 11:36:08.888345 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5jqsc\" (UniqueName: \"kubernetes.io/projected/d28fe2ce-f40e-4f37-9d27-57d14376fc5d-kube-api-access-5jqsc\") pod \"cinder-db-sync-2kgmw\" (UID: \"d28fe2ce-f40e-4f37-9d27-57d14376fc5d\") " pod="openstack/cinder-db-sync-2kgmw" Jan 26 11:36:08 crc kubenswrapper[4867]: I0126 11:36:08.888386 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-drgjd\" (UniqueName: \"kubernetes.io/projected/933e31ea-ff1a-4883-a82a-92893ca7d7b0-kube-api-access-drgjd\") pod \"ironic-de23-account-create-update-tsmbv\" (UID: \"933e31ea-ff1a-4883-a82a-92893ca7d7b0\") " pod="openstack/ironic-de23-account-create-update-tsmbv" Jan 26 11:36:08 crc kubenswrapper[4867]: I0126 11:36:08.888411 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rx8d5\" (UniqueName: \"kubernetes.io/projected/829cd764-d506-45fb-a1d6-d45504d0b20c-kube-api-access-rx8d5\") pod \"root-account-create-update-l4jjx\" (UID: \"829cd764-d506-45fb-a1d6-d45504d0b20c\") " pod="openstack/root-account-create-update-l4jjx" Jan 26 11:36:08 crc kubenswrapper[4867]: I0126 11:36:08.888443 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/d28fe2ce-f40e-4f37-9d27-57d14376fc5d-db-sync-config-data\") pod \"cinder-db-sync-2kgmw\" (UID: \"d28fe2ce-f40e-4f37-9d27-57d14376fc5d\") " pod="openstack/cinder-db-sync-2kgmw" Jan 26 11:36:08 crc kubenswrapper[4867]: I0126 11:36:08.888467 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/829cd764-d506-45fb-a1d6-d45504d0b20c-operator-scripts\") pod \"root-account-create-update-l4jjx\" 
(UID: \"829cd764-d506-45fb-a1d6-d45504d0b20c\") " pod="openstack/root-account-create-update-l4jjx" Jan 26 11:36:08 crc kubenswrapper[4867]: I0126 11:36:08.888490 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0ee786d6-3c88-4374-a028-3a3c83b30fec-combined-ca-bundle\") pod \"barbican-db-sync-v7xht\" (UID: \"0ee786d6-3c88-4374-a028-3a3c83b30fec\") " pod="openstack/barbican-db-sync-v7xht" Jan 26 11:36:08 crc kubenswrapper[4867]: I0126 11:36:08.888513 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d28fe2ce-f40e-4f37-9d27-57d14376fc5d-config-data\") pod \"cinder-db-sync-2kgmw\" (UID: \"d28fe2ce-f40e-4f37-9d27-57d14376fc5d\") " pod="openstack/cinder-db-sync-2kgmw" Jan 26 11:36:08 crc kubenswrapper[4867]: I0126 11:36:08.888540 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d28fe2ce-f40e-4f37-9d27-57d14376fc5d-combined-ca-bundle\") pod \"cinder-db-sync-2kgmw\" (UID: \"d28fe2ce-f40e-4f37-9d27-57d14376fc5d\") " pod="openstack/cinder-db-sync-2kgmw" Jan 26 11:36:08 crc kubenswrapper[4867]: I0126 11:36:08.888565 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l22rt\" (UniqueName: \"kubernetes.io/projected/0ee786d6-3c88-4374-a028-3a3c83b30fec-kube-api-access-l22rt\") pod \"barbican-db-sync-v7xht\" (UID: \"0ee786d6-3c88-4374-a028-3a3c83b30fec\") " pod="openstack/barbican-db-sync-v7xht" Jan 26 11:36:08 crc kubenswrapper[4867]: I0126 11:36:08.888591 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/d28fe2ce-f40e-4f37-9d27-57d14376fc5d-etc-machine-id\") pod \"cinder-db-sync-2kgmw\" (UID: \"d28fe2ce-f40e-4f37-9d27-57d14376fc5d\") " 
pod="openstack/cinder-db-sync-2kgmw" Jan 26 11:36:08 crc kubenswrapper[4867]: I0126 11:36:08.888726 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/d28fe2ce-f40e-4f37-9d27-57d14376fc5d-etc-machine-id\") pod \"cinder-db-sync-2kgmw\" (UID: \"d28fe2ce-f40e-4f37-9d27-57d14376fc5d\") " pod="openstack/cinder-db-sync-2kgmw" Jan 26 11:36:08 crc kubenswrapper[4867]: I0126 11:36:08.889527 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/933e31ea-ff1a-4883-a82a-92893ca7d7b0-operator-scripts\") pod \"ironic-de23-account-create-update-tsmbv\" (UID: \"933e31ea-ff1a-4883-a82a-92893ca7d7b0\") " pod="openstack/ironic-de23-account-create-update-tsmbv" Jan 26 11:36:08 crc kubenswrapper[4867]: I0126 11:36:08.889560 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/0ee786d6-3c88-4374-a028-3a3c83b30fec-db-sync-config-data\") pod \"barbican-db-sync-v7xht\" (UID: \"0ee786d6-3c88-4374-a028-3a3c83b30fec\") " pod="openstack/barbican-db-sync-v7xht" Jan 26 11:36:08 crc kubenswrapper[4867]: I0126 11:36:08.889586 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d28fe2ce-f40e-4f37-9d27-57d14376fc5d-scripts\") pod \"cinder-db-sync-2kgmw\" (UID: \"d28fe2ce-f40e-4f37-9d27-57d14376fc5d\") " pod="openstack/cinder-db-sync-2kgmw" Jan 26 11:36:08 crc kubenswrapper[4867]: I0126 11:36:08.892171 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/d28fe2ce-f40e-4f37-9d27-57d14376fc5d-db-sync-config-data\") pod \"cinder-db-sync-2kgmw\" (UID: \"d28fe2ce-f40e-4f37-9d27-57d14376fc5d\") " pod="openstack/cinder-db-sync-2kgmw" Jan 26 11:36:08 crc kubenswrapper[4867]: I0126 
11:36:08.892177 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/933e31ea-ff1a-4883-a82a-92893ca7d7b0-operator-scripts\") pod \"ironic-de23-account-create-update-tsmbv\" (UID: \"933e31ea-ff1a-4883-a82a-92893ca7d7b0\") " pod="openstack/ironic-de23-account-create-update-tsmbv" Jan 26 11:36:08 crc kubenswrapper[4867]: I0126 11:36:08.898902 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d28fe2ce-f40e-4f37-9d27-57d14376fc5d-scripts\") pod \"cinder-db-sync-2kgmw\" (UID: \"d28fe2ce-f40e-4f37-9d27-57d14376fc5d\") " pod="openstack/cinder-db-sync-2kgmw" Jan 26 11:36:08 crc kubenswrapper[4867]: I0126 11:36:08.899130 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d28fe2ce-f40e-4f37-9d27-57d14376fc5d-combined-ca-bundle\") pod \"cinder-db-sync-2kgmw\" (UID: \"d28fe2ce-f40e-4f37-9d27-57d14376fc5d\") " pod="openstack/cinder-db-sync-2kgmw" Jan 26 11:36:08 crc kubenswrapper[4867]: I0126 11:36:08.899894 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d28fe2ce-f40e-4f37-9d27-57d14376fc5d-config-data\") pod \"cinder-db-sync-2kgmw\" (UID: \"d28fe2ce-f40e-4f37-9d27-57d14376fc5d\") " pod="openstack/cinder-db-sync-2kgmw" Jan 26 11:36:08 crc kubenswrapper[4867]: I0126 11:36:08.907087 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5jqsc\" (UniqueName: \"kubernetes.io/projected/d28fe2ce-f40e-4f37-9d27-57d14376fc5d-kube-api-access-5jqsc\") pod \"cinder-db-sync-2kgmw\" (UID: \"d28fe2ce-f40e-4f37-9d27-57d14376fc5d\") " pod="openstack/cinder-db-sync-2kgmw" Jan 26 11:36:08 crc kubenswrapper[4867]: I0126 11:36:08.909711 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-drgjd\" (UniqueName: 
\"kubernetes.io/projected/933e31ea-ff1a-4883-a82a-92893ca7d7b0-kube-api-access-drgjd\") pod \"ironic-de23-account-create-update-tsmbv\" (UID: \"933e31ea-ff1a-4883-a82a-92893ca7d7b0\") " pod="openstack/ironic-de23-account-create-update-tsmbv" Jan 26 11:36:08 crc kubenswrapper[4867]: I0126 11:36:08.979388 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ironic-de23-account-create-update-tsmbv" Jan 26 11:36:08 crc kubenswrapper[4867]: I0126 11:36:08.985761 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-2kgmw" Jan 26 11:36:08 crc kubenswrapper[4867]: I0126 11:36:08.991052 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5c4e6125-8cf8-405e-a6ac-954e3fdd4b33-ovsdbserver-sb\") pod \"dnsmasq-dns-6ffb94d8ff-9qkhh\" (UID: \"5c4e6125-8cf8-405e-a6ac-954e3fdd4b33\") " pod="openstack/dnsmasq-dns-6ffb94d8ff-9qkhh" Jan 26 11:36:08 crc kubenswrapper[4867]: I0126 11:36:08.991117 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/0ee786d6-3c88-4374-a028-3a3c83b30fec-db-sync-config-data\") pod \"barbican-db-sync-v7xht\" (UID: \"0ee786d6-3c88-4374-a028-3a3c83b30fec\") " pod="openstack/barbican-db-sync-v7xht" Jan 26 11:36:08 crc kubenswrapper[4867]: I0126 11:36:08.991152 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/75e847de-1c0c-4ac3-b7ff-c41bfa7a6534-combined-ca-bundle\") pod \"neutron-db-sync-m72k2\" (UID: \"75e847de-1c0c-4ac3-b7ff-c41bfa7a6534\") " pod="openstack/neutron-db-sync-m72k2" Jan 26 11:36:08 crc kubenswrapper[4867]: I0126 11:36:08.991213 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/5c4e6125-8cf8-405e-a6ac-954e3fdd4b33-config\") pod \"dnsmasq-dns-6ffb94d8ff-9qkhh\" (UID: \"5c4e6125-8cf8-405e-a6ac-954e3fdd4b33\") " pod="openstack/dnsmasq-dns-6ffb94d8ff-9qkhh" Jan 26 11:36:08 crc kubenswrapper[4867]: I0126 11:36:08.991339 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x8v9n\" (UniqueName: \"kubernetes.io/projected/75e847de-1c0c-4ac3-b7ff-c41bfa7a6534-kube-api-access-x8v9n\") pod \"neutron-db-sync-m72k2\" (UID: \"75e847de-1c0c-4ac3-b7ff-c41bfa7a6534\") " pod="openstack/neutron-db-sync-m72k2" Jan 26 11:36:08 crc kubenswrapper[4867]: I0126 11:36:08.991362 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5c4e6125-8cf8-405e-a6ac-954e3fdd4b33-ovsdbserver-nb\") pod \"dnsmasq-dns-6ffb94d8ff-9qkhh\" (UID: \"5c4e6125-8cf8-405e-a6ac-954e3fdd4b33\") " pod="openstack/dnsmasq-dns-6ffb94d8ff-9qkhh" Jan 26 11:36:08 crc kubenswrapper[4867]: I0126 11:36:08.991390 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5c210d27-ca0b-4d51-b462-bc5adf4dbe43-combined-ca-bundle\") pod \"placement-db-sync-4whss\" (UID: \"5c210d27-ca0b-4d51-b462-bc5adf4dbe43\") " pod="openstack/placement-db-sync-4whss" Jan 26 11:36:08 crc kubenswrapper[4867]: I0126 11:36:08.991436 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rx8d5\" (UniqueName: \"kubernetes.io/projected/829cd764-d506-45fb-a1d6-d45504d0b20c-kube-api-access-rx8d5\") pod \"root-account-create-update-l4jjx\" (UID: \"829cd764-d506-45fb-a1d6-d45504d0b20c\") " pod="openstack/root-account-create-update-l4jjx" Jan 26 11:36:08 crc kubenswrapper[4867]: I0126 11:36:08.991470 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"config-data\" (UniqueName: \"kubernetes.io/secret/5c210d27-ca0b-4d51-b462-bc5adf4dbe43-config-data\") pod \"placement-db-sync-4whss\" (UID: \"5c210d27-ca0b-4d51-b462-bc5adf4dbe43\") " pod="openstack/placement-db-sync-4whss" Jan 26 11:36:08 crc kubenswrapper[4867]: I0126 11:36:08.991501 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5c210d27-ca0b-4d51-b462-bc5adf4dbe43-logs\") pod \"placement-db-sync-4whss\" (UID: \"5c210d27-ca0b-4d51-b462-bc5adf4dbe43\") " pod="openstack/placement-db-sync-4whss" Jan 26 11:36:08 crc kubenswrapper[4867]: I0126 11:36:08.991523 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5c4e6125-8cf8-405e-a6ac-954e3fdd4b33-dns-svc\") pod \"dnsmasq-dns-6ffb94d8ff-9qkhh\" (UID: \"5c4e6125-8cf8-405e-a6ac-954e3fdd4b33\") " pod="openstack/dnsmasq-dns-6ffb94d8ff-9qkhh" Jan 26 11:36:08 crc kubenswrapper[4867]: I0126 11:36:08.991552 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gjpxr\" (UniqueName: \"kubernetes.io/projected/5c4e6125-8cf8-405e-a6ac-954e3fdd4b33-kube-api-access-gjpxr\") pod \"dnsmasq-dns-6ffb94d8ff-9qkhh\" (UID: \"5c4e6125-8cf8-405e-a6ac-954e3fdd4b33\") " pod="openstack/dnsmasq-dns-6ffb94d8ff-9qkhh" Jan 26 11:36:08 crc kubenswrapper[4867]: I0126 11:36:08.991592 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/829cd764-d506-45fb-a1d6-d45504d0b20c-operator-scripts\") pod \"root-account-create-update-l4jjx\" (UID: \"829cd764-d506-45fb-a1d6-d45504d0b20c\") " pod="openstack/root-account-create-update-l4jjx" Jan 26 11:36:08 crc kubenswrapper[4867]: I0126 11:36:08.991625 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" 
(UniqueName: \"kubernetes.io/secret/0ee786d6-3c88-4374-a028-3a3c83b30fec-combined-ca-bundle\") pod \"barbican-db-sync-v7xht\" (UID: \"0ee786d6-3c88-4374-a028-3a3c83b30fec\") " pod="openstack/barbican-db-sync-v7xht" Jan 26 11:36:08 crc kubenswrapper[4867]: I0126 11:36:08.991645 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/75e847de-1c0c-4ac3-b7ff-c41bfa7a6534-config\") pod \"neutron-db-sync-m72k2\" (UID: \"75e847de-1c0c-4ac3-b7ff-c41bfa7a6534\") " pod="openstack/neutron-db-sync-m72k2" Jan 26 11:36:08 crc kubenswrapper[4867]: I0126 11:36:08.991660 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5c210d27-ca0b-4d51-b462-bc5adf4dbe43-scripts\") pod \"placement-db-sync-4whss\" (UID: \"5c210d27-ca0b-4d51-b462-bc5adf4dbe43\") " pod="openstack/placement-db-sync-4whss" Jan 26 11:36:08 crc kubenswrapper[4867]: I0126 11:36:08.991688 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fsd9z\" (UniqueName: \"kubernetes.io/projected/5c210d27-ca0b-4d51-b462-bc5adf4dbe43-kube-api-access-fsd9z\") pod \"placement-db-sync-4whss\" (UID: \"5c210d27-ca0b-4d51-b462-bc5adf4dbe43\") " pod="openstack/placement-db-sync-4whss" Jan 26 11:36:08 crc kubenswrapper[4867]: I0126 11:36:08.991717 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l22rt\" (UniqueName: \"kubernetes.io/projected/0ee786d6-3c88-4374-a028-3a3c83b30fec-kube-api-access-l22rt\") pod \"barbican-db-sync-v7xht\" (UID: \"0ee786d6-3c88-4374-a028-3a3c83b30fec\") " pod="openstack/barbican-db-sync-v7xht" Jan 26 11:36:08 crc kubenswrapper[4867]: I0126 11:36:08.992418 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/829cd764-d506-45fb-a1d6-d45504d0b20c-operator-scripts\") pod \"root-account-create-update-l4jjx\" (UID: \"829cd764-d506-45fb-a1d6-d45504d0b20c\") " pod="openstack/root-account-create-update-l4jjx" Jan 26 11:36:09 crc kubenswrapper[4867]: I0126 11:36:09.001840 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/0ee786d6-3c88-4374-a028-3a3c83b30fec-db-sync-config-data\") pod \"barbican-db-sync-v7xht\" (UID: \"0ee786d6-3c88-4374-a028-3a3c83b30fec\") " pod="openstack/barbican-db-sync-v7xht" Jan 26 11:36:09 crc kubenswrapper[4867]: I0126 11:36:09.001923 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0ee786d6-3c88-4374-a028-3a3c83b30fec-combined-ca-bundle\") pod \"barbican-db-sync-v7xht\" (UID: \"0ee786d6-3c88-4374-a028-3a3c83b30fec\") " pod="openstack/barbican-db-sync-v7xht" Jan 26 11:36:09 crc kubenswrapper[4867]: I0126 11:36:09.009684 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rx8d5\" (UniqueName: \"kubernetes.io/projected/829cd764-d506-45fb-a1d6-d45504d0b20c-kube-api-access-rx8d5\") pod \"root-account-create-update-l4jjx\" (UID: \"829cd764-d506-45fb-a1d6-d45504d0b20c\") " pod="openstack/root-account-create-update-l4jjx" Jan 26 11:36:09 crc kubenswrapper[4867]: I0126 11:36:09.011802 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l22rt\" (UniqueName: \"kubernetes.io/projected/0ee786d6-3c88-4374-a028-3a3c83b30fec-kube-api-access-l22rt\") pod \"barbican-db-sync-v7xht\" (UID: \"0ee786d6-3c88-4374-a028-3a3c83b30fec\") " pod="openstack/barbican-db-sync-v7xht" Jan 26 11:36:09 crc kubenswrapper[4867]: I0126 11:36:09.092468 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gjpxr\" (UniqueName: 
\"kubernetes.io/projected/5c4e6125-8cf8-405e-a6ac-954e3fdd4b33-kube-api-access-gjpxr\") pod \"dnsmasq-dns-6ffb94d8ff-9qkhh\" (UID: \"5c4e6125-8cf8-405e-a6ac-954e3fdd4b33\") " pod="openstack/dnsmasq-dns-6ffb94d8ff-9qkhh" Jan 26 11:36:09 crc kubenswrapper[4867]: I0126 11:36:09.092530 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/75e847de-1c0c-4ac3-b7ff-c41bfa7a6534-config\") pod \"neutron-db-sync-m72k2\" (UID: \"75e847de-1c0c-4ac3-b7ff-c41bfa7a6534\") " pod="openstack/neutron-db-sync-m72k2" Jan 26 11:36:09 crc kubenswrapper[4867]: I0126 11:36:09.092547 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5c210d27-ca0b-4d51-b462-bc5adf4dbe43-scripts\") pod \"placement-db-sync-4whss\" (UID: \"5c210d27-ca0b-4d51-b462-bc5adf4dbe43\") " pod="openstack/placement-db-sync-4whss" Jan 26 11:36:09 crc kubenswrapper[4867]: I0126 11:36:09.092580 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fsd9z\" (UniqueName: \"kubernetes.io/projected/5c210d27-ca0b-4d51-b462-bc5adf4dbe43-kube-api-access-fsd9z\") pod \"placement-db-sync-4whss\" (UID: \"5c210d27-ca0b-4d51-b462-bc5adf4dbe43\") " pod="openstack/placement-db-sync-4whss" Jan 26 11:36:09 crc kubenswrapper[4867]: I0126 11:36:09.092625 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5c4e6125-8cf8-405e-a6ac-954e3fdd4b33-ovsdbserver-sb\") pod \"dnsmasq-dns-6ffb94d8ff-9qkhh\" (UID: \"5c4e6125-8cf8-405e-a6ac-954e3fdd4b33\") " pod="openstack/dnsmasq-dns-6ffb94d8ff-9qkhh" Jan 26 11:36:09 crc kubenswrapper[4867]: I0126 11:36:09.092659 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/75e847de-1c0c-4ac3-b7ff-c41bfa7a6534-combined-ca-bundle\") pod 
\"neutron-db-sync-m72k2\" (UID: \"75e847de-1c0c-4ac3-b7ff-c41bfa7a6534\") " pod="openstack/neutron-db-sync-m72k2" Jan 26 11:36:09 crc kubenswrapper[4867]: I0126 11:36:09.092686 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5c4e6125-8cf8-405e-a6ac-954e3fdd4b33-config\") pod \"dnsmasq-dns-6ffb94d8ff-9qkhh\" (UID: \"5c4e6125-8cf8-405e-a6ac-954e3fdd4b33\") " pod="openstack/dnsmasq-dns-6ffb94d8ff-9qkhh" Jan 26 11:36:09 crc kubenswrapper[4867]: I0126 11:36:09.092708 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x8v9n\" (UniqueName: \"kubernetes.io/projected/75e847de-1c0c-4ac3-b7ff-c41bfa7a6534-kube-api-access-x8v9n\") pod \"neutron-db-sync-m72k2\" (UID: \"75e847de-1c0c-4ac3-b7ff-c41bfa7a6534\") " pod="openstack/neutron-db-sync-m72k2" Jan 26 11:36:09 crc kubenswrapper[4867]: I0126 11:36:09.092722 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5c4e6125-8cf8-405e-a6ac-954e3fdd4b33-ovsdbserver-nb\") pod \"dnsmasq-dns-6ffb94d8ff-9qkhh\" (UID: \"5c4e6125-8cf8-405e-a6ac-954e3fdd4b33\") " pod="openstack/dnsmasq-dns-6ffb94d8ff-9qkhh" Jan 26 11:36:09 crc kubenswrapper[4867]: I0126 11:36:09.092739 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5c210d27-ca0b-4d51-b462-bc5adf4dbe43-combined-ca-bundle\") pod \"placement-db-sync-4whss\" (UID: \"5c210d27-ca0b-4d51-b462-bc5adf4dbe43\") " pod="openstack/placement-db-sync-4whss" Jan 26 11:36:09 crc kubenswrapper[4867]: I0126 11:36:09.092763 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5c210d27-ca0b-4d51-b462-bc5adf4dbe43-config-data\") pod \"placement-db-sync-4whss\" (UID: \"5c210d27-ca0b-4d51-b462-bc5adf4dbe43\") " 
pod="openstack/placement-db-sync-4whss" Jan 26 11:36:09 crc kubenswrapper[4867]: I0126 11:36:09.092781 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5c210d27-ca0b-4d51-b462-bc5adf4dbe43-logs\") pod \"placement-db-sync-4whss\" (UID: \"5c210d27-ca0b-4d51-b462-bc5adf4dbe43\") " pod="openstack/placement-db-sync-4whss" Jan 26 11:36:09 crc kubenswrapper[4867]: I0126 11:36:09.092798 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5c4e6125-8cf8-405e-a6ac-954e3fdd4b33-dns-svc\") pod \"dnsmasq-dns-6ffb94d8ff-9qkhh\" (UID: \"5c4e6125-8cf8-405e-a6ac-954e3fdd4b33\") " pod="openstack/dnsmasq-dns-6ffb94d8ff-9qkhh" Jan 26 11:36:09 crc kubenswrapper[4867]: I0126 11:36:09.093636 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5c4e6125-8cf8-405e-a6ac-954e3fdd4b33-dns-svc\") pod \"dnsmasq-dns-6ffb94d8ff-9qkhh\" (UID: \"5c4e6125-8cf8-405e-a6ac-954e3fdd4b33\") " pod="openstack/dnsmasq-dns-6ffb94d8ff-9qkhh" Jan 26 11:36:09 crc kubenswrapper[4867]: I0126 11:36:09.094092 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5c4e6125-8cf8-405e-a6ac-954e3fdd4b33-ovsdbserver-sb\") pod \"dnsmasq-dns-6ffb94d8ff-9qkhh\" (UID: \"5c4e6125-8cf8-405e-a6ac-954e3fdd4b33\") " pod="openstack/dnsmasq-dns-6ffb94d8ff-9qkhh" Jan 26 11:36:09 crc kubenswrapper[4867]: I0126 11:36:09.094275 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5c4e6125-8cf8-405e-a6ac-954e3fdd4b33-ovsdbserver-nb\") pod \"dnsmasq-dns-6ffb94d8ff-9qkhh\" (UID: \"5c4e6125-8cf8-405e-a6ac-954e3fdd4b33\") " pod="openstack/dnsmasq-dns-6ffb94d8ff-9qkhh" Jan 26 11:36:09 crc kubenswrapper[4867]: I0126 11:36:09.094300 4867 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5c4e6125-8cf8-405e-a6ac-954e3fdd4b33-config\") pod \"dnsmasq-dns-6ffb94d8ff-9qkhh\" (UID: \"5c4e6125-8cf8-405e-a6ac-954e3fdd4b33\") " pod="openstack/dnsmasq-dns-6ffb94d8ff-9qkhh" Jan 26 11:36:09 crc kubenswrapper[4867]: I0126 11:36:09.094536 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5c210d27-ca0b-4d51-b462-bc5adf4dbe43-logs\") pod \"placement-db-sync-4whss\" (UID: \"5c210d27-ca0b-4d51-b462-bc5adf4dbe43\") " pod="openstack/placement-db-sync-4whss" Jan 26 11:36:09 crc kubenswrapper[4867]: I0126 11:36:09.096824 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5c210d27-ca0b-4d51-b462-bc5adf4dbe43-scripts\") pod \"placement-db-sync-4whss\" (UID: \"5c210d27-ca0b-4d51-b462-bc5adf4dbe43\") " pod="openstack/placement-db-sync-4whss" Jan 26 11:36:09 crc kubenswrapper[4867]: I0126 11:36:09.097091 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/75e847de-1c0c-4ac3-b7ff-c41bfa7a6534-config\") pod \"neutron-db-sync-m72k2\" (UID: \"75e847de-1c0c-4ac3-b7ff-c41bfa7a6534\") " pod="openstack/neutron-db-sync-m72k2" Jan 26 11:36:09 crc kubenswrapper[4867]: I0126 11:36:09.097198 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/75e847de-1c0c-4ac3-b7ff-c41bfa7a6534-combined-ca-bundle\") pod \"neutron-db-sync-m72k2\" (UID: \"75e847de-1c0c-4ac3-b7ff-c41bfa7a6534\") " pod="openstack/neutron-db-sync-m72k2" Jan 26 11:36:09 crc kubenswrapper[4867]: I0126 11:36:09.100721 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5c210d27-ca0b-4d51-b462-bc5adf4dbe43-config-data\") pod \"placement-db-sync-4whss\" (UID: 
\"5c210d27-ca0b-4d51-b462-bc5adf4dbe43\") " pod="openstack/placement-db-sync-4whss" Jan 26 11:36:09 crc kubenswrapper[4867]: I0126 11:36:09.102939 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5c210d27-ca0b-4d51-b462-bc5adf4dbe43-combined-ca-bundle\") pod \"placement-db-sync-4whss\" (UID: \"5c210d27-ca0b-4d51-b462-bc5adf4dbe43\") " pod="openstack/placement-db-sync-4whss" Jan 26 11:36:09 crc kubenswrapper[4867]: I0126 11:36:09.109415 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-v7xht" Jan 26 11:36:09 crc kubenswrapper[4867]: I0126 11:36:09.110630 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fsd9z\" (UniqueName: \"kubernetes.io/projected/5c210d27-ca0b-4d51-b462-bc5adf4dbe43-kube-api-access-fsd9z\") pod \"placement-db-sync-4whss\" (UID: \"5c210d27-ca0b-4d51-b462-bc5adf4dbe43\") " pod="openstack/placement-db-sync-4whss" Jan 26 11:36:09 crc kubenswrapper[4867]: I0126 11:36:09.110880 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gjpxr\" (UniqueName: \"kubernetes.io/projected/5c4e6125-8cf8-405e-a6ac-954e3fdd4b33-kube-api-access-gjpxr\") pod \"dnsmasq-dns-6ffb94d8ff-9qkhh\" (UID: \"5c4e6125-8cf8-405e-a6ac-954e3fdd4b33\") " pod="openstack/dnsmasq-dns-6ffb94d8ff-9qkhh" Jan 26 11:36:09 crc kubenswrapper[4867]: I0126 11:36:09.111315 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x8v9n\" (UniqueName: \"kubernetes.io/projected/75e847de-1c0c-4ac3-b7ff-c41bfa7a6534-kube-api-access-x8v9n\") pod \"neutron-db-sync-m72k2\" (UID: \"75e847de-1c0c-4ac3-b7ff-c41bfa7a6534\") " pod="openstack/neutron-db-sync-m72k2" Jan 26 11:36:09 crc kubenswrapper[4867]: I0126 11:36:09.117431 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-sync-m72k2" Jan 26 11:36:09 crc kubenswrapper[4867]: I0126 11:36:09.140525 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-4whss" Jan 26 11:36:09 crc kubenswrapper[4867]: I0126 11:36:09.181868 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6ffb94d8ff-9qkhh" Jan 26 11:36:09 crc kubenswrapper[4867]: I0126 11:36:09.309260 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-l4jjx" Jan 26 11:36:10 crc kubenswrapper[4867]: I0126 11:36:10.230339 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5c9d85d47c-wmtrd"] Jan 26 11:36:10 crc kubenswrapper[4867]: I0126 11:36:10.288039 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 26 11:36:10 crc kubenswrapper[4867]: I0126 11:36:10.352973 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-4whss"] Jan 26 11:36:10 crc kubenswrapper[4867]: I0126 11:36:10.492594 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-qqwcf"] Jan 26 11:36:10 crc kubenswrapper[4867]: I0126 11:36:10.634411 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-v7xht"] Jan 26 11:36:10 crc kubenswrapper[4867]: I0126 11:36:10.665287 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-2kgmw"] Jan 26 11:36:10 crc kubenswrapper[4867]: I0126 11:36:10.673583 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-l4jjx"] Jan 26 11:36:10 crc kubenswrapper[4867]: I0126 11:36:10.686851 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6ffb94d8ff-9qkhh"] Jan 26 11:36:10 crc kubenswrapper[4867]: I0126 11:36:10.793548 4867 kubelet.go:2428] "SyncLoop UPDATE" 
source="api" pods=["openstack/ironic-db-create-n8k7h"] Jan 26 11:36:10 crc kubenswrapper[4867]: I0126 11:36:10.802673 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 26 11:36:10 crc kubenswrapper[4867]: I0126 11:36:10.811111 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ironic-de23-account-create-update-tsmbv"] Jan 26 11:36:10 crc kubenswrapper[4867]: I0126 11:36:10.819671 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-m72k2"] Jan 26 11:36:10 crc kubenswrapper[4867]: W0126 11:36:10.855041 4867 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod75e847de_1c0c_4ac3_b7ff_c41bfa7a6534.slice/crio-7c047cbe90862887dd0cf6ad99db2b617c5afb37325fd02e965918a054e48ea3 WatchSource:0}: Error finding container 7c047cbe90862887dd0cf6ad99db2b617c5afb37325fd02e965918a054e48ea3: Status 404 returned error can't find the container with id 7c047cbe90862887dd0cf6ad99db2b617c5afb37325fd02e965918a054e48ea3 Jan 26 11:36:10 crc kubenswrapper[4867]: I0126 11:36:10.968317 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c9d85d47c-wmtrd" event={"ID":"71f33dd3-8154-43e2-8047-fdb18dce9330","Type":"ContainerStarted","Data":"01ba5a3eec7302af36474e8cfa115b7ad2340462a378d510bb23416f9938d0e5"} Jan 26 11:36:10 crc kubenswrapper[4867]: I0126 11:36:10.969474 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-l4jjx" event={"ID":"829cd764-d506-45fb-a1d6-d45504d0b20c","Type":"ContainerStarted","Data":"4b14050e0abb2cb2b81b94eeece747a21e4f2921c2fe48193d0cfc204601caca"} Jan 26 11:36:10 crc kubenswrapper[4867]: I0126 11:36:10.970474 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6ffb94d8ff-9qkhh" 
event={"ID":"5c4e6125-8cf8-405e-a6ac-954e3fdd4b33","Type":"ContainerStarted","Data":"2d2e9f4125c8e963df057d192c5da6c77d3203825db2be23dadf66ca5afef593"} Jan 26 11:36:10 crc kubenswrapper[4867]: I0126 11:36:10.971505 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-qqwcf" event={"ID":"493bdea3-8931-4319-a465-16c9a4329881","Type":"ContainerStarted","Data":"bf4d7f7730270b339e4d2de8bf88ad3f315cb80064e5bb4db2d470d5e3c5d187"} Jan 26 11:36:10 crc kubenswrapper[4867]: I0126 11:36:10.972475 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-db-create-n8k7h" event={"ID":"e68c9cf2-da73-41d8-bc9a-5fd8df2c1ceb","Type":"ContainerStarted","Data":"9269840660b074e6bc009f9a71cc652a52869b07aa10e2ce7ed006596dac48cd"} Jan 26 11:36:10 crc kubenswrapper[4867]: I0126 11:36:10.974488 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-de23-account-create-update-tsmbv" event={"ID":"933e31ea-ff1a-4883-a82a-92893ca7d7b0","Type":"ContainerStarted","Data":"b6498e038c27dc088ecf15fbb35c559d144a400b9d7dd489b34de9fae1f3e558"} Jan 26 11:36:10 crc kubenswrapper[4867]: I0126 11:36:10.982728 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b588da78-7e07-438f-9612-e600ca38ab04","Type":"ContainerStarted","Data":"5367bd45b9f46b1df8afeeb7840ac87b6e9809c72f79068cafbb0685a0e544ba"} Jan 26 11:36:10 crc kubenswrapper[4867]: I0126 11:36:10.983910 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-v7xht" event={"ID":"0ee786d6-3c88-4374-a028-3a3c83b30fec","Type":"ContainerStarted","Data":"886e9c3d275ab2482f23c8480433a09edc62f94324e6a7822512887f1f748c50"} Jan 26 11:36:10 crc kubenswrapper[4867]: I0126 11:36:10.985161 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-m72k2" 
event={"ID":"75e847de-1c0c-4ac3-b7ff-c41bfa7a6534","Type":"ContainerStarted","Data":"7c047cbe90862887dd0cf6ad99db2b617c5afb37325fd02e965918a054e48ea3"} Jan 26 11:36:10 crc kubenswrapper[4867]: I0126 11:36:10.987990 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-4whss" event={"ID":"5c210d27-ca0b-4d51-b462-bc5adf4dbe43","Type":"ContainerStarted","Data":"07f01e4a959767905976a1905050cb7e13d80b149ea5f85aa89ae19ed1d436a7"} Jan 26 11:36:10 crc kubenswrapper[4867]: I0126 11:36:10.989505 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-2kgmw" event={"ID":"d28fe2ce-f40e-4f37-9d27-57d14376fc5d","Type":"ContainerStarted","Data":"4662d7de3602cec63da0cc992541f679d5537213f3fd5d5269e05c6727c601c2"} Jan 26 11:36:12 crc kubenswrapper[4867]: I0126 11:36:12.026593 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"3f128154-6619-4556-be1b-73e44d4f7df1","Type":"ContainerStarted","Data":"05c7b9d6b1ca82cd4b289ef50e767fa5e90e7c36e79ae510fb9e2ca7208e3860"} Jan 26 11:36:12 crc kubenswrapper[4867]: I0126 11:36:12.026899 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"3f128154-6619-4556-be1b-73e44d4f7df1","Type":"ContainerStarted","Data":"ef078b5407c3e3d5db520188374d3325984d6af8c7277f677bf71e62a5d602b6"} Jan 26 11:36:12 crc kubenswrapper[4867]: I0126 11:36:12.029704 4867 generic.go:334] "Generic (PLEG): container finished" podID="829cd764-d506-45fb-a1d6-d45504d0b20c" containerID="ed7ca6535de4bc6499acd3d01a20485ae06c236f4390eadaedc3a070ccae33a0" exitCode=0 Jan 26 11:36:12 crc kubenswrapper[4867]: I0126 11:36:12.029771 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-l4jjx" event={"ID":"829cd764-d506-45fb-a1d6-d45504d0b20c","Type":"ContainerDied","Data":"ed7ca6535de4bc6499acd3d01a20485ae06c236f4390eadaedc3a070ccae33a0"} Jan 26 11:36:12 crc kubenswrapper[4867]: I0126 
11:36:12.034529 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-qqwcf" event={"ID":"493bdea3-8931-4319-a465-16c9a4329881","Type":"ContainerStarted","Data":"b4d3a8a212993515d3f2e795ce5bc67aab092824e8fb8d7763afdd105bfa535d"} Jan 26 11:36:12 crc kubenswrapper[4867]: I0126 11:36:12.039332 4867 generic.go:334] "Generic (PLEG): container finished" podID="5c4e6125-8cf8-405e-a6ac-954e3fdd4b33" containerID="cd01462e889fbb7a7f90b811f07f418d147121a745c568b3de4b8778ad6cafe6" exitCode=0 Jan 26 11:36:12 crc kubenswrapper[4867]: I0126 11:36:12.039401 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6ffb94d8ff-9qkhh" event={"ID":"5c4e6125-8cf8-405e-a6ac-954e3fdd4b33","Type":"ContainerDied","Data":"cd01462e889fbb7a7f90b811f07f418d147121a745c568b3de4b8778ad6cafe6"} Jan 26 11:36:12 crc kubenswrapper[4867]: I0126 11:36:12.051680 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-m72k2" event={"ID":"75e847de-1c0c-4ac3-b7ff-c41bfa7a6534","Type":"ContainerStarted","Data":"09ce249d9ce50194ab4edc8047525c8a71645522228c8d402991237724e02ed1"} Jan 26 11:36:12 crc kubenswrapper[4867]: I0126 11:36:12.057477 4867 generic.go:334] "Generic (PLEG): container finished" podID="e68c9cf2-da73-41d8-bc9a-5fd8df2c1ceb" containerID="0fd2f7fc2036f615e2928d0ebb3b35d09523786b8b0ae31c25c5b1dfb29b3981" exitCode=0 Jan 26 11:36:12 crc kubenswrapper[4867]: I0126 11:36:12.057499 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-db-create-n8k7h" event={"ID":"e68c9cf2-da73-41d8-bc9a-5fd8df2c1ceb","Type":"ContainerDied","Data":"0fd2f7fc2036f615e2928d0ebb3b35d09523786b8b0ae31c25c5b1dfb29b3981"} Jan 26 11:36:12 crc kubenswrapper[4867]: I0126 11:36:12.060978 4867 generic.go:334] "Generic (PLEG): container finished" podID="71f33dd3-8154-43e2-8047-fdb18dce9330" containerID="09b7d7c6c3439e0fbfbc073cd01356af0e65e87c2fbb853aeecd230f300963f0" exitCode=0 Jan 26 11:36:12 crc kubenswrapper[4867]: 
I0126 11:36:12.061156 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c9d85d47c-wmtrd" event={"ID":"71f33dd3-8154-43e2-8047-fdb18dce9330","Type":"ContainerDied","Data":"09b7d7c6c3439e0fbfbc073cd01356af0e65e87c2fbb853aeecd230f300963f0"} Jan 26 11:36:12 crc kubenswrapper[4867]: I0126 11:36:12.066444 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-de23-account-create-update-tsmbv" event={"ID":"933e31ea-ff1a-4883-a82a-92893ca7d7b0","Type":"ContainerStarted","Data":"5bb175e7fb2a6d045398e1893e32decd8e178b7615ca1824a71b342b08fbd2a2"} Jan 26 11:36:12 crc kubenswrapper[4867]: I0126 11:36:12.081654 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-qqwcf" podStartSLOduration=4.081635705 podStartE2EDuration="4.081635705s" podCreationTimestamp="2026-01-26 11:36:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 11:36:12.074649319 +0000 UTC m=+1121.773224229" watchObservedRunningTime="2026-01-26 11:36:12.081635705 +0000 UTC m=+1121.780210615" Jan 26 11:36:12 crc kubenswrapper[4867]: I0126 11:36:12.212783 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-db-sync-m72k2" podStartSLOduration=4.212767977 podStartE2EDuration="4.212767977s" podCreationTimestamp="2026-01-26 11:36:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 11:36:12.20726865 +0000 UTC m=+1121.905843560" watchObservedRunningTime="2026-01-26 11:36:12.212767977 +0000 UTC m=+1121.911342887" Jan 26 11:36:12 crc kubenswrapper[4867]: I0126 11:36:12.534972 4867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5c9d85d47c-wmtrd" Jan 26 11:36:12 crc kubenswrapper[4867]: I0126 11:36:12.679408 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bmlg7\" (UniqueName: \"kubernetes.io/projected/71f33dd3-8154-43e2-8047-fdb18dce9330-kube-api-access-bmlg7\") pod \"71f33dd3-8154-43e2-8047-fdb18dce9330\" (UID: \"71f33dd3-8154-43e2-8047-fdb18dce9330\") " Jan 26 11:36:12 crc kubenswrapper[4867]: I0126 11:36:12.679627 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/71f33dd3-8154-43e2-8047-fdb18dce9330-ovsdbserver-sb\") pod \"71f33dd3-8154-43e2-8047-fdb18dce9330\" (UID: \"71f33dd3-8154-43e2-8047-fdb18dce9330\") " Jan 26 11:36:12 crc kubenswrapper[4867]: I0126 11:36:12.679688 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/71f33dd3-8154-43e2-8047-fdb18dce9330-dns-svc\") pod \"71f33dd3-8154-43e2-8047-fdb18dce9330\" (UID: \"71f33dd3-8154-43e2-8047-fdb18dce9330\") " Jan 26 11:36:12 crc kubenswrapper[4867]: I0126 11:36:12.679744 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/71f33dd3-8154-43e2-8047-fdb18dce9330-config\") pod \"71f33dd3-8154-43e2-8047-fdb18dce9330\" (UID: \"71f33dd3-8154-43e2-8047-fdb18dce9330\") " Jan 26 11:36:12 crc kubenswrapper[4867]: I0126 11:36:12.679794 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/71f33dd3-8154-43e2-8047-fdb18dce9330-ovsdbserver-nb\") pod \"71f33dd3-8154-43e2-8047-fdb18dce9330\" (UID: \"71f33dd3-8154-43e2-8047-fdb18dce9330\") " Jan 26 11:36:12 crc kubenswrapper[4867]: I0126 11:36:12.710322 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/configmap/71f33dd3-8154-43e2-8047-fdb18dce9330-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "71f33dd3-8154-43e2-8047-fdb18dce9330" (UID: "71f33dd3-8154-43e2-8047-fdb18dce9330"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 11:36:12 crc kubenswrapper[4867]: I0126 11:36:12.710961 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/71f33dd3-8154-43e2-8047-fdb18dce9330-kube-api-access-bmlg7" (OuterVolumeSpecName: "kube-api-access-bmlg7") pod "71f33dd3-8154-43e2-8047-fdb18dce9330" (UID: "71f33dd3-8154-43e2-8047-fdb18dce9330"). InnerVolumeSpecName "kube-api-access-bmlg7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 11:36:12 crc kubenswrapper[4867]: I0126 11:36:12.722365 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/71f33dd3-8154-43e2-8047-fdb18dce9330-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "71f33dd3-8154-43e2-8047-fdb18dce9330" (UID: "71f33dd3-8154-43e2-8047-fdb18dce9330"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 11:36:12 crc kubenswrapper[4867]: I0126 11:36:12.724866 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/71f33dd3-8154-43e2-8047-fdb18dce9330-config" (OuterVolumeSpecName: "config") pod "71f33dd3-8154-43e2-8047-fdb18dce9330" (UID: "71f33dd3-8154-43e2-8047-fdb18dce9330"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 11:36:12 crc kubenswrapper[4867]: I0126 11:36:12.725652 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/71f33dd3-8154-43e2-8047-fdb18dce9330-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "71f33dd3-8154-43e2-8047-fdb18dce9330" (UID: "71f33dd3-8154-43e2-8047-fdb18dce9330"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 11:36:12 crc kubenswrapper[4867]: I0126 11:36:12.783206 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bmlg7\" (UniqueName: \"kubernetes.io/projected/71f33dd3-8154-43e2-8047-fdb18dce9330-kube-api-access-bmlg7\") on node \"crc\" DevicePath \"\"" Jan 26 11:36:12 crc kubenswrapper[4867]: I0126 11:36:12.783294 4867 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/71f33dd3-8154-43e2-8047-fdb18dce9330-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 26 11:36:12 crc kubenswrapper[4867]: I0126 11:36:12.783312 4867 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/71f33dd3-8154-43e2-8047-fdb18dce9330-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 26 11:36:12 crc kubenswrapper[4867]: I0126 11:36:12.783325 4867 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/71f33dd3-8154-43e2-8047-fdb18dce9330-config\") on node \"crc\" DevicePath \"\"" Jan 26 11:36:12 crc kubenswrapper[4867]: I0126 11:36:12.783339 4867 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/71f33dd3-8154-43e2-8047-fdb18dce9330-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 26 11:36:13 crc kubenswrapper[4867]: I0126 11:36:13.079671 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6ffb94d8ff-9qkhh" event={"ID":"5c4e6125-8cf8-405e-a6ac-954e3fdd4b33","Type":"ContainerStarted","Data":"792662b1849d4a3a5100709caf1de5564e40e75da949cecec6e4c47f6fd20a9a"} Jan 26 11:36:13 crc kubenswrapper[4867]: I0126 11:36:13.079868 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-6ffb94d8ff-9qkhh" Jan 26 11:36:13 crc kubenswrapper[4867]: I0126 11:36:13.085605 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/dnsmasq-dns-5c9d85d47c-wmtrd" event={"ID":"71f33dd3-8154-43e2-8047-fdb18dce9330","Type":"ContainerDied","Data":"01ba5a3eec7302af36474e8cfa115b7ad2340462a378d510bb23416f9938d0e5"} Jan 26 11:36:13 crc kubenswrapper[4867]: I0126 11:36:13.085655 4867 scope.go:117] "RemoveContainer" containerID="09b7d7c6c3439e0fbfbc073cd01356af0e65e87c2fbb853aeecd230f300963f0" Jan 26 11:36:13 crc kubenswrapper[4867]: I0126 11:36:13.085792 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5c9d85d47c-wmtrd" Jan 26 11:36:13 crc kubenswrapper[4867]: I0126 11:36:13.110556 4867 generic.go:334] "Generic (PLEG): container finished" podID="933e31ea-ff1a-4883-a82a-92893ca7d7b0" containerID="5bb175e7fb2a6d045398e1893e32decd8e178b7615ca1824a71b342b08fbd2a2" exitCode=0 Jan 26 11:36:13 crc kubenswrapper[4867]: I0126 11:36:13.110641 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-de23-account-create-update-tsmbv" event={"ID":"933e31ea-ff1a-4883-a82a-92893ca7d7b0","Type":"ContainerDied","Data":"5bb175e7fb2a6d045398e1893e32decd8e178b7615ca1824a71b342b08fbd2a2"} Jan 26 11:36:13 crc kubenswrapper[4867]: I0126 11:36:13.113396 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-6ffb94d8ff-9qkhh" podStartSLOduration=5.113377389 podStartE2EDuration="5.113377389s" podCreationTimestamp="2026-01-26 11:36:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 11:36:13.106117845 +0000 UTC m=+1122.804692765" watchObservedRunningTime="2026-01-26 11:36:13.113377389 +0000 UTC m=+1122.811952299" Jan 26 11:36:13 crc kubenswrapper[4867]: I0126 11:36:13.126089 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" 
event={"ID":"3f128154-6619-4556-be1b-73e44d4f7df1","Type":"ContainerStarted","Data":"231b083ce629e2ea85c5f991b4d0d430f39a180e47a387ebe72a066d02593d31"} Jan 26 11:36:13 crc kubenswrapper[4867]: I0126 11:36:13.181509 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-storage-0" podStartSLOduration=48.283845743 podStartE2EDuration="1m1.181490348s" podCreationTimestamp="2026-01-26 11:35:12 +0000 UTC" firstStartedPulling="2026-01-26 11:35:53.244356096 +0000 UTC m=+1102.942931006" lastFinishedPulling="2026-01-26 11:36:06.142000701 +0000 UTC m=+1115.840575611" observedRunningTime="2026-01-26 11:36:13.166104347 +0000 UTC m=+1122.864679257" watchObservedRunningTime="2026-01-26 11:36:13.181490348 +0000 UTC m=+1122.880065258" Jan 26 11:36:13 crc kubenswrapper[4867]: I0126 11:36:13.279672 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5c9d85d47c-wmtrd"] Jan 26 11:36:13 crc kubenswrapper[4867]: I0126 11:36:13.300899 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5c9d85d47c-wmtrd"] Jan 26 11:36:13 crc kubenswrapper[4867]: I0126 11:36:13.547762 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6ffb94d8ff-9qkhh"] Jan 26 11:36:13 crc kubenswrapper[4867]: I0126 11:36:13.576068 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-cf78879c9-mfszq"] Jan 26 11:36:13 crc kubenswrapper[4867]: E0126 11:36:13.576632 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="71f33dd3-8154-43e2-8047-fdb18dce9330" containerName="init" Jan 26 11:36:13 crc kubenswrapper[4867]: I0126 11:36:13.576660 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="71f33dd3-8154-43e2-8047-fdb18dce9330" containerName="init" Jan 26 11:36:13 crc kubenswrapper[4867]: I0126 11:36:13.576864 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="71f33dd3-8154-43e2-8047-fdb18dce9330" containerName="init" Jan 26 
11:36:13 crc kubenswrapper[4867]: I0126 11:36:13.578012 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-cf78879c9-mfszq" Jan 26 11:36:13 crc kubenswrapper[4867]: I0126 11:36:13.581266 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-swift-storage-0" Jan 26 11:36:13 crc kubenswrapper[4867]: I0126 11:36:13.599811 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-cf78879c9-mfszq"] Jan 26 11:36:13 crc kubenswrapper[4867]: I0126 11:36:13.759155 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8hdzb\" (UniqueName: \"kubernetes.io/projected/d4cc6b95-bfa3-4814-ba91-92a2ffac1ce3-kube-api-access-8hdzb\") pod \"dnsmasq-dns-cf78879c9-mfszq\" (UID: \"d4cc6b95-bfa3-4814-ba91-92a2ffac1ce3\") " pod="openstack/dnsmasq-dns-cf78879c9-mfszq" Jan 26 11:36:13 crc kubenswrapper[4867]: I0126 11:36:13.759197 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d4cc6b95-bfa3-4814-ba91-92a2ffac1ce3-dns-svc\") pod \"dnsmasq-dns-cf78879c9-mfszq\" (UID: \"d4cc6b95-bfa3-4814-ba91-92a2ffac1ce3\") " pod="openstack/dnsmasq-dns-cf78879c9-mfszq" Jan 26 11:36:13 crc kubenswrapper[4867]: I0126 11:36:13.759236 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d4cc6b95-bfa3-4814-ba91-92a2ffac1ce3-ovsdbserver-nb\") pod \"dnsmasq-dns-cf78879c9-mfszq\" (UID: \"d4cc6b95-bfa3-4814-ba91-92a2ffac1ce3\") " pod="openstack/dnsmasq-dns-cf78879c9-mfszq" Jan 26 11:36:13 crc kubenswrapper[4867]: I0126 11:36:13.759267 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d4cc6b95-bfa3-4814-ba91-92a2ffac1ce3-config\") pod 
\"dnsmasq-dns-cf78879c9-mfszq\" (UID: \"d4cc6b95-bfa3-4814-ba91-92a2ffac1ce3\") " pod="openstack/dnsmasq-dns-cf78879c9-mfszq" Jan 26 11:36:13 crc kubenswrapper[4867]: I0126 11:36:13.759302 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d4cc6b95-bfa3-4814-ba91-92a2ffac1ce3-ovsdbserver-sb\") pod \"dnsmasq-dns-cf78879c9-mfszq\" (UID: \"d4cc6b95-bfa3-4814-ba91-92a2ffac1ce3\") " pod="openstack/dnsmasq-dns-cf78879c9-mfszq" Jan 26 11:36:13 crc kubenswrapper[4867]: I0126 11:36:13.759369 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d4cc6b95-bfa3-4814-ba91-92a2ffac1ce3-dns-swift-storage-0\") pod \"dnsmasq-dns-cf78879c9-mfszq\" (UID: \"d4cc6b95-bfa3-4814-ba91-92a2ffac1ce3\") " pod="openstack/dnsmasq-dns-cf78879c9-mfszq" Jan 26 11:36:13 crc kubenswrapper[4867]: I0126 11:36:13.822411 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ironic-de23-account-create-update-tsmbv" Jan 26 11:36:13 crc kubenswrapper[4867]: I0126 11:36:13.856455 4867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-l4jjx" Jan 26 11:36:13 crc kubenswrapper[4867]: I0126 11:36:13.860946 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d4cc6b95-bfa3-4814-ba91-92a2ffac1ce3-ovsdbserver-sb\") pod \"dnsmasq-dns-cf78879c9-mfszq\" (UID: \"d4cc6b95-bfa3-4814-ba91-92a2ffac1ce3\") " pod="openstack/dnsmasq-dns-cf78879c9-mfszq" Jan 26 11:36:13 crc kubenswrapper[4867]: I0126 11:36:13.861033 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d4cc6b95-bfa3-4814-ba91-92a2ffac1ce3-dns-swift-storage-0\") pod \"dnsmasq-dns-cf78879c9-mfszq\" (UID: \"d4cc6b95-bfa3-4814-ba91-92a2ffac1ce3\") " pod="openstack/dnsmasq-dns-cf78879c9-mfszq" Jan 26 11:36:13 crc kubenswrapper[4867]: I0126 11:36:13.861375 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8hdzb\" (UniqueName: \"kubernetes.io/projected/d4cc6b95-bfa3-4814-ba91-92a2ffac1ce3-kube-api-access-8hdzb\") pod \"dnsmasq-dns-cf78879c9-mfszq\" (UID: \"d4cc6b95-bfa3-4814-ba91-92a2ffac1ce3\") " pod="openstack/dnsmasq-dns-cf78879c9-mfszq" Jan 26 11:36:13 crc kubenswrapper[4867]: I0126 11:36:13.861404 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d4cc6b95-bfa3-4814-ba91-92a2ffac1ce3-dns-svc\") pod \"dnsmasq-dns-cf78879c9-mfszq\" (UID: \"d4cc6b95-bfa3-4814-ba91-92a2ffac1ce3\") " pod="openstack/dnsmasq-dns-cf78879c9-mfszq" Jan 26 11:36:13 crc kubenswrapper[4867]: I0126 11:36:13.861434 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d4cc6b95-bfa3-4814-ba91-92a2ffac1ce3-ovsdbserver-nb\") pod \"dnsmasq-dns-cf78879c9-mfszq\" (UID: \"d4cc6b95-bfa3-4814-ba91-92a2ffac1ce3\") " 
pod="openstack/dnsmasq-dns-cf78879c9-mfszq" Jan 26 11:36:13 crc kubenswrapper[4867]: I0126 11:36:13.861468 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d4cc6b95-bfa3-4814-ba91-92a2ffac1ce3-config\") pod \"dnsmasq-dns-cf78879c9-mfszq\" (UID: \"d4cc6b95-bfa3-4814-ba91-92a2ffac1ce3\") " pod="openstack/dnsmasq-dns-cf78879c9-mfszq" Jan 26 11:36:13 crc kubenswrapper[4867]: I0126 11:36:13.862456 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d4cc6b95-bfa3-4814-ba91-92a2ffac1ce3-config\") pod \"dnsmasq-dns-cf78879c9-mfszq\" (UID: \"d4cc6b95-bfa3-4814-ba91-92a2ffac1ce3\") " pod="openstack/dnsmasq-dns-cf78879c9-mfszq" Jan 26 11:36:13 crc kubenswrapper[4867]: I0126 11:36:13.863373 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d4cc6b95-bfa3-4814-ba91-92a2ffac1ce3-ovsdbserver-sb\") pod \"dnsmasq-dns-cf78879c9-mfszq\" (UID: \"d4cc6b95-bfa3-4814-ba91-92a2ffac1ce3\") " pod="openstack/dnsmasq-dns-cf78879c9-mfszq" Jan 26 11:36:13 crc kubenswrapper[4867]: I0126 11:36:13.864087 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d4cc6b95-bfa3-4814-ba91-92a2ffac1ce3-dns-swift-storage-0\") pod \"dnsmasq-dns-cf78879c9-mfszq\" (UID: \"d4cc6b95-bfa3-4814-ba91-92a2ffac1ce3\") " pod="openstack/dnsmasq-dns-cf78879c9-mfszq" Jan 26 11:36:13 crc kubenswrapper[4867]: I0126 11:36:13.865106 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d4cc6b95-bfa3-4814-ba91-92a2ffac1ce3-dns-svc\") pod \"dnsmasq-dns-cf78879c9-mfszq\" (UID: \"d4cc6b95-bfa3-4814-ba91-92a2ffac1ce3\") " pod="openstack/dnsmasq-dns-cf78879c9-mfszq" Jan 26 11:36:13 crc kubenswrapper[4867]: I0126 11:36:13.865770 4867 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d4cc6b95-bfa3-4814-ba91-92a2ffac1ce3-ovsdbserver-nb\") pod \"dnsmasq-dns-cf78879c9-mfszq\" (UID: \"d4cc6b95-bfa3-4814-ba91-92a2ffac1ce3\") " pod="openstack/dnsmasq-dns-cf78879c9-mfszq" Jan 26 11:36:13 crc kubenswrapper[4867]: I0126 11:36:13.867406 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ironic-db-create-n8k7h" Jan 26 11:36:13 crc kubenswrapper[4867]: I0126 11:36:13.891325 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8hdzb\" (UniqueName: \"kubernetes.io/projected/d4cc6b95-bfa3-4814-ba91-92a2ffac1ce3-kube-api-access-8hdzb\") pod \"dnsmasq-dns-cf78879c9-mfszq\" (UID: \"d4cc6b95-bfa3-4814-ba91-92a2ffac1ce3\") " pod="openstack/dnsmasq-dns-cf78879c9-mfszq" Jan 26 11:36:13 crc kubenswrapper[4867]: I0126 11:36:13.935675 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-cf78879c9-mfszq" Jan 26 11:36:13 crc kubenswrapper[4867]: I0126 11:36:13.962830 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/933e31ea-ff1a-4883-a82a-92893ca7d7b0-operator-scripts\") pod \"933e31ea-ff1a-4883-a82a-92893ca7d7b0\" (UID: \"933e31ea-ff1a-4883-a82a-92893ca7d7b0\") " Jan 26 11:36:13 crc kubenswrapper[4867]: I0126 11:36:13.962886 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e68c9cf2-da73-41d8-bc9a-5fd8df2c1ceb-operator-scripts\") pod \"e68c9cf2-da73-41d8-bc9a-5fd8df2c1ceb\" (UID: \"e68c9cf2-da73-41d8-bc9a-5fd8df2c1ceb\") " Jan 26 11:36:13 crc kubenswrapper[4867]: I0126 11:36:13.962923 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/829cd764-d506-45fb-a1d6-d45504d0b20c-operator-scripts\") pod \"829cd764-d506-45fb-a1d6-d45504d0b20c\" (UID: \"829cd764-d506-45fb-a1d6-d45504d0b20c\") " Jan 26 11:36:13 crc kubenswrapper[4867]: I0126 11:36:13.962952 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-drgjd\" (UniqueName: \"kubernetes.io/projected/933e31ea-ff1a-4883-a82a-92893ca7d7b0-kube-api-access-drgjd\") pod \"933e31ea-ff1a-4883-a82a-92893ca7d7b0\" (UID: \"933e31ea-ff1a-4883-a82a-92893ca7d7b0\") " Jan 26 11:36:13 crc kubenswrapper[4867]: I0126 11:36:13.963263 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rx8d5\" (UniqueName: \"kubernetes.io/projected/829cd764-d506-45fb-a1d6-d45504d0b20c-kube-api-access-rx8d5\") pod \"829cd764-d506-45fb-a1d6-d45504d0b20c\" (UID: \"829cd764-d506-45fb-a1d6-d45504d0b20c\") " Jan 26 11:36:13 crc kubenswrapper[4867]: I0126 11:36:13.963386 4867 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"kube-api-access-jfdnd\" (UniqueName: \"kubernetes.io/projected/e68c9cf2-da73-41d8-bc9a-5fd8df2c1ceb-kube-api-access-jfdnd\") pod \"e68c9cf2-da73-41d8-bc9a-5fd8df2c1ceb\" (UID: \"e68c9cf2-da73-41d8-bc9a-5fd8df2c1ceb\") " Jan 26 11:36:13 crc kubenswrapper[4867]: I0126 11:36:13.963680 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/933e31ea-ff1a-4883-a82a-92893ca7d7b0-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "933e31ea-ff1a-4883-a82a-92893ca7d7b0" (UID: "933e31ea-ff1a-4883-a82a-92893ca7d7b0"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 11:36:13 crc kubenswrapper[4867]: I0126 11:36:13.964102 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/829cd764-d506-45fb-a1d6-d45504d0b20c-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "829cd764-d506-45fb-a1d6-d45504d0b20c" (UID: "829cd764-d506-45fb-a1d6-d45504d0b20c"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 11:36:13 crc kubenswrapper[4867]: I0126 11:36:13.964524 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e68c9cf2-da73-41d8-bc9a-5fd8df2c1ceb-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "e68c9cf2-da73-41d8-bc9a-5fd8df2c1ceb" (UID: "e68c9cf2-da73-41d8-bc9a-5fd8df2c1ceb"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 11:36:13 crc kubenswrapper[4867]: I0126 11:36:13.969470 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/829cd764-d506-45fb-a1d6-d45504d0b20c-kube-api-access-rx8d5" (OuterVolumeSpecName: "kube-api-access-rx8d5") pod "829cd764-d506-45fb-a1d6-d45504d0b20c" (UID: "829cd764-d506-45fb-a1d6-d45504d0b20c"). 
InnerVolumeSpecName "kube-api-access-rx8d5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 11:36:13 crc kubenswrapper[4867]: I0126 11:36:13.971400 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e68c9cf2-da73-41d8-bc9a-5fd8df2c1ceb-kube-api-access-jfdnd" (OuterVolumeSpecName: "kube-api-access-jfdnd") pod "e68c9cf2-da73-41d8-bc9a-5fd8df2c1ceb" (UID: "e68c9cf2-da73-41d8-bc9a-5fd8df2c1ceb"). InnerVolumeSpecName "kube-api-access-jfdnd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 11:36:13 crc kubenswrapper[4867]: I0126 11:36:13.971478 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/933e31ea-ff1a-4883-a82a-92893ca7d7b0-kube-api-access-drgjd" (OuterVolumeSpecName: "kube-api-access-drgjd") pod "933e31ea-ff1a-4883-a82a-92893ca7d7b0" (UID: "933e31ea-ff1a-4883-a82a-92893ca7d7b0"). InnerVolumeSpecName "kube-api-access-drgjd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 11:36:14 crc kubenswrapper[4867]: I0126 11:36:14.065137 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jfdnd\" (UniqueName: \"kubernetes.io/projected/e68c9cf2-da73-41d8-bc9a-5fd8df2c1ceb-kube-api-access-jfdnd\") on node \"crc\" DevicePath \"\"" Jan 26 11:36:14 crc kubenswrapper[4867]: I0126 11:36:14.065460 4867 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/933e31ea-ff1a-4883-a82a-92893ca7d7b0-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 11:36:14 crc kubenswrapper[4867]: I0126 11:36:14.065471 4867 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e68c9cf2-da73-41d8-bc9a-5fd8df2c1ceb-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 11:36:14 crc kubenswrapper[4867]: I0126 11:36:14.065481 4867 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" 
(UniqueName: \"kubernetes.io/configmap/829cd764-d506-45fb-a1d6-d45504d0b20c-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 11:36:14 crc kubenswrapper[4867]: I0126 11:36:14.065491 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-drgjd\" (UniqueName: \"kubernetes.io/projected/933e31ea-ff1a-4883-a82a-92893ca7d7b0-kube-api-access-drgjd\") on node \"crc\" DevicePath \"\"" Jan 26 11:36:14 crc kubenswrapper[4867]: I0126 11:36:14.065499 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rx8d5\" (UniqueName: \"kubernetes.io/projected/829cd764-d506-45fb-a1d6-d45504d0b20c-kube-api-access-rx8d5\") on node \"crc\" DevicePath \"\"" Jan 26 11:36:14 crc kubenswrapper[4867]: I0126 11:36:14.137267 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-db-create-n8k7h" event={"ID":"e68c9cf2-da73-41d8-bc9a-5fd8df2c1ceb","Type":"ContainerDied","Data":"9269840660b074e6bc009f9a71cc652a52869b07aa10e2ce7ed006596dac48cd"} Jan 26 11:36:14 crc kubenswrapper[4867]: I0126 11:36:14.137310 4867 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9269840660b074e6bc009f9a71cc652a52869b07aa10e2ce7ed006596dac48cd" Jan 26 11:36:14 crc kubenswrapper[4867]: I0126 11:36:14.137308 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ironic-db-create-n8k7h" Jan 26 11:36:14 crc kubenswrapper[4867]: I0126 11:36:14.140906 4867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ironic-de23-account-create-update-tsmbv" Jan 26 11:36:14 crc kubenswrapper[4867]: I0126 11:36:14.142783 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-de23-account-create-update-tsmbv" event={"ID":"933e31ea-ff1a-4883-a82a-92893ca7d7b0","Type":"ContainerDied","Data":"b6498e038c27dc088ecf15fbb35c559d144a400b9d7dd489b34de9fae1f3e558"} Jan 26 11:36:14 crc kubenswrapper[4867]: I0126 11:36:14.142820 4867 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b6498e038c27dc088ecf15fbb35c559d144a400b9d7dd489b34de9fae1f3e558" Jan 26 11:36:14 crc kubenswrapper[4867]: I0126 11:36:14.146303 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-l4jjx" Jan 26 11:36:14 crc kubenswrapper[4867]: I0126 11:36:14.149202 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-l4jjx" event={"ID":"829cd764-d506-45fb-a1d6-d45504d0b20c","Type":"ContainerDied","Data":"4b14050e0abb2cb2b81b94eeece747a21e4f2921c2fe48193d0cfc204601caca"} Jan 26 11:36:14 crc kubenswrapper[4867]: I0126 11:36:14.149247 4867 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4b14050e0abb2cb2b81b94eeece747a21e4f2921c2fe48193d0cfc204601caca" Jan 26 11:36:14 crc kubenswrapper[4867]: I0126 11:36:14.413490 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-cf78879c9-mfszq"] Jan 26 11:36:14 crc kubenswrapper[4867]: I0126 11:36:14.573992 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="71f33dd3-8154-43e2-8047-fdb18dce9330" path="/var/lib/kubelet/pods/71f33dd3-8154-43e2-8047-fdb18dce9330/volumes" Jan 26 11:36:15 crc kubenswrapper[4867]: I0126 11:36:15.160800 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-6ffb94d8ff-9qkhh" podUID="5c4e6125-8cf8-405e-a6ac-954e3fdd4b33" 
containerName="dnsmasq-dns" containerID="cri-o://792662b1849d4a3a5100709caf1de5564e40e75da949cecec6e4c47f6fd20a9a" gracePeriod=10 Jan 26 11:36:16 crc kubenswrapper[4867]: I0126 11:36:16.171883 4867 generic.go:334] "Generic (PLEG): container finished" podID="5c4e6125-8cf8-405e-a6ac-954e3fdd4b33" containerID="792662b1849d4a3a5100709caf1de5564e40e75da949cecec6e4c47f6fd20a9a" exitCode=0 Jan 26 11:36:16 crc kubenswrapper[4867]: I0126 11:36:16.171971 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6ffb94d8ff-9qkhh" event={"ID":"5c4e6125-8cf8-405e-a6ac-954e3fdd4b33","Type":"ContainerDied","Data":"792662b1849d4a3a5100709caf1de5564e40e75da949cecec6e4c47f6fd20a9a"} Jan 26 11:36:16 crc kubenswrapper[4867]: W0126 11:36:16.988403 4867 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd4cc6b95_bfa3_4814_ba91_92a2ffac1ce3.slice/crio-a8f70966986086bd2e630bc803b115f94046241488c5f390806a7862d72d5189 WatchSource:0}: Error finding container a8f70966986086bd2e630bc803b115f94046241488c5f390806a7862d72d5189: Status 404 returned error can't find the container with id a8f70966986086bd2e630bc803b115f94046241488c5f390806a7862d72d5189 Jan 26 11:36:17 crc kubenswrapper[4867]: I0126 11:36:17.215091 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-cf78879c9-mfszq" event={"ID":"d4cc6b95-bfa3-4814-ba91-92a2ffac1ce3","Type":"ContainerStarted","Data":"a8f70966986086bd2e630bc803b115f94046241488c5f390806a7862d72d5189"} Jan 26 11:36:18 crc kubenswrapper[4867]: I0126 11:36:18.227316 4867 generic.go:334] "Generic (PLEG): container finished" podID="493bdea3-8931-4319-a465-16c9a4329881" containerID="b4d3a8a212993515d3f2e795ce5bc67aab092824e8fb8d7763afdd105bfa535d" exitCode=0 Jan 26 11:36:18 crc kubenswrapper[4867]: I0126 11:36:18.227376 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-qqwcf" 
event={"ID":"493bdea3-8931-4319-a465-16c9a4329881","Type":"ContainerDied","Data":"b4d3a8a212993515d3f2e795ce5bc67aab092824e8fb8d7763afdd105bfa535d"} Jan 26 11:36:18 crc kubenswrapper[4867]: I0126 11:36:18.664453 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ironic-db-sync-h7r88"] Jan 26 11:36:18 crc kubenswrapper[4867]: E0126 11:36:18.664808 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e68c9cf2-da73-41d8-bc9a-5fd8df2c1ceb" containerName="mariadb-database-create" Jan 26 11:36:18 crc kubenswrapper[4867]: I0126 11:36:18.664821 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="e68c9cf2-da73-41d8-bc9a-5fd8df2c1ceb" containerName="mariadb-database-create" Jan 26 11:36:18 crc kubenswrapper[4867]: E0126 11:36:18.664853 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="829cd764-d506-45fb-a1d6-d45504d0b20c" containerName="mariadb-account-create-update" Jan 26 11:36:18 crc kubenswrapper[4867]: I0126 11:36:18.664860 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="829cd764-d506-45fb-a1d6-d45504d0b20c" containerName="mariadb-account-create-update" Jan 26 11:36:18 crc kubenswrapper[4867]: E0126 11:36:18.664874 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="933e31ea-ff1a-4883-a82a-92893ca7d7b0" containerName="mariadb-account-create-update" Jan 26 11:36:18 crc kubenswrapper[4867]: I0126 11:36:18.664883 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="933e31ea-ff1a-4883-a82a-92893ca7d7b0" containerName="mariadb-account-create-update" Jan 26 11:36:18 crc kubenswrapper[4867]: I0126 11:36:18.665029 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="829cd764-d506-45fb-a1d6-d45504d0b20c" containerName="mariadb-account-create-update" Jan 26 11:36:18 crc kubenswrapper[4867]: I0126 11:36:18.665043 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="e68c9cf2-da73-41d8-bc9a-5fd8df2c1ceb" containerName="mariadb-database-create" Jan 26 
11:36:18 crc kubenswrapper[4867]: I0126 11:36:18.665054 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="933e31ea-ff1a-4883-a82a-92893ca7d7b0" containerName="mariadb-account-create-update" Jan 26 11:36:18 crc kubenswrapper[4867]: I0126 11:36:18.665861 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ironic-db-sync-h7r88" Jan 26 11:36:18 crc kubenswrapper[4867]: I0126 11:36:18.668466 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ironic-scripts" Jan 26 11:36:18 crc kubenswrapper[4867]: I0126 11:36:18.668591 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ironic-config-data" Jan 26 11:36:18 crc kubenswrapper[4867]: I0126 11:36:18.670174 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ironic-ironic-dockercfg-xdv7v" Jan 26 11:36:18 crc kubenswrapper[4867]: I0126 11:36:18.676289 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ironic-db-sync-h7r88"] Jan 26 11:36:18 crc kubenswrapper[4867]: I0126 11:36:18.771370 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-podinfo\" (UniqueName: \"kubernetes.io/downward-api/3de6837e-5965-48ce-9967-2d259829ad4a-etc-podinfo\") pod \"ironic-db-sync-h7r88\" (UID: \"3de6837e-5965-48ce-9967-2d259829ad4a\") " pod="openstack/ironic-db-sync-h7r88" Jan 26 11:36:18 crc kubenswrapper[4867]: I0126 11:36:18.771480 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qxjcr\" (UniqueName: \"kubernetes.io/projected/3de6837e-5965-48ce-9967-2d259829ad4a-kube-api-access-qxjcr\") pod \"ironic-db-sync-h7r88\" (UID: \"3de6837e-5965-48ce-9967-2d259829ad4a\") " pod="openstack/ironic-db-sync-h7r88" Jan 26 11:36:18 crc kubenswrapper[4867]: I0126 11:36:18.771531 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3de6837e-5965-48ce-9967-2d259829ad4a-combined-ca-bundle\") pod \"ironic-db-sync-h7r88\" (UID: \"3de6837e-5965-48ce-9967-2d259829ad4a\") " pod="openstack/ironic-db-sync-h7r88" Jan 26 11:36:18 crc kubenswrapper[4867]: I0126 11:36:18.771618 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/3de6837e-5965-48ce-9967-2d259829ad4a-config-data-merged\") pod \"ironic-db-sync-h7r88\" (UID: \"3de6837e-5965-48ce-9967-2d259829ad4a\") " pod="openstack/ironic-db-sync-h7r88" Jan 26 11:36:18 crc kubenswrapper[4867]: I0126 11:36:18.771670 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3de6837e-5965-48ce-9967-2d259829ad4a-scripts\") pod \"ironic-db-sync-h7r88\" (UID: \"3de6837e-5965-48ce-9967-2d259829ad4a\") " pod="openstack/ironic-db-sync-h7r88" Jan 26 11:36:18 crc kubenswrapper[4867]: I0126 11:36:18.771697 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3de6837e-5965-48ce-9967-2d259829ad4a-config-data\") pod \"ironic-db-sync-h7r88\" (UID: \"3de6837e-5965-48ce-9967-2d259829ad4a\") " pod="openstack/ironic-db-sync-h7r88" Jan 26 11:36:18 crc kubenswrapper[4867]: I0126 11:36:18.873995 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/3de6837e-5965-48ce-9967-2d259829ad4a-config-data-merged\") pod \"ironic-db-sync-h7r88\" (UID: \"3de6837e-5965-48ce-9967-2d259829ad4a\") " pod="openstack/ironic-db-sync-h7r88" Jan 26 11:36:18 crc kubenswrapper[4867]: I0126 11:36:18.874056 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/3de6837e-5965-48ce-9967-2d259829ad4a-scripts\") pod \"ironic-db-sync-h7r88\" (UID: \"3de6837e-5965-48ce-9967-2d259829ad4a\") " pod="openstack/ironic-db-sync-h7r88" Jan 26 11:36:18 crc kubenswrapper[4867]: I0126 11:36:18.874076 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3de6837e-5965-48ce-9967-2d259829ad4a-config-data\") pod \"ironic-db-sync-h7r88\" (UID: \"3de6837e-5965-48ce-9967-2d259829ad4a\") " pod="openstack/ironic-db-sync-h7r88" Jan 26 11:36:18 crc kubenswrapper[4867]: I0126 11:36:18.874133 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-podinfo\" (UniqueName: \"kubernetes.io/downward-api/3de6837e-5965-48ce-9967-2d259829ad4a-etc-podinfo\") pod \"ironic-db-sync-h7r88\" (UID: \"3de6837e-5965-48ce-9967-2d259829ad4a\") " pod="openstack/ironic-db-sync-h7r88" Jan 26 11:36:18 crc kubenswrapper[4867]: I0126 11:36:18.874193 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qxjcr\" (UniqueName: \"kubernetes.io/projected/3de6837e-5965-48ce-9967-2d259829ad4a-kube-api-access-qxjcr\") pod \"ironic-db-sync-h7r88\" (UID: \"3de6837e-5965-48ce-9967-2d259829ad4a\") " pod="openstack/ironic-db-sync-h7r88" Jan 26 11:36:18 crc kubenswrapper[4867]: I0126 11:36:18.874239 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3de6837e-5965-48ce-9967-2d259829ad4a-combined-ca-bundle\") pod \"ironic-db-sync-h7r88\" (UID: \"3de6837e-5965-48ce-9967-2d259829ad4a\") " pod="openstack/ironic-db-sync-h7r88" Jan 26 11:36:18 crc kubenswrapper[4867]: I0126 11:36:18.875694 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/3de6837e-5965-48ce-9967-2d259829ad4a-config-data-merged\") pod \"ironic-db-sync-h7r88\" (UID: 
\"3de6837e-5965-48ce-9967-2d259829ad4a\") " pod="openstack/ironic-db-sync-h7r88" Jan 26 11:36:18 crc kubenswrapper[4867]: I0126 11:36:18.878846 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3de6837e-5965-48ce-9967-2d259829ad4a-config-data\") pod \"ironic-db-sync-h7r88\" (UID: \"3de6837e-5965-48ce-9967-2d259829ad4a\") " pod="openstack/ironic-db-sync-h7r88" Jan 26 11:36:18 crc kubenswrapper[4867]: I0126 11:36:18.879265 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-podinfo\" (UniqueName: \"kubernetes.io/downward-api/3de6837e-5965-48ce-9967-2d259829ad4a-etc-podinfo\") pod \"ironic-db-sync-h7r88\" (UID: \"3de6837e-5965-48ce-9967-2d259829ad4a\") " pod="openstack/ironic-db-sync-h7r88" Jan 26 11:36:18 crc kubenswrapper[4867]: I0126 11:36:18.883073 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3de6837e-5965-48ce-9967-2d259829ad4a-scripts\") pod \"ironic-db-sync-h7r88\" (UID: \"3de6837e-5965-48ce-9967-2d259829ad4a\") " pod="openstack/ironic-db-sync-h7r88" Jan 26 11:36:18 crc kubenswrapper[4867]: I0126 11:36:18.891868 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3de6837e-5965-48ce-9967-2d259829ad4a-combined-ca-bundle\") pod \"ironic-db-sync-h7r88\" (UID: \"3de6837e-5965-48ce-9967-2d259829ad4a\") " pod="openstack/ironic-db-sync-h7r88" Jan 26 11:36:18 crc kubenswrapper[4867]: I0126 11:36:18.897688 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qxjcr\" (UniqueName: \"kubernetes.io/projected/3de6837e-5965-48ce-9967-2d259829ad4a-kube-api-access-qxjcr\") pod \"ironic-db-sync-h7r88\" (UID: \"3de6837e-5965-48ce-9967-2d259829ad4a\") " pod="openstack/ironic-db-sync-h7r88" Jan 26 11:36:19 crc kubenswrapper[4867]: I0126 11:36:19.003127 4867 util.go:30] "No sandbox for pod can 
be found. Need to start a new one" pod="openstack/ironic-db-sync-h7r88" Jan 26 11:36:21 crc kubenswrapper[4867]: I0126 11:36:21.979649 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6ffb94d8ff-9qkhh" Jan 26 11:36:22 crc kubenswrapper[4867]: I0126 11:36:22.132582 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5c4e6125-8cf8-405e-a6ac-954e3fdd4b33-config\") pod \"5c4e6125-8cf8-405e-a6ac-954e3fdd4b33\" (UID: \"5c4e6125-8cf8-405e-a6ac-954e3fdd4b33\") " Jan 26 11:36:22 crc kubenswrapper[4867]: I0126 11:36:22.132678 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gjpxr\" (UniqueName: \"kubernetes.io/projected/5c4e6125-8cf8-405e-a6ac-954e3fdd4b33-kube-api-access-gjpxr\") pod \"5c4e6125-8cf8-405e-a6ac-954e3fdd4b33\" (UID: \"5c4e6125-8cf8-405e-a6ac-954e3fdd4b33\") " Jan 26 11:36:22 crc kubenswrapper[4867]: I0126 11:36:22.132716 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5c4e6125-8cf8-405e-a6ac-954e3fdd4b33-ovsdbserver-sb\") pod \"5c4e6125-8cf8-405e-a6ac-954e3fdd4b33\" (UID: \"5c4e6125-8cf8-405e-a6ac-954e3fdd4b33\") " Jan 26 11:36:22 crc kubenswrapper[4867]: I0126 11:36:22.132981 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5c4e6125-8cf8-405e-a6ac-954e3fdd4b33-dns-svc\") pod \"5c4e6125-8cf8-405e-a6ac-954e3fdd4b33\" (UID: \"5c4e6125-8cf8-405e-a6ac-954e3fdd4b33\") " Jan 26 11:36:22 crc kubenswrapper[4867]: I0126 11:36:22.133070 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5c4e6125-8cf8-405e-a6ac-954e3fdd4b33-ovsdbserver-nb\") pod \"5c4e6125-8cf8-405e-a6ac-954e3fdd4b33\" (UID: 
\"5c4e6125-8cf8-405e-a6ac-954e3fdd4b33\") " Jan 26 11:36:22 crc kubenswrapper[4867]: I0126 11:36:22.147422 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5c4e6125-8cf8-405e-a6ac-954e3fdd4b33-kube-api-access-gjpxr" (OuterVolumeSpecName: "kube-api-access-gjpxr") pod "5c4e6125-8cf8-405e-a6ac-954e3fdd4b33" (UID: "5c4e6125-8cf8-405e-a6ac-954e3fdd4b33"). InnerVolumeSpecName "kube-api-access-gjpxr". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 11:36:22 crc kubenswrapper[4867]: I0126 11:36:22.172849 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5c4e6125-8cf8-405e-a6ac-954e3fdd4b33-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "5c4e6125-8cf8-405e-a6ac-954e3fdd4b33" (UID: "5c4e6125-8cf8-405e-a6ac-954e3fdd4b33"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 11:36:22 crc kubenswrapper[4867]: I0126 11:36:22.173541 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5c4e6125-8cf8-405e-a6ac-954e3fdd4b33-config" (OuterVolumeSpecName: "config") pod "5c4e6125-8cf8-405e-a6ac-954e3fdd4b33" (UID: "5c4e6125-8cf8-405e-a6ac-954e3fdd4b33"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 11:36:22 crc kubenswrapper[4867]: I0126 11:36:22.179320 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5c4e6125-8cf8-405e-a6ac-954e3fdd4b33-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "5c4e6125-8cf8-405e-a6ac-954e3fdd4b33" (UID: "5c4e6125-8cf8-405e-a6ac-954e3fdd4b33"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 11:36:22 crc kubenswrapper[4867]: I0126 11:36:22.183963 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5c4e6125-8cf8-405e-a6ac-954e3fdd4b33-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "5c4e6125-8cf8-405e-a6ac-954e3fdd4b33" (UID: "5c4e6125-8cf8-405e-a6ac-954e3fdd4b33"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 11:36:22 crc kubenswrapper[4867]: I0126 11:36:22.234948 4867 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5c4e6125-8cf8-405e-a6ac-954e3fdd4b33-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 26 11:36:22 crc kubenswrapper[4867]: I0126 11:36:22.234984 4867 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5c4e6125-8cf8-405e-a6ac-954e3fdd4b33-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 26 11:36:22 crc kubenswrapper[4867]: I0126 11:36:22.234995 4867 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5c4e6125-8cf8-405e-a6ac-954e3fdd4b33-config\") on node \"crc\" DevicePath \"\"" Jan 26 11:36:22 crc kubenswrapper[4867]: I0126 11:36:22.235004 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gjpxr\" (UniqueName: \"kubernetes.io/projected/5c4e6125-8cf8-405e-a6ac-954e3fdd4b33-kube-api-access-gjpxr\") on node \"crc\" DevicePath \"\"" Jan 26 11:36:22 crc kubenswrapper[4867]: I0126 11:36:22.235013 4867 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5c4e6125-8cf8-405e-a6ac-954e3fdd4b33-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 26 11:36:22 crc kubenswrapper[4867]: I0126 11:36:22.265451 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6ffb94d8ff-9qkhh" 
event={"ID":"5c4e6125-8cf8-405e-a6ac-954e3fdd4b33","Type":"ContainerDied","Data":"2d2e9f4125c8e963df057d192c5da6c77d3203825db2be23dadf66ca5afef593"} Jan 26 11:36:22 crc kubenswrapper[4867]: I0126 11:36:22.265510 4867 scope.go:117] "RemoveContainer" containerID="792662b1849d4a3a5100709caf1de5564e40e75da949cecec6e4c47f6fd20a9a" Jan 26 11:36:22 crc kubenswrapper[4867]: I0126 11:36:22.265535 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6ffb94d8ff-9qkhh" Jan 26 11:36:22 crc kubenswrapper[4867]: I0126 11:36:22.297191 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6ffb94d8ff-9qkhh"] Jan 26 11:36:22 crc kubenswrapper[4867]: I0126 11:36:22.308036 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6ffb94d8ff-9qkhh"] Jan 26 11:36:22 crc kubenswrapper[4867]: I0126 11:36:22.576146 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5c4e6125-8cf8-405e-a6ac-954e3fdd4b33" path="/var/lib/kubelet/pods/5c4e6125-8cf8-405e-a6ac-954e3fdd4b33/volumes" Jan 26 11:36:24 crc kubenswrapper[4867]: I0126 11:36:24.182586 4867 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-6ffb94d8ff-9qkhh" podUID="5c4e6125-8cf8-405e-a6ac-954e3fdd4b33" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.144:5353: i/o timeout" Jan 26 11:36:31 crc kubenswrapper[4867]: I0126 11:36:31.151735 4867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-qqwcf" Jan 26 11:36:31 crc kubenswrapper[4867]: I0126 11:36:31.326592 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7d82r\" (UniqueName: \"kubernetes.io/projected/493bdea3-8931-4319-a465-16c9a4329881-kube-api-access-7d82r\") pod \"493bdea3-8931-4319-a465-16c9a4329881\" (UID: \"493bdea3-8931-4319-a465-16c9a4329881\") " Jan 26 11:36:31 crc kubenswrapper[4867]: I0126 11:36:31.326944 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/493bdea3-8931-4319-a465-16c9a4329881-scripts\") pod \"493bdea3-8931-4319-a465-16c9a4329881\" (UID: \"493bdea3-8931-4319-a465-16c9a4329881\") " Jan 26 11:36:31 crc kubenswrapper[4867]: I0126 11:36:31.326962 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/493bdea3-8931-4319-a465-16c9a4329881-fernet-keys\") pod \"493bdea3-8931-4319-a465-16c9a4329881\" (UID: \"493bdea3-8931-4319-a465-16c9a4329881\") " Jan 26 11:36:31 crc kubenswrapper[4867]: I0126 11:36:31.326989 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/493bdea3-8931-4319-a465-16c9a4329881-combined-ca-bundle\") pod \"493bdea3-8931-4319-a465-16c9a4329881\" (UID: \"493bdea3-8931-4319-a465-16c9a4329881\") " Jan 26 11:36:31 crc kubenswrapper[4867]: I0126 11:36:31.327014 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/493bdea3-8931-4319-a465-16c9a4329881-credential-keys\") pod \"493bdea3-8931-4319-a465-16c9a4329881\" (UID: \"493bdea3-8931-4319-a465-16c9a4329881\") " Jan 26 11:36:31 crc kubenswrapper[4867]: I0126 11:36:31.327044 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" 
(UniqueName: \"kubernetes.io/secret/493bdea3-8931-4319-a465-16c9a4329881-config-data\") pod \"493bdea3-8931-4319-a465-16c9a4329881\" (UID: \"493bdea3-8931-4319-a465-16c9a4329881\") " Jan 26 11:36:31 crc kubenswrapper[4867]: I0126 11:36:31.337467 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/493bdea3-8931-4319-a465-16c9a4329881-kube-api-access-7d82r" (OuterVolumeSpecName: "kube-api-access-7d82r") pod "493bdea3-8931-4319-a465-16c9a4329881" (UID: "493bdea3-8931-4319-a465-16c9a4329881"). InnerVolumeSpecName "kube-api-access-7d82r". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 11:36:31 crc kubenswrapper[4867]: I0126 11:36:31.358455 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/493bdea3-8931-4319-a465-16c9a4329881-scripts" (OuterVolumeSpecName: "scripts") pod "493bdea3-8931-4319-a465-16c9a4329881" (UID: "493bdea3-8931-4319-a465-16c9a4329881"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 11:36:31 crc kubenswrapper[4867]: I0126 11:36:31.358552 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/493bdea3-8931-4319-a465-16c9a4329881-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "493bdea3-8931-4319-a465-16c9a4329881" (UID: "493bdea3-8931-4319-a465-16c9a4329881"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 11:36:31 crc kubenswrapper[4867]: I0126 11:36:31.367425 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/493bdea3-8931-4319-a465-16c9a4329881-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "493bdea3-8931-4319-a465-16c9a4329881" (UID: "493bdea3-8931-4319-a465-16c9a4329881"). InnerVolumeSpecName "fernet-keys". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 11:36:31 crc kubenswrapper[4867]: I0126 11:36:31.407485 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-qqwcf" event={"ID":"493bdea3-8931-4319-a465-16c9a4329881","Type":"ContainerDied","Data":"bf4d7f7730270b339e4d2de8bf88ad3f315cb80064e5bb4db2d470d5e3c5d187"} Jan 26 11:36:31 crc kubenswrapper[4867]: I0126 11:36:31.407532 4867 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bf4d7f7730270b339e4d2de8bf88ad3f315cb80064e5bb4db2d470d5e3c5d187" Jan 26 11:36:31 crc kubenswrapper[4867]: I0126 11:36:31.407595 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-qqwcf" Jan 26 11:36:31 crc kubenswrapper[4867]: I0126 11:36:31.436250 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7d82r\" (UniqueName: \"kubernetes.io/projected/493bdea3-8931-4319-a465-16c9a4329881-kube-api-access-7d82r\") on node \"crc\" DevicePath \"\"" Jan 26 11:36:31 crc kubenswrapper[4867]: I0126 11:36:31.436278 4867 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/493bdea3-8931-4319-a465-16c9a4329881-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 11:36:31 crc kubenswrapper[4867]: I0126 11:36:31.436288 4867 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/493bdea3-8931-4319-a465-16c9a4329881-fernet-keys\") on node \"crc\" DevicePath \"\"" Jan 26 11:36:31 crc kubenswrapper[4867]: I0126 11:36:31.436297 4867 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/493bdea3-8931-4319-a465-16c9a4329881-credential-keys\") on node \"crc\" DevicePath \"\"" Jan 26 11:36:31 crc kubenswrapper[4867]: I0126 11:36:31.439971 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/secret/493bdea3-8931-4319-a465-16c9a4329881-config-data" (OuterVolumeSpecName: "config-data") pod "493bdea3-8931-4319-a465-16c9a4329881" (UID: "493bdea3-8931-4319-a465-16c9a4329881"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 11:36:31 crc kubenswrapper[4867]: I0126 11:36:31.440058 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/493bdea3-8931-4319-a465-16c9a4329881-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "493bdea3-8931-4319-a465-16c9a4329881" (UID: "493bdea3-8931-4319-a465-16c9a4329881"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 11:36:31 crc kubenswrapper[4867]: I0126 11:36:31.537237 4867 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/493bdea3-8931-4319-a465-16c9a4329881-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 11:36:31 crc kubenswrapper[4867]: I0126 11:36:31.537262 4867 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/493bdea3-8931-4319-a465-16c9a4329881-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 11:36:31 crc kubenswrapper[4867]: E0126 11:36:31.870594 4867 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified" Jan 26 11:36:31 crc kubenswrapper[4867]: E0126 11:36:31.870762 4867 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:barbican-db-sync,Image:quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified,Command:[/bin/bash],Args:[-c barbican-manage db 
upgrade],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/barbican/barbican.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-l22rt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42403,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:*42403,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod barbican-db-sync-v7xht_openstack(0ee786d6-3c88-4374-a028-3a3c83b30fec): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 26 11:36:31 crc kubenswrapper[4867]: E0126 11:36:31.871922 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"barbican-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/barbican-db-sync-v7xht" 
podUID="0ee786d6-3c88-4374-a028-3a3c83b30fec" Jan 26 11:36:32 crc kubenswrapper[4867]: I0126 11:36:32.235606 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-qqwcf"] Jan 26 11:36:32 crc kubenswrapper[4867]: I0126 11:36:32.244102 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-qqwcf"] Jan 26 11:36:32 crc kubenswrapper[4867]: I0126 11:36:32.329974 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-zzdf7"] Jan 26 11:36:32 crc kubenswrapper[4867]: E0126 11:36:32.330632 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5c4e6125-8cf8-405e-a6ac-954e3fdd4b33" containerName="init" Jan 26 11:36:32 crc kubenswrapper[4867]: I0126 11:36:32.330716 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="5c4e6125-8cf8-405e-a6ac-954e3fdd4b33" containerName="init" Jan 26 11:36:32 crc kubenswrapper[4867]: E0126 11:36:32.330789 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="493bdea3-8931-4319-a465-16c9a4329881" containerName="keystone-bootstrap" Jan 26 11:36:32 crc kubenswrapper[4867]: I0126 11:36:32.330862 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="493bdea3-8931-4319-a465-16c9a4329881" containerName="keystone-bootstrap" Jan 26 11:36:32 crc kubenswrapper[4867]: E0126 11:36:32.330930 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5c4e6125-8cf8-405e-a6ac-954e3fdd4b33" containerName="dnsmasq-dns" Jan 26 11:36:32 crc kubenswrapper[4867]: I0126 11:36:32.330990 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="5c4e6125-8cf8-405e-a6ac-954e3fdd4b33" containerName="dnsmasq-dns" Jan 26 11:36:32 crc kubenswrapper[4867]: I0126 11:36:32.331206 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="493bdea3-8931-4319-a465-16c9a4329881" containerName="keystone-bootstrap" Jan 26 11:36:32 crc kubenswrapper[4867]: I0126 11:36:32.331309 4867 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="5c4e6125-8cf8-405e-a6ac-954e3fdd4b33" containerName="dnsmasq-dns" Jan 26 11:36:32 crc kubenswrapper[4867]: I0126 11:36:32.331933 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-zzdf7" Jan 26 11:36:32 crc kubenswrapper[4867]: I0126 11:36:32.334292 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-r6w6v" Jan 26 11:36:32 crc kubenswrapper[4867]: I0126 11:36:32.334810 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Jan 26 11:36:32 crc kubenswrapper[4867]: I0126 11:36:32.334922 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Jan 26 11:36:32 crc kubenswrapper[4867]: I0126 11:36:32.335181 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Jan 26 11:36:32 crc kubenswrapper[4867]: I0126 11:36:32.352359 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-zzdf7"] Jan 26 11:36:32 crc kubenswrapper[4867]: I0126 11:36:32.361965 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/95310b01-10f6-410f-9153-d2cd939420ec-scripts\") pod \"keystone-bootstrap-zzdf7\" (UID: \"95310b01-10f6-410f-9153-d2cd939420ec\") " pod="openstack/keystone-bootstrap-zzdf7" Jan 26 11:36:32 crc kubenswrapper[4867]: I0126 11:36:32.362285 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/95310b01-10f6-410f-9153-d2cd939420ec-fernet-keys\") pod \"keystone-bootstrap-zzdf7\" (UID: \"95310b01-10f6-410f-9153-d2cd939420ec\") " pod="openstack/keystone-bootstrap-zzdf7" Jan 26 11:36:32 crc kubenswrapper[4867]: I0126 11:36:32.362432 4867 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/95310b01-10f6-410f-9153-d2cd939420ec-config-data\") pod \"keystone-bootstrap-zzdf7\" (UID: \"95310b01-10f6-410f-9153-d2cd939420ec\") " pod="openstack/keystone-bootstrap-zzdf7" Jan 26 11:36:32 crc kubenswrapper[4867]: I0126 11:36:32.362625 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5r6wl\" (UniqueName: \"kubernetes.io/projected/95310b01-10f6-410f-9153-d2cd939420ec-kube-api-access-5r6wl\") pod \"keystone-bootstrap-zzdf7\" (UID: \"95310b01-10f6-410f-9153-d2cd939420ec\") " pod="openstack/keystone-bootstrap-zzdf7" Jan 26 11:36:32 crc kubenswrapper[4867]: I0126 11:36:32.362748 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/95310b01-10f6-410f-9153-d2cd939420ec-combined-ca-bundle\") pod \"keystone-bootstrap-zzdf7\" (UID: \"95310b01-10f6-410f-9153-d2cd939420ec\") " pod="openstack/keystone-bootstrap-zzdf7" Jan 26 11:36:32 crc kubenswrapper[4867]: I0126 11:36:32.362880 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/95310b01-10f6-410f-9153-d2cd939420ec-credential-keys\") pod \"keystone-bootstrap-zzdf7\" (UID: \"95310b01-10f6-410f-9153-d2cd939420ec\") " pod="openstack/keystone-bootstrap-zzdf7" Jan 26 11:36:32 crc kubenswrapper[4867]: E0126 11:36:32.417169 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"barbican-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified\\\"\"" pod="openstack/barbican-db-sync-v7xht" podUID="0ee786d6-3c88-4374-a028-3a3c83b30fec" Jan 26 11:36:32 crc kubenswrapper[4867]: I0126 11:36:32.464969 4867 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5r6wl\" (UniqueName: \"kubernetes.io/projected/95310b01-10f6-410f-9153-d2cd939420ec-kube-api-access-5r6wl\") pod \"keystone-bootstrap-zzdf7\" (UID: \"95310b01-10f6-410f-9153-d2cd939420ec\") " pod="openstack/keystone-bootstrap-zzdf7" Jan 26 11:36:32 crc kubenswrapper[4867]: I0126 11:36:32.465052 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/95310b01-10f6-410f-9153-d2cd939420ec-combined-ca-bundle\") pod \"keystone-bootstrap-zzdf7\" (UID: \"95310b01-10f6-410f-9153-d2cd939420ec\") " pod="openstack/keystone-bootstrap-zzdf7" Jan 26 11:36:32 crc kubenswrapper[4867]: I0126 11:36:32.465085 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/95310b01-10f6-410f-9153-d2cd939420ec-credential-keys\") pod \"keystone-bootstrap-zzdf7\" (UID: \"95310b01-10f6-410f-9153-d2cd939420ec\") " pod="openstack/keystone-bootstrap-zzdf7" Jan 26 11:36:32 crc kubenswrapper[4867]: I0126 11:36:32.465167 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/95310b01-10f6-410f-9153-d2cd939420ec-scripts\") pod \"keystone-bootstrap-zzdf7\" (UID: \"95310b01-10f6-410f-9153-d2cd939420ec\") " pod="openstack/keystone-bootstrap-zzdf7" Jan 26 11:36:32 crc kubenswrapper[4867]: I0126 11:36:32.465230 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/95310b01-10f6-410f-9153-d2cd939420ec-fernet-keys\") pod \"keystone-bootstrap-zzdf7\" (UID: \"95310b01-10f6-410f-9153-d2cd939420ec\") " pod="openstack/keystone-bootstrap-zzdf7" Jan 26 11:36:32 crc kubenswrapper[4867]: I0126 11:36:32.465266 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" 
(UniqueName: \"kubernetes.io/secret/95310b01-10f6-410f-9153-d2cd939420ec-config-data\") pod \"keystone-bootstrap-zzdf7\" (UID: \"95310b01-10f6-410f-9153-d2cd939420ec\") " pod="openstack/keystone-bootstrap-zzdf7" Jan 26 11:36:32 crc kubenswrapper[4867]: I0126 11:36:32.468952 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/95310b01-10f6-410f-9153-d2cd939420ec-scripts\") pod \"keystone-bootstrap-zzdf7\" (UID: \"95310b01-10f6-410f-9153-d2cd939420ec\") " pod="openstack/keystone-bootstrap-zzdf7" Jan 26 11:36:32 crc kubenswrapper[4867]: I0126 11:36:32.469497 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/95310b01-10f6-410f-9153-d2cd939420ec-fernet-keys\") pod \"keystone-bootstrap-zzdf7\" (UID: \"95310b01-10f6-410f-9153-d2cd939420ec\") " pod="openstack/keystone-bootstrap-zzdf7" Jan 26 11:36:32 crc kubenswrapper[4867]: I0126 11:36:32.470202 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/95310b01-10f6-410f-9153-d2cd939420ec-credential-keys\") pod \"keystone-bootstrap-zzdf7\" (UID: \"95310b01-10f6-410f-9153-d2cd939420ec\") " pod="openstack/keystone-bootstrap-zzdf7" Jan 26 11:36:32 crc kubenswrapper[4867]: I0126 11:36:32.479610 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/95310b01-10f6-410f-9153-d2cd939420ec-combined-ca-bundle\") pod \"keystone-bootstrap-zzdf7\" (UID: \"95310b01-10f6-410f-9153-d2cd939420ec\") " pod="openstack/keystone-bootstrap-zzdf7" Jan 26 11:36:32 crc kubenswrapper[4867]: I0126 11:36:32.481748 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5r6wl\" (UniqueName: \"kubernetes.io/projected/95310b01-10f6-410f-9153-d2cd939420ec-kube-api-access-5r6wl\") pod \"keystone-bootstrap-zzdf7\" (UID: 
\"95310b01-10f6-410f-9153-d2cd939420ec\") " pod="openstack/keystone-bootstrap-zzdf7" Jan 26 11:36:32 crc kubenswrapper[4867]: I0126 11:36:32.486601 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/95310b01-10f6-410f-9153-d2cd939420ec-config-data\") pod \"keystone-bootstrap-zzdf7\" (UID: \"95310b01-10f6-410f-9153-d2cd939420ec\") " pod="openstack/keystone-bootstrap-zzdf7" Jan 26 11:36:32 crc kubenswrapper[4867]: I0126 11:36:32.575487 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="493bdea3-8931-4319-a465-16c9a4329881" path="/var/lib/kubelet/pods/493bdea3-8931-4319-a465-16c9a4329881/volumes" Jan 26 11:36:32 crc kubenswrapper[4867]: I0126 11:36:32.654863 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-zzdf7" Jan 26 11:36:33 crc kubenswrapper[4867]: I0126 11:36:33.093568 4867 scope.go:117] "RemoveContainer" containerID="cd01462e889fbb7a7f90b811f07f418d147121a745c568b3de4b8778ad6cafe6" Jan 26 11:36:33 crc kubenswrapper[4867]: E0126 11:36:33.105064 4867 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified" Jan 26 11:36:33 crc kubenswrapper[4867]: E0126 11:36:33.105262 4867 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:cinder-db-sync,Image:quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_set_configs && 
/usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:etc-machine-id,ReadOnly:true,MountPath:/etc/machine-id,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/config-data/merged,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/cinder/cinder.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:db-sync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-5jqsc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin
:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cinder-db-sync-2kgmw_openstack(d28fe2ce-f40e-4f37-9d27-57d14376fc5d): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 26 11:36:33 crc kubenswrapper[4867]: E0126 11:36:33.110175 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/cinder-db-sync-2kgmw" podUID="d28fe2ce-f40e-4f37-9d27-57d14376fc5d" Jan 26 11:36:33 crc kubenswrapper[4867]: I0126 11:36:33.425566 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b588da78-7e07-438f-9612-e600ca38ab04","Type":"ContainerStarted","Data":"371be553f53d8dfe3839994403af1d5cd6b544c4a15d5b7863c94b62b479c5bb"} Jan 26 11:36:33 crc kubenswrapper[4867]: I0126 11:36:33.427586 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-cf78879c9-mfszq" event={"ID":"d4cc6b95-bfa3-4814-ba91-92a2ffac1ce3","Type":"ContainerStarted","Data":"110de7fd5863d1df48a55b8976982f09d8edd06d6a88b42175cbd3c71c5d7878"} Jan 26 11:36:33 crc kubenswrapper[4867]: I0126 11:36:33.433006 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-4whss" event={"ID":"5c210d27-ca0b-4d51-b462-bc5adf4dbe43","Type":"ContainerStarted","Data":"d2878d17d4d05221e224e3e6a7d178637778cbcdc20b459309d7ee9af2ef93a2"} Jan 26 11:36:33 crc kubenswrapper[4867]: E0126 11:36:33.435981 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified\\\"\"" 
pod="openstack/cinder-db-sync-2kgmw" podUID="d28fe2ce-f40e-4f37-9d27-57d14376fc5d" Jan 26 11:36:33 crc kubenswrapper[4867]: I0126 11:36:33.475505 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-db-sync-4whss" podStartSLOduration=2.7582542930000002 podStartE2EDuration="25.475464509s" podCreationTimestamp="2026-01-26 11:36:08 +0000 UTC" firstStartedPulling="2026-01-26 11:36:10.358952379 +0000 UTC m=+1120.057527289" lastFinishedPulling="2026-01-26 11:36:33.076162595 +0000 UTC m=+1142.774737505" observedRunningTime="2026-01-26 11:36:33.471810592 +0000 UTC m=+1143.170385512" watchObservedRunningTime="2026-01-26 11:36:33.475464509 +0000 UTC m=+1143.174039419" Jan 26 11:36:33 crc kubenswrapper[4867]: I0126 11:36:33.564757 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ironic-db-sync-h7r88"] Jan 26 11:36:33 crc kubenswrapper[4867]: I0126 11:36:33.645672 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-zzdf7"] Jan 26 11:36:33 crc kubenswrapper[4867]: W0126 11:36:33.654234 4867 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod95310b01_10f6_410f_9153_d2cd939420ec.slice/crio-33f541a5ebc648d6056acc0632ec0ca611a40a7ad8405975959fbfd2e308722c WatchSource:0}: Error finding container 33f541a5ebc648d6056acc0632ec0ca611a40a7ad8405975959fbfd2e308722c: Status 404 returned error can't find the container with id 33f541a5ebc648d6056acc0632ec0ca611a40a7ad8405975959fbfd2e308722c Jan 26 11:36:34 crc kubenswrapper[4867]: I0126 11:36:34.450745 4867 generic.go:334] "Generic (PLEG): container finished" podID="d4cc6b95-bfa3-4814-ba91-92a2ffac1ce3" containerID="110de7fd5863d1df48a55b8976982f09d8edd06d6a88b42175cbd3c71c5d7878" exitCode=0 Jan 26 11:36:34 crc kubenswrapper[4867]: I0126 11:36:34.450814 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-cf78879c9-mfszq" 
event={"ID":"d4cc6b95-bfa3-4814-ba91-92a2ffac1ce3","Type":"ContainerDied","Data":"110de7fd5863d1df48a55b8976982f09d8edd06d6a88b42175cbd3c71c5d7878"} Jan 26 11:36:34 crc kubenswrapper[4867]: I0126 11:36:34.454391 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-db-sync-h7r88" event={"ID":"3de6837e-5965-48ce-9967-2d259829ad4a","Type":"ContainerStarted","Data":"d1f324fa938909ddba94d836aa478e85d2572d6ec06f08b022affb02748d74a7"} Jan 26 11:36:34 crc kubenswrapper[4867]: I0126 11:36:34.457606 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-zzdf7" event={"ID":"95310b01-10f6-410f-9153-d2cd939420ec","Type":"ContainerStarted","Data":"f84b09d65e7130b02b289d6cdfe83c3c3ffb0ac581c432c663743106dbc1a290"} Jan 26 11:36:34 crc kubenswrapper[4867]: I0126 11:36:34.457897 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-zzdf7" event={"ID":"95310b01-10f6-410f-9153-d2cd939420ec","Type":"ContainerStarted","Data":"33f541a5ebc648d6056acc0632ec0ca611a40a7ad8405975959fbfd2e308722c"} Jan 26 11:36:34 crc kubenswrapper[4867]: I0126 11:36:34.541086 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-zzdf7" podStartSLOduration=2.541062187 podStartE2EDuration="2.541062187s" podCreationTimestamp="2026-01-26 11:36:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 11:36:34.520799606 +0000 UTC m=+1144.219374526" watchObservedRunningTime="2026-01-26 11:36:34.541062187 +0000 UTC m=+1144.239637097" Jan 26 11:36:35 crc kubenswrapper[4867]: I0126 11:36:35.470945 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-cf78879c9-mfszq" event={"ID":"d4cc6b95-bfa3-4814-ba91-92a2ffac1ce3","Type":"ContainerStarted","Data":"d66add771a9a997b8c8de68dad914e1d44fa9807c3981c9e7bd9c88c60be8215"} Jan 26 11:36:35 crc kubenswrapper[4867]: 
I0126 11:36:35.471235 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-cf78879c9-mfszq" Jan 26 11:36:36 crc kubenswrapper[4867]: I0126 11:36:36.296536 4867 patch_prober.go:28] interesting pod/machine-config-daemon-g6cth container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 11:36:36 crc kubenswrapper[4867]: I0126 11:36:36.296987 4867 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-g6cth" podUID="115cad9f-057f-4e63-b408-8fa7a358a191" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 11:36:37 crc kubenswrapper[4867]: I0126 11:36:37.489132 4867 generic.go:334] "Generic (PLEG): container finished" podID="5c210d27-ca0b-4d51-b462-bc5adf4dbe43" containerID="d2878d17d4d05221e224e3e6a7d178637778cbcdc20b459309d7ee9af2ef93a2" exitCode=0 Jan 26 11:36:37 crc kubenswrapper[4867]: I0126 11:36:37.489528 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-4whss" event={"ID":"5c210d27-ca0b-4d51-b462-bc5adf4dbe43","Type":"ContainerDied","Data":"d2878d17d4d05221e224e3e6a7d178637778cbcdc20b459309d7ee9af2ef93a2"} Jan 26 11:36:37 crc kubenswrapper[4867]: I0126 11:36:37.494665 4867 generic.go:334] "Generic (PLEG): container finished" podID="95310b01-10f6-410f-9153-d2cd939420ec" containerID="f84b09d65e7130b02b289d6cdfe83c3c3ffb0ac581c432c663743106dbc1a290" exitCode=0 Jan 26 11:36:37 crc kubenswrapper[4867]: I0126 11:36:37.494756 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-zzdf7" 
event={"ID":"95310b01-10f6-410f-9153-d2cd939420ec","Type":"ContainerDied","Data":"f84b09d65e7130b02b289d6cdfe83c3c3ffb0ac581c432c663743106dbc1a290"} Jan 26 11:36:37 crc kubenswrapper[4867]: I0126 11:36:37.496578 4867 generic.go:334] "Generic (PLEG): container finished" podID="fa78acbb-8b93-4977-8ccf-fc79314b6f2e" containerID="1651081e3e71989b143ba29eb5321a628e4ecf50a4caa7578e4fa1cc3dd87ad3" exitCode=0 Jan 26 11:36:37 crc kubenswrapper[4867]: I0126 11:36:37.497547 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-mjdws" event={"ID":"fa78acbb-8b93-4977-8ccf-fc79314b6f2e","Type":"ContainerDied","Data":"1651081e3e71989b143ba29eb5321a628e4ecf50a4caa7578e4fa1cc3dd87ad3"} Jan 26 11:36:37 crc kubenswrapper[4867]: I0126 11:36:37.512285 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-cf78879c9-mfszq" podStartSLOduration=24.512268156 podStartE2EDuration="24.512268156s" podCreationTimestamp="2026-01-26 11:36:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 11:36:35.495736513 +0000 UTC m=+1145.194311443" watchObservedRunningTime="2026-01-26 11:36:37.512268156 +0000 UTC m=+1147.210843066" Jan 26 11:36:40 crc kubenswrapper[4867]: I0126 11:36:40.161496 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-zzdf7" Jan 26 11:36:40 crc kubenswrapper[4867]: I0126 11:36:40.192186 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-4whss" Jan 26 11:36:40 crc kubenswrapper[4867]: I0126 11:36:40.196364 4867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-mjdws" Jan 26 11:36:40 crc kubenswrapper[4867]: I0126 11:36:40.321300 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5c210d27-ca0b-4d51-b462-bc5adf4dbe43-scripts\") pod \"5c210d27-ca0b-4d51-b462-bc5adf4dbe43\" (UID: \"5c210d27-ca0b-4d51-b462-bc5adf4dbe43\") " Jan 26 11:36:40 crc kubenswrapper[4867]: I0126 11:36:40.321398 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/95310b01-10f6-410f-9153-d2cd939420ec-combined-ca-bundle\") pod \"95310b01-10f6-410f-9153-d2cd939420ec\" (UID: \"95310b01-10f6-410f-9153-d2cd939420ec\") " Jan 26 11:36:40 crc kubenswrapper[4867]: I0126 11:36:40.321432 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/fa78acbb-8b93-4977-8ccf-fc79314b6f2e-db-sync-config-data\") pod \"fa78acbb-8b93-4977-8ccf-fc79314b6f2e\" (UID: \"fa78acbb-8b93-4977-8ccf-fc79314b6f2e\") " Jan 26 11:36:40 crc kubenswrapper[4867]: I0126 11:36:40.321492 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-957kf\" (UniqueName: \"kubernetes.io/projected/fa78acbb-8b93-4977-8ccf-fc79314b6f2e-kube-api-access-957kf\") pod \"fa78acbb-8b93-4977-8ccf-fc79314b6f2e\" (UID: \"fa78acbb-8b93-4977-8ccf-fc79314b6f2e\") " Jan 26 11:36:40 crc kubenswrapper[4867]: I0126 11:36:40.321546 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fsd9z\" (UniqueName: \"kubernetes.io/projected/5c210d27-ca0b-4d51-b462-bc5adf4dbe43-kube-api-access-fsd9z\") pod \"5c210d27-ca0b-4d51-b462-bc5adf4dbe43\" (UID: \"5c210d27-ca0b-4d51-b462-bc5adf4dbe43\") " Jan 26 11:36:40 crc kubenswrapper[4867]: I0126 11:36:40.321575 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fa78acbb-8b93-4977-8ccf-fc79314b6f2e-combined-ca-bundle\") pod \"fa78acbb-8b93-4977-8ccf-fc79314b6f2e\" (UID: \"fa78acbb-8b93-4977-8ccf-fc79314b6f2e\") " Jan 26 11:36:40 crc kubenswrapper[4867]: I0126 11:36:40.321593 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/95310b01-10f6-410f-9153-d2cd939420ec-credential-keys\") pod \"95310b01-10f6-410f-9153-d2cd939420ec\" (UID: \"95310b01-10f6-410f-9153-d2cd939420ec\") " Jan 26 11:36:40 crc kubenswrapper[4867]: I0126 11:36:40.321611 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fa78acbb-8b93-4977-8ccf-fc79314b6f2e-config-data\") pod \"fa78acbb-8b93-4977-8ccf-fc79314b6f2e\" (UID: \"fa78acbb-8b93-4977-8ccf-fc79314b6f2e\") " Jan 26 11:36:40 crc kubenswrapper[4867]: I0126 11:36:40.321657 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/95310b01-10f6-410f-9153-d2cd939420ec-scripts\") pod \"95310b01-10f6-410f-9153-d2cd939420ec\" (UID: \"95310b01-10f6-410f-9153-d2cd939420ec\") " Jan 26 11:36:40 crc kubenswrapper[4867]: I0126 11:36:40.321693 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/95310b01-10f6-410f-9153-d2cd939420ec-fernet-keys\") pod \"95310b01-10f6-410f-9153-d2cd939420ec\" (UID: \"95310b01-10f6-410f-9153-d2cd939420ec\") " Jan 26 11:36:40 crc kubenswrapper[4867]: I0126 11:36:40.321746 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5r6wl\" (UniqueName: \"kubernetes.io/projected/95310b01-10f6-410f-9153-d2cd939420ec-kube-api-access-5r6wl\") pod \"95310b01-10f6-410f-9153-d2cd939420ec\" (UID: \"95310b01-10f6-410f-9153-d2cd939420ec\") " Jan 26 11:36:40 crc 
kubenswrapper[4867]: I0126 11:36:40.321780 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5c210d27-ca0b-4d51-b462-bc5adf4dbe43-combined-ca-bundle\") pod \"5c210d27-ca0b-4d51-b462-bc5adf4dbe43\" (UID: \"5c210d27-ca0b-4d51-b462-bc5adf4dbe43\") " Jan 26 11:36:40 crc kubenswrapper[4867]: I0126 11:36:40.321827 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5c210d27-ca0b-4d51-b462-bc5adf4dbe43-config-data\") pod \"5c210d27-ca0b-4d51-b462-bc5adf4dbe43\" (UID: \"5c210d27-ca0b-4d51-b462-bc5adf4dbe43\") " Jan 26 11:36:40 crc kubenswrapper[4867]: I0126 11:36:40.321864 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/95310b01-10f6-410f-9153-d2cd939420ec-config-data\") pod \"95310b01-10f6-410f-9153-d2cd939420ec\" (UID: \"95310b01-10f6-410f-9153-d2cd939420ec\") " Jan 26 11:36:40 crc kubenswrapper[4867]: I0126 11:36:40.321928 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5c210d27-ca0b-4d51-b462-bc5adf4dbe43-logs\") pod \"5c210d27-ca0b-4d51-b462-bc5adf4dbe43\" (UID: \"5c210d27-ca0b-4d51-b462-bc5adf4dbe43\") " Jan 26 11:36:40 crc kubenswrapper[4867]: I0126 11:36:40.322622 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5c210d27-ca0b-4d51-b462-bc5adf4dbe43-logs" (OuterVolumeSpecName: "logs") pod "5c210d27-ca0b-4d51-b462-bc5adf4dbe43" (UID: "5c210d27-ca0b-4d51-b462-bc5adf4dbe43"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 11:36:40 crc kubenswrapper[4867]: I0126 11:36:40.325922 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5c210d27-ca0b-4d51-b462-bc5adf4dbe43-scripts" (OuterVolumeSpecName: "scripts") pod "5c210d27-ca0b-4d51-b462-bc5adf4dbe43" (UID: "5c210d27-ca0b-4d51-b462-bc5adf4dbe43"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 11:36:40 crc kubenswrapper[4867]: I0126 11:36:40.326578 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/95310b01-10f6-410f-9153-d2cd939420ec-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "95310b01-10f6-410f-9153-d2cd939420ec" (UID: "95310b01-10f6-410f-9153-d2cd939420ec"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 11:36:40 crc kubenswrapper[4867]: I0126 11:36:40.329069 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/95310b01-10f6-410f-9153-d2cd939420ec-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "95310b01-10f6-410f-9153-d2cd939420ec" (UID: "95310b01-10f6-410f-9153-d2cd939420ec"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 11:36:40 crc kubenswrapper[4867]: I0126 11:36:40.331840 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/95310b01-10f6-410f-9153-d2cd939420ec-kube-api-access-5r6wl" (OuterVolumeSpecName: "kube-api-access-5r6wl") pod "95310b01-10f6-410f-9153-d2cd939420ec" (UID: "95310b01-10f6-410f-9153-d2cd939420ec"). InnerVolumeSpecName "kube-api-access-5r6wl". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 11:36:40 crc kubenswrapper[4867]: I0126 11:36:40.332821 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/95310b01-10f6-410f-9153-d2cd939420ec-scripts" (OuterVolumeSpecName: "scripts") pod "95310b01-10f6-410f-9153-d2cd939420ec" (UID: "95310b01-10f6-410f-9153-d2cd939420ec"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 11:36:40 crc kubenswrapper[4867]: I0126 11:36:40.332954 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fa78acbb-8b93-4977-8ccf-fc79314b6f2e-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "fa78acbb-8b93-4977-8ccf-fc79314b6f2e" (UID: "fa78acbb-8b93-4977-8ccf-fc79314b6f2e"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 11:36:40 crc kubenswrapper[4867]: I0126 11:36:40.333536 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5c210d27-ca0b-4d51-b462-bc5adf4dbe43-kube-api-access-fsd9z" (OuterVolumeSpecName: "kube-api-access-fsd9z") pod "5c210d27-ca0b-4d51-b462-bc5adf4dbe43" (UID: "5c210d27-ca0b-4d51-b462-bc5adf4dbe43"). InnerVolumeSpecName "kube-api-access-fsd9z". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 11:36:40 crc kubenswrapper[4867]: I0126 11:36:40.337538 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fa78acbb-8b93-4977-8ccf-fc79314b6f2e-kube-api-access-957kf" (OuterVolumeSpecName: "kube-api-access-957kf") pod "fa78acbb-8b93-4977-8ccf-fc79314b6f2e" (UID: "fa78acbb-8b93-4977-8ccf-fc79314b6f2e"). InnerVolumeSpecName "kube-api-access-957kf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 11:36:40 crc kubenswrapper[4867]: I0126 11:36:40.351162 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/95310b01-10f6-410f-9153-d2cd939420ec-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "95310b01-10f6-410f-9153-d2cd939420ec" (UID: "95310b01-10f6-410f-9153-d2cd939420ec"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 11:36:40 crc kubenswrapper[4867]: I0126 11:36:40.351627 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5c210d27-ca0b-4d51-b462-bc5adf4dbe43-config-data" (OuterVolumeSpecName: "config-data") pod "5c210d27-ca0b-4d51-b462-bc5adf4dbe43" (UID: "5c210d27-ca0b-4d51-b462-bc5adf4dbe43"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 11:36:40 crc kubenswrapper[4867]: I0126 11:36:40.355407 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/95310b01-10f6-410f-9153-d2cd939420ec-config-data" (OuterVolumeSpecName: "config-data") pod "95310b01-10f6-410f-9153-d2cd939420ec" (UID: "95310b01-10f6-410f-9153-d2cd939420ec"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 11:36:40 crc kubenswrapper[4867]: I0126 11:36:40.363423 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5c210d27-ca0b-4d51-b462-bc5adf4dbe43-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "5c210d27-ca0b-4d51-b462-bc5adf4dbe43" (UID: "5c210d27-ca0b-4d51-b462-bc5adf4dbe43"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 11:36:40 crc kubenswrapper[4867]: I0126 11:36:40.366789 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fa78acbb-8b93-4977-8ccf-fc79314b6f2e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "fa78acbb-8b93-4977-8ccf-fc79314b6f2e" (UID: "fa78acbb-8b93-4977-8ccf-fc79314b6f2e"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 11:36:40 crc kubenswrapper[4867]: I0126 11:36:40.376034 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fa78acbb-8b93-4977-8ccf-fc79314b6f2e-config-data" (OuterVolumeSpecName: "config-data") pod "fa78acbb-8b93-4977-8ccf-fc79314b6f2e" (UID: "fa78acbb-8b93-4977-8ccf-fc79314b6f2e"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 11:36:40 crc kubenswrapper[4867]: I0126 11:36:40.423475 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5r6wl\" (UniqueName: \"kubernetes.io/projected/95310b01-10f6-410f-9153-d2cd939420ec-kube-api-access-5r6wl\") on node \"crc\" DevicePath \"\"" Jan 26 11:36:40 crc kubenswrapper[4867]: I0126 11:36:40.423505 4867 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5c210d27-ca0b-4d51-b462-bc5adf4dbe43-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 11:36:40 crc kubenswrapper[4867]: I0126 11:36:40.423515 4867 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5c210d27-ca0b-4d51-b462-bc5adf4dbe43-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 11:36:40 crc kubenswrapper[4867]: I0126 11:36:40.423524 4867 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/95310b01-10f6-410f-9153-d2cd939420ec-config-data\") on node \"crc\" DevicePath 
\"\"" Jan 26 11:36:40 crc kubenswrapper[4867]: I0126 11:36:40.423532 4867 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5c210d27-ca0b-4d51-b462-bc5adf4dbe43-logs\") on node \"crc\" DevicePath \"\"" Jan 26 11:36:40 crc kubenswrapper[4867]: I0126 11:36:40.423540 4867 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5c210d27-ca0b-4d51-b462-bc5adf4dbe43-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 11:36:40 crc kubenswrapper[4867]: I0126 11:36:40.423547 4867 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/95310b01-10f6-410f-9153-d2cd939420ec-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 11:36:40 crc kubenswrapper[4867]: I0126 11:36:40.423555 4867 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/fa78acbb-8b93-4977-8ccf-fc79314b6f2e-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 11:36:40 crc kubenswrapper[4867]: I0126 11:36:40.423563 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-957kf\" (UniqueName: \"kubernetes.io/projected/fa78acbb-8b93-4977-8ccf-fc79314b6f2e-kube-api-access-957kf\") on node \"crc\" DevicePath \"\"" Jan 26 11:36:40 crc kubenswrapper[4867]: I0126 11:36:40.423570 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fsd9z\" (UniqueName: \"kubernetes.io/projected/5c210d27-ca0b-4d51-b462-bc5adf4dbe43-kube-api-access-fsd9z\") on node \"crc\" DevicePath \"\"" Jan 26 11:36:40 crc kubenswrapper[4867]: I0126 11:36:40.423577 4867 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fa78acbb-8b93-4977-8ccf-fc79314b6f2e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 11:36:40 crc kubenswrapper[4867]: I0126 11:36:40.423585 4867 reconciler_common.go:293] "Volume 
detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/95310b01-10f6-410f-9153-d2cd939420ec-credential-keys\") on node \"crc\" DevicePath \"\"" Jan 26 11:36:40 crc kubenswrapper[4867]: I0126 11:36:40.423592 4867 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fa78acbb-8b93-4977-8ccf-fc79314b6f2e-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 11:36:40 crc kubenswrapper[4867]: I0126 11:36:40.423600 4867 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/95310b01-10f6-410f-9153-d2cd939420ec-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 11:36:40 crc kubenswrapper[4867]: I0126 11:36:40.423608 4867 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/95310b01-10f6-410f-9153-d2cd939420ec-fernet-keys\") on node \"crc\" DevicePath \"\"" Jan 26 11:36:40 crc kubenswrapper[4867]: I0126 11:36:40.524464 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-zzdf7" event={"ID":"95310b01-10f6-410f-9153-d2cd939420ec","Type":"ContainerDied","Data":"33f541a5ebc648d6056acc0632ec0ca611a40a7ad8405975959fbfd2e308722c"} Jan 26 11:36:40 crc kubenswrapper[4867]: I0126 11:36:40.524563 4867 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="33f541a5ebc648d6056acc0632ec0ca611a40a7ad8405975959fbfd2e308722c" Jan 26 11:36:40 crc kubenswrapper[4867]: I0126 11:36:40.524546 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-zzdf7" Jan 26 11:36:40 crc kubenswrapper[4867]: I0126 11:36:40.528793 4867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-mjdws" Jan 26 11:36:40 crc kubenswrapper[4867]: I0126 11:36:40.529355 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-mjdws" event={"ID":"fa78acbb-8b93-4977-8ccf-fc79314b6f2e","Type":"ContainerDied","Data":"231d68a193c100b8af44d3b707d49cc2567ebb0b3f5a4fca02453194e4f91810"} Jan 26 11:36:40 crc kubenswrapper[4867]: I0126 11:36:40.529398 4867 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="231d68a193c100b8af44d3b707d49cc2567ebb0b3f5a4fca02453194e4f91810" Jan 26 11:36:40 crc kubenswrapper[4867]: I0126 11:36:40.530736 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-4whss" event={"ID":"5c210d27-ca0b-4d51-b462-bc5adf4dbe43","Type":"ContainerDied","Data":"07f01e4a959767905976a1905050cb7e13d80b149ea5f85aa89ae19ed1d436a7"} Jan 26 11:36:40 crc kubenswrapper[4867]: I0126 11:36:40.530763 4867 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="07f01e4a959767905976a1905050cb7e13d80b149ea5f85aa89ae19ed1d436a7" Jan 26 11:36:40 crc kubenswrapper[4867]: I0126 11:36:40.530813 4867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-sync-4whss" Jan 26 11:36:41 crc kubenswrapper[4867]: I0126 11:36:41.384075 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-6f94776d6f-8b6q4"] Jan 26 11:36:41 crc kubenswrapper[4867]: E0126 11:36:41.387821 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5c210d27-ca0b-4d51-b462-bc5adf4dbe43" containerName="placement-db-sync" Jan 26 11:36:41 crc kubenswrapper[4867]: I0126 11:36:41.387850 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="5c210d27-ca0b-4d51-b462-bc5adf4dbe43" containerName="placement-db-sync" Jan 26 11:36:41 crc kubenswrapper[4867]: E0126 11:36:41.387874 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fa78acbb-8b93-4977-8ccf-fc79314b6f2e" containerName="glance-db-sync" Jan 26 11:36:41 crc kubenswrapper[4867]: I0126 11:36:41.387883 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="fa78acbb-8b93-4977-8ccf-fc79314b6f2e" containerName="glance-db-sync" Jan 26 11:36:41 crc kubenswrapper[4867]: E0126 11:36:41.387927 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="95310b01-10f6-410f-9153-d2cd939420ec" containerName="keystone-bootstrap" Jan 26 11:36:41 crc kubenswrapper[4867]: I0126 11:36:41.387940 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="95310b01-10f6-410f-9153-d2cd939420ec" containerName="keystone-bootstrap" Jan 26 11:36:41 crc kubenswrapper[4867]: I0126 11:36:41.388368 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="95310b01-10f6-410f-9153-d2cd939420ec" containerName="keystone-bootstrap" Jan 26 11:36:41 crc kubenswrapper[4867]: I0126 11:36:41.388396 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="fa78acbb-8b93-4977-8ccf-fc79314b6f2e" containerName="glance-db-sync" Jan 26 11:36:41 crc kubenswrapper[4867]: I0126 11:36:41.388416 4867 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="5c210d27-ca0b-4d51-b462-bc5adf4dbe43" containerName="placement-db-sync" Jan 26 11:36:41 crc kubenswrapper[4867]: I0126 11:36:41.392342 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-6f94776d6f-8b6q4" Jan 26 11:36:41 crc kubenswrapper[4867]: I0126 11:36:41.409203 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Jan 26 11:36:41 crc kubenswrapper[4867]: I0126 11:36:41.409362 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-public-svc" Jan 26 11:36:41 crc kubenswrapper[4867]: I0126 11:36:41.409586 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-internal-svc" Jan 26 11:36:41 crc kubenswrapper[4867]: I0126 11:36:41.409684 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-r6w6v" Jan 26 11:36:41 crc kubenswrapper[4867]: I0126 11:36:41.410391 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Jan 26 11:36:41 crc kubenswrapper[4867]: I0126 11:36:41.410618 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Jan 26 11:36:41 crc kubenswrapper[4867]: I0126 11:36:41.441986 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-6f94776d6f-8b6q4"] Jan 26 11:36:41 crc kubenswrapper[4867]: I0126 11:36:41.456874 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-547bc4f4d-xs5kd"] Jan 26 11:36:41 crc kubenswrapper[4867]: I0126 11:36:41.619443 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-547bc4f4d-xs5kd" Jan 26 11:36:41 crc kubenswrapper[4867]: I0126 11:36:41.623748 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/6fa27242-a46c-4987-9e2f-1f9d48b370e7-credential-keys\") pod \"keystone-6f94776d6f-8b6q4\" (UID: \"6fa27242-a46c-4987-9e2f-1f9d48b370e7\") " pod="openstack/keystone-6f94776d6f-8b6q4" Jan 26 11:36:41 crc kubenswrapper[4867]: I0126 11:36:41.623788 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6fa27242-a46c-4987-9e2f-1f9d48b370e7-config-data\") pod \"keystone-6f94776d6f-8b6q4\" (UID: \"6fa27242-a46c-4987-9e2f-1f9d48b370e7\") " pod="openstack/keystone-6f94776d6f-8b6q4" Jan 26 11:36:41 crc kubenswrapper[4867]: I0126 11:36:41.623805 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/6fa27242-a46c-4987-9e2f-1f9d48b370e7-internal-tls-certs\") pod \"keystone-6f94776d6f-8b6q4\" (UID: \"6fa27242-a46c-4987-9e2f-1f9d48b370e7\") " pod="openstack/keystone-6f94776d6f-8b6q4" Jan 26 11:36:41 crc kubenswrapper[4867]: I0126 11:36:41.623829 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/6fa27242-a46c-4987-9e2f-1f9d48b370e7-fernet-keys\") pod \"keystone-6f94776d6f-8b6q4\" (UID: \"6fa27242-a46c-4987-9e2f-1f9d48b370e7\") " pod="openstack/keystone-6f94776d6f-8b6q4" Jan 26 11:36:41 crc kubenswrapper[4867]: I0126 11:36:41.623872 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6fa27242-a46c-4987-9e2f-1f9d48b370e7-scripts\") pod \"keystone-6f94776d6f-8b6q4\" (UID: 
\"6fa27242-a46c-4987-9e2f-1f9d48b370e7\") " pod="openstack/keystone-6f94776d6f-8b6q4" Jan 26 11:36:41 crc kubenswrapper[4867]: I0126 11:36:41.623911 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/6fa27242-a46c-4987-9e2f-1f9d48b370e7-public-tls-certs\") pod \"keystone-6f94776d6f-8b6q4\" (UID: \"6fa27242-a46c-4987-9e2f-1f9d48b370e7\") " pod="openstack/keystone-6f94776d6f-8b6q4" Jan 26 11:36:41 crc kubenswrapper[4867]: I0126 11:36:41.623971 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6fa27242-a46c-4987-9e2f-1f9d48b370e7-combined-ca-bundle\") pod \"keystone-6f94776d6f-8b6q4\" (UID: \"6fa27242-a46c-4987-9e2f-1f9d48b370e7\") " pod="openstack/keystone-6f94776d6f-8b6q4" Jan 26 11:36:41 crc kubenswrapper[4867]: I0126 11:36:41.623997 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bqdhn\" (UniqueName: \"kubernetes.io/projected/6fa27242-a46c-4987-9e2f-1f9d48b370e7-kube-api-access-bqdhn\") pod \"keystone-6f94776d6f-8b6q4\" (UID: \"6fa27242-a46c-4987-9e2f-1f9d48b370e7\") " pod="openstack/keystone-6f94776d6f-8b6q4" Jan 26 11:36:41 crc kubenswrapper[4867]: I0126 11:36:41.625666 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-public-svc" Jan 26 11:36:41 crc kubenswrapper[4867]: I0126 11:36:41.626049 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-internal-svc" Jan 26 11:36:41 crc kubenswrapper[4867]: I0126 11:36:41.626883 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-9spjg" Jan 26 11:36:41 crc kubenswrapper[4867]: I0126 11:36:41.627025 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" 
Jan 26 11:36:41 crc kubenswrapper[4867]: I0126 11:36:41.627143 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Jan 26 11:36:41 crc kubenswrapper[4867]: I0126 11:36:41.653133 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-547bc4f4d-xs5kd"] Jan 26 11:36:41 crc kubenswrapper[4867]: I0126 11:36:41.676686 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b588da78-7e07-438f-9612-e600ca38ab04","Type":"ContainerStarted","Data":"de0b6fd3cca4db9816920ab91a17889dfa377b8d7f1cdea01c1ada0a80fea17b"} Jan 26 11:36:41 crc kubenswrapper[4867]: I0126 11:36:41.693736 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-db-sync-h7r88" event={"ID":"3de6837e-5965-48ce-9967-2d259829ad4a","Type":"ContainerStarted","Data":"655482515d2a0761eeebac3b50d918564351ad2f219cde106cc66a7bfd48ce2c"} Jan 26 11:36:41 crc kubenswrapper[4867]: I0126 11:36:41.726869 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3fe54576-9f68-4335-9449-16f7af831e94-scripts\") pod \"placement-547bc4f4d-xs5kd\" (UID: \"3fe54576-9f68-4335-9449-16f7af831e94\") " pod="openstack/placement-547bc4f4d-xs5kd" Jan 26 11:36:41 crc kubenswrapper[4867]: I0126 11:36:41.727092 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/6fa27242-a46c-4987-9e2f-1f9d48b370e7-credential-keys\") pod \"keystone-6f94776d6f-8b6q4\" (UID: \"6fa27242-a46c-4987-9e2f-1f9d48b370e7\") " pod="openstack/keystone-6f94776d6f-8b6q4" Jan 26 11:36:41 crc kubenswrapper[4867]: I0126 11:36:41.727175 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6fa27242-a46c-4987-9e2f-1f9d48b370e7-config-data\") pod \"keystone-6f94776d6f-8b6q4\" (UID: 
\"6fa27242-a46c-4987-9e2f-1f9d48b370e7\") " pod="openstack/keystone-6f94776d6f-8b6q4" Jan 26 11:36:41 crc kubenswrapper[4867]: I0126 11:36:41.727258 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/6fa27242-a46c-4987-9e2f-1f9d48b370e7-internal-tls-certs\") pod \"keystone-6f94776d6f-8b6q4\" (UID: \"6fa27242-a46c-4987-9e2f-1f9d48b370e7\") " pod="openstack/keystone-6f94776d6f-8b6q4" Jan 26 11:36:41 crc kubenswrapper[4867]: I0126 11:36:41.727340 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/6fa27242-a46c-4987-9e2f-1f9d48b370e7-fernet-keys\") pod \"keystone-6f94776d6f-8b6q4\" (UID: \"6fa27242-a46c-4987-9e2f-1f9d48b370e7\") " pod="openstack/keystone-6f94776d6f-8b6q4" Jan 26 11:36:41 crc kubenswrapper[4867]: I0126 11:36:41.727432 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6fa27242-a46c-4987-9e2f-1f9d48b370e7-scripts\") pod \"keystone-6f94776d6f-8b6q4\" (UID: \"6fa27242-a46c-4987-9e2f-1f9d48b370e7\") " pod="openstack/keystone-6f94776d6f-8b6q4" Jan 26 11:36:41 crc kubenswrapper[4867]: I0126 11:36:41.727495 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3fe54576-9f68-4335-9449-16f7af831e94-combined-ca-bundle\") pod \"placement-547bc4f4d-xs5kd\" (UID: \"3fe54576-9f68-4335-9449-16f7af831e94\") " pod="openstack/placement-547bc4f4d-xs5kd" Jan 26 11:36:41 crc kubenswrapper[4867]: I0126 11:36:41.727580 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/6fa27242-a46c-4987-9e2f-1f9d48b370e7-public-tls-certs\") pod \"keystone-6f94776d6f-8b6q4\" (UID: \"6fa27242-a46c-4987-9e2f-1f9d48b370e7\") " 
pod="openstack/keystone-6f94776d6f-8b6q4" Jan 26 11:36:41 crc kubenswrapper[4867]: I0126 11:36:41.727642 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/3fe54576-9f68-4335-9449-16f7af831e94-public-tls-certs\") pod \"placement-547bc4f4d-xs5kd\" (UID: \"3fe54576-9f68-4335-9449-16f7af831e94\") " pod="openstack/placement-547bc4f4d-xs5kd" Jan 26 11:36:41 crc kubenswrapper[4867]: I0126 11:36:41.727702 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/3fe54576-9f68-4335-9449-16f7af831e94-internal-tls-certs\") pod \"placement-547bc4f4d-xs5kd\" (UID: \"3fe54576-9f68-4335-9449-16f7af831e94\") " pod="openstack/placement-547bc4f4d-xs5kd" Jan 26 11:36:41 crc kubenswrapper[4867]: I0126 11:36:41.727788 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3fe54576-9f68-4335-9449-16f7af831e94-logs\") pod \"placement-547bc4f4d-xs5kd\" (UID: \"3fe54576-9f68-4335-9449-16f7af831e94\") " pod="openstack/placement-547bc4f4d-xs5kd" Jan 26 11:36:41 crc kubenswrapper[4867]: I0126 11:36:41.727861 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6fa27242-a46c-4987-9e2f-1f9d48b370e7-combined-ca-bundle\") pod \"keystone-6f94776d6f-8b6q4\" (UID: \"6fa27242-a46c-4987-9e2f-1f9d48b370e7\") " pod="openstack/keystone-6f94776d6f-8b6q4" Jan 26 11:36:41 crc kubenswrapper[4867]: I0126 11:36:41.727938 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bqdhn\" (UniqueName: \"kubernetes.io/projected/6fa27242-a46c-4987-9e2f-1f9d48b370e7-kube-api-access-bqdhn\") pod \"keystone-6f94776d6f-8b6q4\" (UID: \"6fa27242-a46c-4987-9e2f-1f9d48b370e7\") " 
pod="openstack/keystone-6f94776d6f-8b6q4" Jan 26 11:36:41 crc kubenswrapper[4867]: I0126 11:36:41.727996 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3fe54576-9f68-4335-9449-16f7af831e94-config-data\") pod \"placement-547bc4f4d-xs5kd\" (UID: \"3fe54576-9f68-4335-9449-16f7af831e94\") " pod="openstack/placement-547bc4f4d-xs5kd" Jan 26 11:36:41 crc kubenswrapper[4867]: I0126 11:36:41.728082 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xmt7s\" (UniqueName: \"kubernetes.io/projected/3fe54576-9f68-4335-9449-16f7af831e94-kube-api-access-xmt7s\") pod \"placement-547bc4f4d-xs5kd\" (UID: \"3fe54576-9f68-4335-9449-16f7af831e94\") " pod="openstack/placement-547bc4f4d-xs5kd" Jan 26 11:36:41 crc kubenswrapper[4867]: I0126 11:36:41.735996 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/6fa27242-a46c-4987-9e2f-1f9d48b370e7-internal-tls-certs\") pod \"keystone-6f94776d6f-8b6q4\" (UID: \"6fa27242-a46c-4987-9e2f-1f9d48b370e7\") " pod="openstack/keystone-6f94776d6f-8b6q4" Jan 26 11:36:41 crc kubenswrapper[4867]: I0126 11:36:41.736766 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/6fa27242-a46c-4987-9e2f-1f9d48b370e7-credential-keys\") pod \"keystone-6f94776d6f-8b6q4\" (UID: \"6fa27242-a46c-4987-9e2f-1f9d48b370e7\") " pod="openstack/keystone-6f94776d6f-8b6q4" Jan 26 11:36:41 crc kubenswrapper[4867]: I0126 11:36:41.746974 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6fa27242-a46c-4987-9e2f-1f9d48b370e7-config-data\") pod \"keystone-6f94776d6f-8b6q4\" (UID: \"6fa27242-a46c-4987-9e2f-1f9d48b370e7\") " pod="openstack/keystone-6f94776d6f-8b6q4" Jan 26 11:36:41 crc 
kubenswrapper[4867]: I0126 11:36:41.747479 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/6fa27242-a46c-4987-9e2f-1f9d48b370e7-fernet-keys\") pod \"keystone-6f94776d6f-8b6q4\" (UID: \"6fa27242-a46c-4987-9e2f-1f9d48b370e7\") " pod="openstack/keystone-6f94776d6f-8b6q4" Jan 26 11:36:41 crc kubenswrapper[4867]: I0126 11:36:41.755722 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/6fa27242-a46c-4987-9e2f-1f9d48b370e7-public-tls-certs\") pod \"keystone-6f94776d6f-8b6q4\" (UID: \"6fa27242-a46c-4987-9e2f-1f9d48b370e7\") " pod="openstack/keystone-6f94776d6f-8b6q4" Jan 26 11:36:41 crc kubenswrapper[4867]: I0126 11:36:41.755987 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6fa27242-a46c-4987-9e2f-1f9d48b370e7-scripts\") pod \"keystone-6f94776d6f-8b6q4\" (UID: \"6fa27242-a46c-4987-9e2f-1f9d48b370e7\") " pod="openstack/keystone-6f94776d6f-8b6q4" Jan 26 11:36:41 crc kubenswrapper[4867]: I0126 11:36:41.757336 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6fa27242-a46c-4987-9e2f-1f9d48b370e7-combined-ca-bundle\") pod \"keystone-6f94776d6f-8b6q4\" (UID: \"6fa27242-a46c-4987-9e2f-1f9d48b370e7\") " pod="openstack/keystone-6f94776d6f-8b6q4" Jan 26 11:36:41 crc kubenswrapper[4867]: I0126 11:36:41.760481 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bqdhn\" (UniqueName: \"kubernetes.io/projected/6fa27242-a46c-4987-9e2f-1f9d48b370e7-kube-api-access-bqdhn\") pod \"keystone-6f94776d6f-8b6q4\" (UID: \"6fa27242-a46c-4987-9e2f-1f9d48b370e7\") " pod="openstack/keystone-6f94776d6f-8b6q4" Jan 26 11:36:41 crc kubenswrapper[4867]: I0126 11:36:41.817725 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openstack/dnsmasq-dns-cf78879c9-mfszq"] Jan 26 11:36:41 crc kubenswrapper[4867]: I0126 11:36:41.818286 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-cf78879c9-mfszq" podUID="d4cc6b95-bfa3-4814-ba91-92a2ffac1ce3" containerName="dnsmasq-dns" containerID="cri-o://d66add771a9a997b8c8de68dad914e1d44fa9807c3981c9e7bd9c88c60be8215" gracePeriod=10 Jan 26 11:36:41 crc kubenswrapper[4867]: I0126 11:36:41.821039 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-cf78879c9-mfszq" Jan 26 11:36:41 crc kubenswrapper[4867]: I0126 11:36:41.831167 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/3fe54576-9f68-4335-9449-16f7af831e94-internal-tls-certs\") pod \"placement-547bc4f4d-xs5kd\" (UID: \"3fe54576-9f68-4335-9449-16f7af831e94\") " pod="openstack/placement-547bc4f4d-xs5kd" Jan 26 11:36:41 crc kubenswrapper[4867]: I0126 11:36:41.831397 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3fe54576-9f68-4335-9449-16f7af831e94-logs\") pod \"placement-547bc4f4d-xs5kd\" (UID: \"3fe54576-9f68-4335-9449-16f7af831e94\") " pod="openstack/placement-547bc4f4d-xs5kd" Jan 26 11:36:41 crc kubenswrapper[4867]: I0126 11:36:41.831494 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3fe54576-9f68-4335-9449-16f7af831e94-config-data\") pod \"placement-547bc4f4d-xs5kd\" (UID: \"3fe54576-9f68-4335-9449-16f7af831e94\") " pod="openstack/placement-547bc4f4d-xs5kd" Jan 26 11:36:41 crc kubenswrapper[4867]: I0126 11:36:41.831563 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xmt7s\" (UniqueName: \"kubernetes.io/projected/3fe54576-9f68-4335-9449-16f7af831e94-kube-api-access-xmt7s\") pod 
\"placement-547bc4f4d-xs5kd\" (UID: \"3fe54576-9f68-4335-9449-16f7af831e94\") " pod="openstack/placement-547bc4f4d-xs5kd" Jan 26 11:36:41 crc kubenswrapper[4867]: I0126 11:36:41.831661 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3fe54576-9f68-4335-9449-16f7af831e94-scripts\") pod \"placement-547bc4f4d-xs5kd\" (UID: \"3fe54576-9f68-4335-9449-16f7af831e94\") " pod="openstack/placement-547bc4f4d-xs5kd" Jan 26 11:36:41 crc kubenswrapper[4867]: I0126 11:36:41.831785 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3fe54576-9f68-4335-9449-16f7af831e94-combined-ca-bundle\") pod \"placement-547bc4f4d-xs5kd\" (UID: \"3fe54576-9f68-4335-9449-16f7af831e94\") " pod="openstack/placement-547bc4f4d-xs5kd" Jan 26 11:36:41 crc kubenswrapper[4867]: I0126 11:36:41.831880 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/3fe54576-9f68-4335-9449-16f7af831e94-public-tls-certs\") pod \"placement-547bc4f4d-xs5kd\" (UID: \"3fe54576-9f68-4335-9449-16f7af831e94\") " pod="openstack/placement-547bc4f4d-xs5kd" Jan 26 11:36:41 crc kubenswrapper[4867]: I0126 11:36:41.833146 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3fe54576-9f68-4335-9449-16f7af831e94-logs\") pod \"placement-547bc4f4d-xs5kd\" (UID: \"3fe54576-9f68-4335-9449-16f7af831e94\") " pod="openstack/placement-547bc4f4d-xs5kd" Jan 26 11:36:41 crc kubenswrapper[4867]: I0126 11:36:41.838529 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3fe54576-9f68-4335-9449-16f7af831e94-combined-ca-bundle\") pod \"placement-547bc4f4d-xs5kd\" (UID: \"3fe54576-9f68-4335-9449-16f7af831e94\") " pod="openstack/placement-547bc4f4d-xs5kd" Jan 
26 11:36:41 crc kubenswrapper[4867]: I0126 11:36:41.838882 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3fe54576-9f68-4335-9449-16f7af831e94-scripts\") pod \"placement-547bc4f4d-xs5kd\" (UID: \"3fe54576-9f68-4335-9449-16f7af831e94\") " pod="openstack/placement-547bc4f4d-xs5kd" Jan 26 11:36:41 crc kubenswrapper[4867]: I0126 11:36:41.841704 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/3fe54576-9f68-4335-9449-16f7af831e94-public-tls-certs\") pod \"placement-547bc4f4d-xs5kd\" (UID: \"3fe54576-9f68-4335-9449-16f7af831e94\") " pod="openstack/placement-547bc4f4d-xs5kd" Jan 26 11:36:41 crc kubenswrapper[4867]: I0126 11:36:41.847102 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/3fe54576-9f68-4335-9449-16f7af831e94-internal-tls-certs\") pod \"placement-547bc4f4d-xs5kd\" (UID: \"3fe54576-9f68-4335-9449-16f7af831e94\") " pod="openstack/placement-547bc4f4d-xs5kd" Jan 26 11:36:41 crc kubenswrapper[4867]: I0126 11:36:41.852460 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-56df8fb6b7-m89kb"] Jan 26 11:36:41 crc kubenswrapper[4867]: I0126 11:36:41.854444 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-56df8fb6b7-m89kb" Jan 26 11:36:41 crc kubenswrapper[4867]: I0126 11:36:41.871351 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3fe54576-9f68-4335-9449-16f7af831e94-config-data\") pod \"placement-547bc4f4d-xs5kd\" (UID: \"3fe54576-9f68-4335-9449-16f7af831e94\") " pod="openstack/placement-547bc4f4d-xs5kd" Jan 26 11:36:41 crc kubenswrapper[4867]: I0126 11:36:41.882802 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-56df8fb6b7-m89kb"] Jan 26 11:36:41 crc kubenswrapper[4867]: I0126 11:36:41.883604 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xmt7s\" (UniqueName: \"kubernetes.io/projected/3fe54576-9f68-4335-9449-16f7af831e94-kube-api-access-xmt7s\") pod \"placement-547bc4f4d-xs5kd\" (UID: \"3fe54576-9f68-4335-9449-16f7af831e94\") " pod="openstack/placement-547bc4f4d-xs5kd" Jan 26 11:36:41 crc kubenswrapper[4867]: I0126 11:36:41.958724 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-547bc4f4d-xs5kd" Jan 26 11:36:42 crc kubenswrapper[4867]: I0126 11:36:42.035963 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/227ae5b6-e7d6-45ce-b333-3dd508d56b35-ovsdbserver-sb\") pod \"dnsmasq-dns-56df8fb6b7-m89kb\" (UID: \"227ae5b6-e7d6-45ce-b333-3dd508d56b35\") " pod="openstack/dnsmasq-dns-56df8fb6b7-m89kb" Jan 26 11:36:42 crc kubenswrapper[4867]: I0126 11:36:42.036207 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/227ae5b6-e7d6-45ce-b333-3dd508d56b35-config\") pod \"dnsmasq-dns-56df8fb6b7-m89kb\" (UID: \"227ae5b6-e7d6-45ce-b333-3dd508d56b35\") " pod="openstack/dnsmasq-dns-56df8fb6b7-m89kb" Jan 26 11:36:42 crc kubenswrapper[4867]: I0126 11:36:42.036360 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/227ae5b6-e7d6-45ce-b333-3dd508d56b35-ovsdbserver-nb\") pod \"dnsmasq-dns-56df8fb6b7-m89kb\" (UID: \"227ae5b6-e7d6-45ce-b333-3dd508d56b35\") " pod="openstack/dnsmasq-dns-56df8fb6b7-m89kb" Jan 26 11:36:42 crc kubenswrapper[4867]: I0126 11:36:42.036452 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/227ae5b6-e7d6-45ce-b333-3dd508d56b35-dns-svc\") pod \"dnsmasq-dns-56df8fb6b7-m89kb\" (UID: \"227ae5b6-e7d6-45ce-b333-3dd508d56b35\") " pod="openstack/dnsmasq-dns-56df8fb6b7-m89kb" Jan 26 11:36:42 crc kubenswrapper[4867]: I0126 11:36:42.036603 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/227ae5b6-e7d6-45ce-b333-3dd508d56b35-dns-swift-storage-0\") pod \"dnsmasq-dns-56df8fb6b7-m89kb\" (UID: 
\"227ae5b6-e7d6-45ce-b333-3dd508d56b35\") " pod="openstack/dnsmasq-dns-56df8fb6b7-m89kb" Jan 26 11:36:42 crc kubenswrapper[4867]: I0126 11:36:42.036757 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-69mrq\" (UniqueName: \"kubernetes.io/projected/227ae5b6-e7d6-45ce-b333-3dd508d56b35-kube-api-access-69mrq\") pod \"dnsmasq-dns-56df8fb6b7-m89kb\" (UID: \"227ae5b6-e7d6-45ce-b333-3dd508d56b35\") " pod="openstack/dnsmasq-dns-56df8fb6b7-m89kb" Jan 26 11:36:42 crc kubenswrapper[4867]: I0126 11:36:42.036840 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-6f94776d6f-8b6q4" Jan 26 11:36:42 crc kubenswrapper[4867]: I0126 11:36:42.139446 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/227ae5b6-e7d6-45ce-b333-3dd508d56b35-ovsdbserver-nb\") pod \"dnsmasq-dns-56df8fb6b7-m89kb\" (UID: \"227ae5b6-e7d6-45ce-b333-3dd508d56b35\") " pod="openstack/dnsmasq-dns-56df8fb6b7-m89kb" Jan 26 11:36:42 crc kubenswrapper[4867]: I0126 11:36:42.139495 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/227ae5b6-e7d6-45ce-b333-3dd508d56b35-dns-svc\") pod \"dnsmasq-dns-56df8fb6b7-m89kb\" (UID: \"227ae5b6-e7d6-45ce-b333-3dd508d56b35\") " pod="openstack/dnsmasq-dns-56df8fb6b7-m89kb" Jan 26 11:36:42 crc kubenswrapper[4867]: I0126 11:36:42.139566 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/227ae5b6-e7d6-45ce-b333-3dd508d56b35-dns-swift-storage-0\") pod \"dnsmasq-dns-56df8fb6b7-m89kb\" (UID: \"227ae5b6-e7d6-45ce-b333-3dd508d56b35\") " pod="openstack/dnsmasq-dns-56df8fb6b7-m89kb" Jan 26 11:36:42 crc kubenswrapper[4867]: I0126 11:36:42.139643 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-69mrq\" (UniqueName: \"kubernetes.io/projected/227ae5b6-e7d6-45ce-b333-3dd508d56b35-kube-api-access-69mrq\") pod \"dnsmasq-dns-56df8fb6b7-m89kb\" (UID: \"227ae5b6-e7d6-45ce-b333-3dd508d56b35\") " pod="openstack/dnsmasq-dns-56df8fb6b7-m89kb" Jan 26 11:36:42 crc kubenswrapper[4867]: I0126 11:36:42.139670 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/227ae5b6-e7d6-45ce-b333-3dd508d56b35-ovsdbserver-sb\") pod \"dnsmasq-dns-56df8fb6b7-m89kb\" (UID: \"227ae5b6-e7d6-45ce-b333-3dd508d56b35\") " pod="openstack/dnsmasq-dns-56df8fb6b7-m89kb" Jan 26 11:36:42 crc kubenswrapper[4867]: I0126 11:36:42.139701 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/227ae5b6-e7d6-45ce-b333-3dd508d56b35-config\") pod \"dnsmasq-dns-56df8fb6b7-m89kb\" (UID: \"227ae5b6-e7d6-45ce-b333-3dd508d56b35\") " pod="openstack/dnsmasq-dns-56df8fb6b7-m89kb" Jan 26 11:36:42 crc kubenswrapper[4867]: I0126 11:36:42.141084 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/227ae5b6-e7d6-45ce-b333-3dd508d56b35-config\") pod \"dnsmasq-dns-56df8fb6b7-m89kb\" (UID: \"227ae5b6-e7d6-45ce-b333-3dd508d56b35\") " pod="openstack/dnsmasq-dns-56df8fb6b7-m89kb" Jan 26 11:36:42 crc kubenswrapper[4867]: I0126 11:36:42.141917 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/227ae5b6-e7d6-45ce-b333-3dd508d56b35-ovsdbserver-nb\") pod \"dnsmasq-dns-56df8fb6b7-m89kb\" (UID: \"227ae5b6-e7d6-45ce-b333-3dd508d56b35\") " pod="openstack/dnsmasq-dns-56df8fb6b7-m89kb" Jan 26 11:36:42 crc kubenswrapper[4867]: I0126 11:36:42.146337 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: 
\"kubernetes.io/configmap/227ae5b6-e7d6-45ce-b333-3dd508d56b35-dns-svc\") pod \"dnsmasq-dns-56df8fb6b7-m89kb\" (UID: \"227ae5b6-e7d6-45ce-b333-3dd508d56b35\") " pod="openstack/dnsmasq-dns-56df8fb6b7-m89kb" Jan 26 11:36:42 crc kubenswrapper[4867]: I0126 11:36:42.147344 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/227ae5b6-e7d6-45ce-b333-3dd508d56b35-dns-swift-storage-0\") pod \"dnsmasq-dns-56df8fb6b7-m89kb\" (UID: \"227ae5b6-e7d6-45ce-b333-3dd508d56b35\") " pod="openstack/dnsmasq-dns-56df8fb6b7-m89kb" Jan 26 11:36:42 crc kubenswrapper[4867]: I0126 11:36:42.148069 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/227ae5b6-e7d6-45ce-b333-3dd508d56b35-ovsdbserver-sb\") pod \"dnsmasq-dns-56df8fb6b7-m89kb\" (UID: \"227ae5b6-e7d6-45ce-b333-3dd508d56b35\") " pod="openstack/dnsmasq-dns-56df8fb6b7-m89kb" Jan 26 11:36:42 crc kubenswrapper[4867]: I0126 11:36:42.193191 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-69mrq\" (UniqueName: \"kubernetes.io/projected/227ae5b6-e7d6-45ce-b333-3dd508d56b35-kube-api-access-69mrq\") pod \"dnsmasq-dns-56df8fb6b7-m89kb\" (UID: \"227ae5b6-e7d6-45ce-b333-3dd508d56b35\") " pod="openstack/dnsmasq-dns-56df8fb6b7-m89kb" Jan 26 11:36:42 crc kubenswrapper[4867]: I0126 11:36:42.448022 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-56df8fb6b7-m89kb" Jan 26 11:36:42 crc kubenswrapper[4867]: I0126 11:36:42.558325 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-547bc4f4d-xs5kd"] Jan 26 11:36:42 crc kubenswrapper[4867]: W0126 11:36:42.568999 4867 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3fe54576_9f68_4335_9449_16f7af831e94.slice/crio-3eceda2c0e71c208ce69f8848cca2af51039df29bb4d29718f6747b8567cd628 WatchSource:0}: Error finding container 3eceda2c0e71c208ce69f8848cca2af51039df29bb4d29718f6747b8567cd628: Status 404 returned error can't find the container with id 3eceda2c0e71c208ce69f8848cca2af51039df29bb4d29718f6747b8567cd628 Jan 26 11:36:42 crc kubenswrapper[4867]: I0126 11:36:42.711570 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-6f94776d6f-8b6q4"] Jan 26 11:36:42 crc kubenswrapper[4867]: I0126 11:36:42.730787 4867 generic.go:334] "Generic (PLEG): container finished" podID="3de6837e-5965-48ce-9967-2d259829ad4a" containerID="655482515d2a0761eeebac3b50d918564351ad2f219cde106cc66a7bfd48ce2c" exitCode=0 Jan 26 11:36:42 crc kubenswrapper[4867]: I0126 11:36:42.730848 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-db-sync-h7r88" event={"ID":"3de6837e-5965-48ce-9967-2d259829ad4a","Type":"ContainerDied","Data":"655482515d2a0761eeebac3b50d918564351ad2f219cde106cc66a7bfd48ce2c"} Jan 26 11:36:42 crc kubenswrapper[4867]: I0126 11:36:42.732896 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-547bc4f4d-xs5kd" event={"ID":"3fe54576-9f68-4335-9449-16f7af831e94","Type":"ContainerStarted","Data":"3eceda2c0e71c208ce69f8848cca2af51039df29bb4d29718f6747b8567cd628"} Jan 26 11:36:42 crc kubenswrapper[4867]: W0126 11:36:42.767429 4867 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6fa27242_a46c_4987_9e2f_1f9d48b370e7.slice/crio-be47ca0c068c0a794ea412cc86fd393fd1d457a8964719b37fcd6556b1d64518 WatchSource:0}: Error finding container be47ca0c068c0a794ea412cc86fd393fd1d457a8964719b37fcd6556b1d64518: Status 404 returned error can't find the container with id be47ca0c068c0a794ea412cc86fd393fd1d457a8964719b37fcd6556b1d64518 Jan 26 11:36:42 crc kubenswrapper[4867]: I0126 11:36:42.833885 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Jan 26 11:36:42 crc kubenswrapper[4867]: I0126 11:36:42.836380 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 26 11:36:42 crc kubenswrapper[4867]: I0126 11:36:42.845635 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-scripts" Jan 26 11:36:42 crc kubenswrapper[4867]: I0126 11:36:42.845814 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-wqrz8" Jan 26 11:36:42 crc kubenswrapper[4867]: I0126 11:36:42.845920 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Jan 26 11:36:42 crc kubenswrapper[4867]: I0126 11:36:42.849694 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 26 11:36:42 crc kubenswrapper[4867]: I0126 11:36:42.957445 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1822d382-9cff-4a22-82bb-e4954f192847-scripts\") pod \"glance-default-external-api-0\" (UID: \"1822d382-9cff-4a22-82bb-e4954f192847\") " pod="openstack/glance-default-external-api-0" Jan 26 11:36:42 crc kubenswrapper[4867]: I0126 11:36:42.957518 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"glance-default-external-api-0\" (UID: \"1822d382-9cff-4a22-82bb-e4954f192847\") " pod="openstack/glance-default-external-api-0" Jan 26 11:36:42 crc kubenswrapper[4867]: I0126 11:36:42.957539 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1822d382-9cff-4a22-82bb-e4954f192847-logs\") pod \"glance-default-external-api-0\" (UID: \"1822d382-9cff-4a22-82bb-e4954f192847\") " pod="openstack/glance-default-external-api-0" Jan 26 11:36:42 crc kubenswrapper[4867]: I0126 11:36:42.957615 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6ck78\" (UniqueName: \"kubernetes.io/projected/1822d382-9cff-4a22-82bb-e4954f192847-kube-api-access-6ck78\") pod \"glance-default-external-api-0\" (UID: \"1822d382-9cff-4a22-82bb-e4954f192847\") " pod="openstack/glance-default-external-api-0" Jan 26 11:36:42 crc kubenswrapper[4867]: I0126 11:36:42.957647 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/1822d382-9cff-4a22-82bb-e4954f192847-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"1822d382-9cff-4a22-82bb-e4954f192847\") " pod="openstack/glance-default-external-api-0" Jan 26 11:36:42 crc kubenswrapper[4867]: I0126 11:36:42.957687 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1822d382-9cff-4a22-82bb-e4954f192847-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"1822d382-9cff-4a22-82bb-e4954f192847\") " pod="openstack/glance-default-external-api-0" Jan 26 11:36:42 crc kubenswrapper[4867]: I0126 11:36:42.957730 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1822d382-9cff-4a22-82bb-e4954f192847-config-data\") pod \"glance-default-external-api-0\" (UID: \"1822d382-9cff-4a22-82bb-e4954f192847\") " pod="openstack/glance-default-external-api-0" Jan 26 11:36:42 crc kubenswrapper[4867]: W0126 11:36:42.974511 4867 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod227ae5b6_e7d6_45ce_b333_3dd508d56b35.slice/crio-db5a53e4d6d3a31585951729dbce0a90f5d5d16a246c215ec38f4b3b6442304c WatchSource:0}: Error finding container db5a53e4d6d3a31585951729dbce0a90f5d5d16a246c215ec38f4b3b6442304c: Status 404 returned error can't find the container with id db5a53e4d6d3a31585951729dbce0a90f5d5d16a246c215ec38f4b3b6442304c Jan 26 11:36:42 crc kubenswrapper[4867]: I0126 11:36:42.979611 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-56df8fb6b7-m89kb"] Jan 26 11:36:43 crc kubenswrapper[4867]: I0126 11:36:43.053160 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 26 11:36:43 crc kubenswrapper[4867]: I0126 11:36:43.055203 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 26 11:36:43 crc kubenswrapper[4867]: I0126 11:36:43.059572 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Jan 26 11:36:43 crc kubenswrapper[4867]: I0126 11:36:43.059613 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/1822d382-9cff-4a22-82bb-e4954f192847-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"1822d382-9cff-4a22-82bb-e4954f192847\") " pod="openstack/glance-default-external-api-0" Jan 26 11:36:43 crc kubenswrapper[4867]: I0126 11:36:43.059688 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1822d382-9cff-4a22-82bb-e4954f192847-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"1822d382-9cff-4a22-82bb-e4954f192847\") " pod="openstack/glance-default-external-api-0" Jan 26 11:36:43 crc kubenswrapper[4867]: I0126 11:36:43.059750 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1822d382-9cff-4a22-82bb-e4954f192847-config-data\") pod \"glance-default-external-api-0\" (UID: \"1822d382-9cff-4a22-82bb-e4954f192847\") " pod="openstack/glance-default-external-api-0" Jan 26 11:36:43 crc kubenswrapper[4867]: I0126 11:36:43.059797 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1822d382-9cff-4a22-82bb-e4954f192847-scripts\") pod \"glance-default-external-api-0\" (UID: \"1822d382-9cff-4a22-82bb-e4954f192847\") " pod="openstack/glance-default-external-api-0" Jan 26 11:36:43 crc kubenswrapper[4867]: I0126 11:36:43.059842 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage06-crc\" (UniqueName: 
\"kubernetes.io/local-volume/local-storage06-crc\") pod \"glance-default-external-api-0\" (UID: \"1822d382-9cff-4a22-82bb-e4954f192847\") " pod="openstack/glance-default-external-api-0" Jan 26 11:36:43 crc kubenswrapper[4867]: I0126 11:36:43.059862 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1822d382-9cff-4a22-82bb-e4954f192847-logs\") pod \"glance-default-external-api-0\" (UID: \"1822d382-9cff-4a22-82bb-e4954f192847\") " pod="openstack/glance-default-external-api-0" Jan 26 11:36:43 crc kubenswrapper[4867]: I0126 11:36:43.059958 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6ck78\" (UniqueName: \"kubernetes.io/projected/1822d382-9cff-4a22-82bb-e4954f192847-kube-api-access-6ck78\") pod \"glance-default-external-api-0\" (UID: \"1822d382-9cff-4a22-82bb-e4954f192847\") " pod="openstack/glance-default-external-api-0" Jan 26 11:36:43 crc kubenswrapper[4867]: I0126 11:36:43.060315 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/1822d382-9cff-4a22-82bb-e4954f192847-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"1822d382-9cff-4a22-82bb-e4954f192847\") " pod="openstack/glance-default-external-api-0" Jan 26 11:36:43 crc kubenswrapper[4867]: I0126 11:36:43.065330 4867 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"glance-default-external-api-0\" (UID: \"1822d382-9cff-4a22-82bb-e4954f192847\") device mount path \"/mnt/openstack/pv06\"" pod="openstack/glance-default-external-api-0" Jan 26 11:36:43 crc kubenswrapper[4867]: I0126 11:36:43.066288 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1822d382-9cff-4a22-82bb-e4954f192847-logs\") pod 
\"glance-default-external-api-0\" (UID: \"1822d382-9cff-4a22-82bb-e4954f192847\") " pod="openstack/glance-default-external-api-0" Jan 26 11:36:43 crc kubenswrapper[4867]: I0126 11:36:43.074427 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1822d382-9cff-4a22-82bb-e4954f192847-config-data\") pod \"glance-default-external-api-0\" (UID: \"1822d382-9cff-4a22-82bb-e4954f192847\") " pod="openstack/glance-default-external-api-0" Jan 26 11:36:43 crc kubenswrapper[4867]: I0126 11:36:43.074995 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1822d382-9cff-4a22-82bb-e4954f192847-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"1822d382-9cff-4a22-82bb-e4954f192847\") " pod="openstack/glance-default-external-api-0" Jan 26 11:36:43 crc kubenswrapper[4867]: I0126 11:36:43.081643 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1822d382-9cff-4a22-82bb-e4954f192847-scripts\") pod \"glance-default-external-api-0\" (UID: \"1822d382-9cff-4a22-82bb-e4954f192847\") " pod="openstack/glance-default-external-api-0" Jan 26 11:36:43 crc kubenswrapper[4867]: I0126 11:36:43.085424 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 26 11:36:43 crc kubenswrapper[4867]: I0126 11:36:43.093485 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6ck78\" (UniqueName: \"kubernetes.io/projected/1822d382-9cff-4a22-82bb-e4954f192847-kube-api-access-6ck78\") pod \"glance-default-external-api-0\" (UID: \"1822d382-9cff-4a22-82bb-e4954f192847\") " pod="openstack/glance-default-external-api-0" Jan 26 11:36:43 crc kubenswrapper[4867]: I0126 11:36:43.117354 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage06-crc\" (UniqueName: 
\"kubernetes.io/local-volume/local-storage06-crc\") pod \"glance-default-external-api-0\" (UID: \"1822d382-9cff-4a22-82bb-e4954f192847\") " pod="openstack/glance-default-external-api-0" Jan 26 11:36:43 crc kubenswrapper[4867]: I0126 11:36:43.161532 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"glance-default-internal-api-0\" (UID: \"b0a89f19-00f0-4d65-9286-67f669f50d8a\") " pod="openstack/glance-default-internal-api-0" Jan 26 11:36:43 crc kubenswrapper[4867]: I0126 11:36:43.161616 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b0a89f19-00f0-4d65-9286-67f669f50d8a-logs\") pod \"glance-default-internal-api-0\" (UID: \"b0a89f19-00f0-4d65-9286-67f669f50d8a\") " pod="openstack/glance-default-internal-api-0" Jan 26 11:36:43 crc kubenswrapper[4867]: I0126 11:36:43.161660 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l5spp\" (UniqueName: \"kubernetes.io/projected/b0a89f19-00f0-4d65-9286-67f669f50d8a-kube-api-access-l5spp\") pod \"glance-default-internal-api-0\" (UID: \"b0a89f19-00f0-4d65-9286-67f669f50d8a\") " pod="openstack/glance-default-internal-api-0" Jan 26 11:36:43 crc kubenswrapper[4867]: I0126 11:36:43.161731 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b0a89f19-00f0-4d65-9286-67f669f50d8a-config-data\") pod \"glance-default-internal-api-0\" (UID: \"b0a89f19-00f0-4d65-9286-67f669f50d8a\") " pod="openstack/glance-default-internal-api-0" Jan 26 11:36:43 crc kubenswrapper[4867]: I0126 11:36:43.161812 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/b0a89f19-00f0-4d65-9286-67f669f50d8a-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"b0a89f19-00f0-4d65-9286-67f669f50d8a\") " pod="openstack/glance-default-internal-api-0" Jan 26 11:36:43 crc kubenswrapper[4867]: I0126 11:36:43.161846 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b0a89f19-00f0-4d65-9286-67f669f50d8a-scripts\") pod \"glance-default-internal-api-0\" (UID: \"b0a89f19-00f0-4d65-9286-67f669f50d8a\") " pod="openstack/glance-default-internal-api-0" Jan 26 11:36:43 crc kubenswrapper[4867]: I0126 11:36:43.161871 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/b0a89f19-00f0-4d65-9286-67f669f50d8a-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"b0a89f19-00f0-4d65-9286-67f669f50d8a\") " pod="openstack/glance-default-internal-api-0" Jan 26 11:36:43 crc kubenswrapper[4867]: I0126 11:36:43.192463 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 26 11:36:43 crc kubenswrapper[4867]: I0126 11:36:43.263371 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b0a89f19-00f0-4d65-9286-67f669f50d8a-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"b0a89f19-00f0-4d65-9286-67f669f50d8a\") " pod="openstack/glance-default-internal-api-0" Jan 26 11:36:43 crc kubenswrapper[4867]: I0126 11:36:43.263433 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b0a89f19-00f0-4d65-9286-67f669f50d8a-scripts\") pod \"glance-default-internal-api-0\" (UID: \"b0a89f19-00f0-4d65-9286-67f669f50d8a\") " pod="openstack/glance-default-internal-api-0" Jan 26 11:36:43 crc kubenswrapper[4867]: I0126 11:36:43.263454 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/b0a89f19-00f0-4d65-9286-67f669f50d8a-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"b0a89f19-00f0-4d65-9286-67f669f50d8a\") " pod="openstack/glance-default-internal-api-0" Jan 26 11:36:43 crc kubenswrapper[4867]: I0126 11:36:43.263495 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"glance-default-internal-api-0\" (UID: \"b0a89f19-00f0-4d65-9286-67f669f50d8a\") " pod="openstack/glance-default-internal-api-0" Jan 26 11:36:43 crc kubenswrapper[4867]: I0126 11:36:43.263520 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b0a89f19-00f0-4d65-9286-67f669f50d8a-logs\") pod \"glance-default-internal-api-0\" (UID: \"b0a89f19-00f0-4d65-9286-67f669f50d8a\") " pod="openstack/glance-default-internal-api-0" Jan 26 11:36:43 crc 
kubenswrapper[4867]: I0126 11:36:43.263548 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l5spp\" (UniqueName: \"kubernetes.io/projected/b0a89f19-00f0-4d65-9286-67f669f50d8a-kube-api-access-l5spp\") pod \"glance-default-internal-api-0\" (UID: \"b0a89f19-00f0-4d65-9286-67f669f50d8a\") " pod="openstack/glance-default-internal-api-0" Jan 26 11:36:43 crc kubenswrapper[4867]: I0126 11:36:43.263595 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b0a89f19-00f0-4d65-9286-67f669f50d8a-config-data\") pod \"glance-default-internal-api-0\" (UID: \"b0a89f19-00f0-4d65-9286-67f669f50d8a\") " pod="openstack/glance-default-internal-api-0" Jan 26 11:36:43 crc kubenswrapper[4867]: I0126 11:36:43.264177 4867 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"glance-default-internal-api-0\" (UID: \"b0a89f19-00f0-4d65-9286-67f669f50d8a\") device mount path \"/mnt/openstack/pv05\"" pod="openstack/glance-default-internal-api-0" Jan 26 11:36:43 crc kubenswrapper[4867]: I0126 11:36:43.264323 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/b0a89f19-00f0-4d65-9286-67f669f50d8a-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"b0a89f19-00f0-4d65-9286-67f669f50d8a\") " pod="openstack/glance-default-internal-api-0" Jan 26 11:36:43 crc kubenswrapper[4867]: I0126 11:36:43.264506 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b0a89f19-00f0-4d65-9286-67f669f50d8a-logs\") pod \"glance-default-internal-api-0\" (UID: \"b0a89f19-00f0-4d65-9286-67f669f50d8a\") " pod="openstack/glance-default-internal-api-0" Jan 26 11:36:43 crc kubenswrapper[4867]: I0126 11:36:43.272550 4867 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b0a89f19-00f0-4d65-9286-67f669f50d8a-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"b0a89f19-00f0-4d65-9286-67f669f50d8a\") " pod="openstack/glance-default-internal-api-0" Jan 26 11:36:43 crc kubenswrapper[4867]: I0126 11:36:43.274490 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b0a89f19-00f0-4d65-9286-67f669f50d8a-scripts\") pod \"glance-default-internal-api-0\" (UID: \"b0a89f19-00f0-4d65-9286-67f669f50d8a\") " pod="openstack/glance-default-internal-api-0" Jan 26 11:36:43 crc kubenswrapper[4867]: I0126 11:36:43.275680 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b0a89f19-00f0-4d65-9286-67f669f50d8a-config-data\") pod \"glance-default-internal-api-0\" (UID: \"b0a89f19-00f0-4d65-9286-67f669f50d8a\") " pod="openstack/glance-default-internal-api-0" Jan 26 11:36:43 crc kubenswrapper[4867]: I0126 11:36:43.290857 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l5spp\" (UniqueName: \"kubernetes.io/projected/b0a89f19-00f0-4d65-9286-67f669f50d8a-kube-api-access-l5spp\") pod \"glance-default-internal-api-0\" (UID: \"b0a89f19-00f0-4d65-9286-67f669f50d8a\") " pod="openstack/glance-default-internal-api-0" Jan 26 11:36:43 crc kubenswrapper[4867]: I0126 11:36:43.383990 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"glance-default-internal-api-0\" (UID: \"b0a89f19-00f0-4d65-9286-67f669f50d8a\") " pod="openstack/glance-default-internal-api-0" Jan 26 11:36:43 crc kubenswrapper[4867]: I0126 11:36:43.615962 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 26 11:36:43 crc kubenswrapper[4867]: I0126 11:36:43.753576 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-6f94776d6f-8b6q4" event={"ID":"6fa27242-a46c-4987-9e2f-1f9d48b370e7","Type":"ContainerStarted","Data":"064d96fc5ec532b2eea1d4673f4b06ce0baa7567ee17a916578bb9a388fd5923"} Jan 26 11:36:43 crc kubenswrapper[4867]: I0126 11:36:43.753623 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-6f94776d6f-8b6q4" event={"ID":"6fa27242-a46c-4987-9e2f-1f9d48b370e7","Type":"ContainerStarted","Data":"be47ca0c068c0a794ea412cc86fd393fd1d457a8964719b37fcd6556b1d64518"} Jan 26 11:36:43 crc kubenswrapper[4867]: I0126 11:36:43.754473 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/keystone-6f94776d6f-8b6q4" Jan 26 11:36:43 crc kubenswrapper[4867]: I0126 11:36:43.759112 4867 generic.go:334] "Generic (PLEG): container finished" podID="d4cc6b95-bfa3-4814-ba91-92a2ffac1ce3" containerID="d66add771a9a997b8c8de68dad914e1d44fa9807c3981c9e7bd9c88c60be8215" exitCode=0 Jan 26 11:36:43 crc kubenswrapper[4867]: I0126 11:36:43.759200 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-cf78879c9-mfszq" event={"ID":"d4cc6b95-bfa3-4814-ba91-92a2ffac1ce3","Type":"ContainerDied","Data":"d66add771a9a997b8c8de68dad914e1d44fa9807c3981c9e7bd9c88c60be8215"} Jan 26 11:36:43 crc kubenswrapper[4867]: I0126 11:36:43.760097 4867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-cf78879c9-mfszq" Jan 26 11:36:43 crc kubenswrapper[4867]: I0126 11:36:43.762643 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-56df8fb6b7-m89kb" event={"ID":"227ae5b6-e7d6-45ce-b333-3dd508d56b35","Type":"ContainerStarted","Data":"d1a32962ff800086c7304e4d7adc1f221caccda3a92d99dbaf7aaa13a5eed3bc"} Jan 26 11:36:43 crc kubenswrapper[4867]: I0126 11:36:43.762682 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-56df8fb6b7-m89kb" event={"ID":"227ae5b6-e7d6-45ce-b333-3dd508d56b35","Type":"ContainerStarted","Data":"db5a53e4d6d3a31585951729dbce0a90f5d5d16a246c215ec38f4b3b6442304c"} Jan 26 11:36:43 crc kubenswrapper[4867]: I0126 11:36:43.806051 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-6f94776d6f-8b6q4" podStartSLOduration=2.806034238 podStartE2EDuration="2.806034238s" podCreationTimestamp="2026-01-26 11:36:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 11:36:43.803150501 +0000 UTC m=+1153.501725401" watchObservedRunningTime="2026-01-26 11:36:43.806034238 +0000 UTC m=+1153.504609138" Jan 26 11:36:43 crc kubenswrapper[4867]: I0126 11:36:43.856844 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 26 11:36:43 crc kubenswrapper[4867]: I0126 11:36:43.898017 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8hdzb\" (UniqueName: \"kubernetes.io/projected/d4cc6b95-bfa3-4814-ba91-92a2ffac1ce3-kube-api-access-8hdzb\") pod \"d4cc6b95-bfa3-4814-ba91-92a2ffac1ce3\" (UID: \"d4cc6b95-bfa3-4814-ba91-92a2ffac1ce3\") " Jan 26 11:36:43 crc kubenswrapper[4867]: I0126 11:36:43.898085 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: 
\"kubernetes.io/configmap/d4cc6b95-bfa3-4814-ba91-92a2ffac1ce3-dns-svc\") pod \"d4cc6b95-bfa3-4814-ba91-92a2ffac1ce3\" (UID: \"d4cc6b95-bfa3-4814-ba91-92a2ffac1ce3\") " Jan 26 11:36:43 crc kubenswrapper[4867]: I0126 11:36:43.898153 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d4cc6b95-bfa3-4814-ba91-92a2ffac1ce3-config\") pod \"d4cc6b95-bfa3-4814-ba91-92a2ffac1ce3\" (UID: \"d4cc6b95-bfa3-4814-ba91-92a2ffac1ce3\") " Jan 26 11:36:43 crc kubenswrapper[4867]: I0126 11:36:43.898235 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d4cc6b95-bfa3-4814-ba91-92a2ffac1ce3-ovsdbserver-nb\") pod \"d4cc6b95-bfa3-4814-ba91-92a2ffac1ce3\" (UID: \"d4cc6b95-bfa3-4814-ba91-92a2ffac1ce3\") " Jan 26 11:36:43 crc kubenswrapper[4867]: I0126 11:36:43.898297 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d4cc6b95-bfa3-4814-ba91-92a2ffac1ce3-dns-swift-storage-0\") pod \"d4cc6b95-bfa3-4814-ba91-92a2ffac1ce3\" (UID: \"d4cc6b95-bfa3-4814-ba91-92a2ffac1ce3\") " Jan 26 11:36:43 crc kubenswrapper[4867]: I0126 11:36:43.898459 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d4cc6b95-bfa3-4814-ba91-92a2ffac1ce3-ovsdbserver-sb\") pod \"d4cc6b95-bfa3-4814-ba91-92a2ffac1ce3\" (UID: \"d4cc6b95-bfa3-4814-ba91-92a2ffac1ce3\") " Jan 26 11:36:43 crc kubenswrapper[4867]: I0126 11:36:43.912969 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d4cc6b95-bfa3-4814-ba91-92a2ffac1ce3-kube-api-access-8hdzb" (OuterVolumeSpecName: "kube-api-access-8hdzb") pod "d4cc6b95-bfa3-4814-ba91-92a2ffac1ce3" (UID: "d4cc6b95-bfa3-4814-ba91-92a2ffac1ce3"). InnerVolumeSpecName "kube-api-access-8hdzb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 11:36:43 crc kubenswrapper[4867]: I0126 11:36:43.963890 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d4cc6b95-bfa3-4814-ba91-92a2ffac1ce3-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "d4cc6b95-bfa3-4814-ba91-92a2ffac1ce3" (UID: "d4cc6b95-bfa3-4814-ba91-92a2ffac1ce3"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 11:36:43 crc kubenswrapper[4867]: I0126 11:36:43.972133 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d4cc6b95-bfa3-4814-ba91-92a2ffac1ce3-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "d4cc6b95-bfa3-4814-ba91-92a2ffac1ce3" (UID: "d4cc6b95-bfa3-4814-ba91-92a2ffac1ce3"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 11:36:43 crc kubenswrapper[4867]: I0126 11:36:43.992871 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d4cc6b95-bfa3-4814-ba91-92a2ffac1ce3-config" (OuterVolumeSpecName: "config") pod "d4cc6b95-bfa3-4814-ba91-92a2ffac1ce3" (UID: "d4cc6b95-bfa3-4814-ba91-92a2ffac1ce3"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 11:36:43 crc kubenswrapper[4867]: I0126 11:36:43.993873 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d4cc6b95-bfa3-4814-ba91-92a2ffac1ce3-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "d4cc6b95-bfa3-4814-ba91-92a2ffac1ce3" (UID: "d4cc6b95-bfa3-4814-ba91-92a2ffac1ce3"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 11:36:44 crc kubenswrapper[4867]: I0126 11:36:44.000933 4867 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d4cc6b95-bfa3-4814-ba91-92a2ffac1ce3-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 26 11:36:44 crc kubenswrapper[4867]: I0126 11:36:44.000966 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8hdzb\" (UniqueName: \"kubernetes.io/projected/d4cc6b95-bfa3-4814-ba91-92a2ffac1ce3-kube-api-access-8hdzb\") on node \"crc\" DevicePath \"\"" Jan 26 11:36:44 crc kubenswrapper[4867]: I0126 11:36:44.000983 4867 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d4cc6b95-bfa3-4814-ba91-92a2ffac1ce3-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 26 11:36:44 crc kubenswrapper[4867]: I0126 11:36:44.000994 4867 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d4cc6b95-bfa3-4814-ba91-92a2ffac1ce3-config\") on node \"crc\" DevicePath \"\"" Jan 26 11:36:44 crc kubenswrapper[4867]: I0126 11:36:44.001002 4867 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d4cc6b95-bfa3-4814-ba91-92a2ffac1ce3-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 26 11:36:44 crc kubenswrapper[4867]: I0126 11:36:44.029199 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d4cc6b95-bfa3-4814-ba91-92a2ffac1ce3-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "d4cc6b95-bfa3-4814-ba91-92a2ffac1ce3" (UID: "d4cc6b95-bfa3-4814-ba91-92a2ffac1ce3"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 11:36:44 crc kubenswrapper[4867]: I0126 11:36:44.103297 4867 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d4cc6b95-bfa3-4814-ba91-92a2ffac1ce3-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 26 11:36:44 crc kubenswrapper[4867]: I0126 11:36:44.388184 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 26 11:36:44 crc kubenswrapper[4867]: W0126 11:36:44.407303 4867 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb0a89f19_00f0_4d65_9286_67f669f50d8a.slice/crio-d9e916bb16e2ee4b1cb6413eca3ad3d0c057b889450915176ef8f127dba12b38 WatchSource:0}: Error finding container d9e916bb16e2ee4b1cb6413eca3ad3d0c057b889450915176ef8f127dba12b38: Status 404 returned error can't find the container with id d9e916bb16e2ee4b1cb6413eca3ad3d0c057b889450915176ef8f127dba12b38 Jan 26 11:36:44 crc kubenswrapper[4867]: I0126 11:36:44.823487 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"b0a89f19-00f0-4d65-9286-67f669f50d8a","Type":"ContainerStarted","Data":"d9e916bb16e2ee4b1cb6413eca3ad3d0c057b889450915176ef8f127dba12b38"} Jan 26 11:36:44 crc kubenswrapper[4867]: I0126 11:36:44.834799 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-db-sync-h7r88" event={"ID":"3de6837e-5965-48ce-9967-2d259829ad4a","Type":"ContainerStarted","Data":"2d5200b98116c0502f815fd7e1409fcdd257bdc9acd7c8160ea6187f2e4fe98d"} Jan 26 11:36:44 crc kubenswrapper[4867]: I0126 11:36:44.849048 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-547bc4f4d-xs5kd" event={"ID":"3fe54576-9f68-4335-9449-16f7af831e94","Type":"ContainerStarted","Data":"094cd9322bf21ae572386bc05d73d983539094b8df3f95bcb35187db33ff10ed"} Jan 26 11:36:44 crc 
kubenswrapper[4867]: I0126 11:36:44.849154 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-547bc4f4d-xs5kd" event={"ID":"3fe54576-9f68-4335-9449-16f7af831e94","Type":"ContainerStarted","Data":"b3d29492b296cef898726b3fe9e376e727dc90dbfa69d7c28b128858c5ce3225"} Jan 26 11:36:44 crc kubenswrapper[4867]: I0126 11:36:44.850648 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-547bc4f4d-xs5kd" Jan 26 11:36:44 crc kubenswrapper[4867]: I0126 11:36:44.850683 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-547bc4f4d-xs5kd" Jan 26 11:36:44 crc kubenswrapper[4867]: I0126 11:36:44.862853 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"1822d382-9cff-4a22-82bb-e4954f192847","Type":"ContainerStarted","Data":"47e04b9d0b049c5a10576c1a7aad203d99736762f051635bef5a1a3e4b9af4c7"} Jan 26 11:36:44 crc kubenswrapper[4867]: I0126 11:36:44.862904 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"1822d382-9cff-4a22-82bb-e4954f192847","Type":"ContainerStarted","Data":"3ca2c102c072e6b63958d0b5727eb8ed03aa6ecd15dca4f275b02d9f5bf9dffd"} Jan 26 11:36:44 crc kubenswrapper[4867]: I0126 11:36:44.865665 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ironic-db-sync-h7r88" podStartSLOduration=20.397316392 podStartE2EDuration="26.865639306s" podCreationTimestamp="2026-01-26 11:36:18 +0000 UTC" firstStartedPulling="2026-01-26 11:36:33.567306012 +0000 UTC m=+1143.265880922" lastFinishedPulling="2026-01-26 11:36:40.035628926 +0000 UTC m=+1149.734203836" observedRunningTime="2026-01-26 11:36:44.856724428 +0000 UTC m=+1154.555299338" watchObservedRunningTime="2026-01-26 11:36:44.865639306 +0000 UTC m=+1154.564214216" Jan 26 11:36:44 crc kubenswrapper[4867]: I0126 11:36:44.895435 4867 pod_startup_latency_tracker.go:104] 
"Observed pod startup duration" pod="openstack/placement-547bc4f4d-xs5kd" podStartSLOduration=3.895414052 podStartE2EDuration="3.895414052s" podCreationTimestamp="2026-01-26 11:36:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 11:36:44.894313612 +0000 UTC m=+1154.592888522" watchObservedRunningTime="2026-01-26 11:36:44.895414052 +0000 UTC m=+1154.593988962" Jan 26 11:36:44 crc kubenswrapper[4867]: I0126 11:36:44.901091 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-cf78879c9-mfszq" event={"ID":"d4cc6b95-bfa3-4814-ba91-92a2ffac1ce3","Type":"ContainerDied","Data":"a8f70966986086bd2e630bc803b115f94046241488c5f390806a7862d72d5189"} Jan 26 11:36:44 crc kubenswrapper[4867]: I0126 11:36:44.901155 4867 scope.go:117] "RemoveContainer" containerID="d66add771a9a997b8c8de68dad914e1d44fa9807c3981c9e7bd9c88c60be8215" Jan 26 11:36:44 crc kubenswrapper[4867]: I0126 11:36:44.901332 4867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-cf78879c9-mfszq" Jan 26 11:36:44 crc kubenswrapper[4867]: I0126 11:36:44.911722 4867 generic.go:334] "Generic (PLEG): container finished" podID="227ae5b6-e7d6-45ce-b333-3dd508d56b35" containerID="d1a32962ff800086c7304e4d7adc1f221caccda3a92d99dbaf7aaa13a5eed3bc" exitCode=0 Jan 26 11:36:44 crc kubenswrapper[4867]: I0126 11:36:44.911921 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-56df8fb6b7-m89kb" event={"ID":"227ae5b6-e7d6-45ce-b333-3dd508d56b35","Type":"ContainerDied","Data":"d1a32962ff800086c7304e4d7adc1f221caccda3a92d99dbaf7aaa13a5eed3bc"} Jan 26 11:36:44 crc kubenswrapper[4867]: I0126 11:36:44.949121 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-cf78879c9-mfszq"] Jan 26 11:36:44 crc kubenswrapper[4867]: I0126 11:36:44.968941 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-cf78879c9-mfszq"] Jan 26 11:36:44 crc kubenswrapper[4867]: I0126 11:36:44.973678 4867 scope.go:117] "RemoveContainer" containerID="110de7fd5863d1df48a55b8976982f09d8edd06d6a88b42175cbd3c71c5d7878" Jan 26 11:36:45 crc kubenswrapper[4867]: I0126 11:36:45.708847 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 26 11:36:45 crc kubenswrapper[4867]: I0126 11:36:45.786919 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 26 11:36:46 crc kubenswrapper[4867]: I0126 11:36:46.583174 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d4cc6b95-bfa3-4814-ba91-92a2ffac1ce3" path="/var/lib/kubelet/pods/d4cc6b95-bfa3-4814-ba91-92a2ffac1ce3/volumes" Jan 26 11:36:46 crc kubenswrapper[4867]: I0126 11:36:46.944494 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-56df8fb6b7-m89kb" 
event={"ID":"227ae5b6-e7d6-45ce-b333-3dd508d56b35","Type":"ContainerStarted","Data":"29e32d6b11200c281e17114562769683762032efb9c74d667e3d5716b6829560"} Jan 26 11:36:46 crc kubenswrapper[4867]: I0126 11:36:46.945929 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-56df8fb6b7-m89kb" Jan 26 11:36:46 crc kubenswrapper[4867]: I0126 11:36:46.949834 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"b0a89f19-00f0-4d65-9286-67f669f50d8a","Type":"ContainerStarted","Data":"4b1e6cc998a7e3a62121b1563375ccbbd2408c21a1615decc65c5800de10d4a4"} Jan 26 11:36:46 crc kubenswrapper[4867]: I0126 11:36:46.954500 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="1822d382-9cff-4a22-82bb-e4954f192847" containerName="glance-log" containerID="cri-o://47e04b9d0b049c5a10576c1a7aad203d99736762f051635bef5a1a3e4b9af4c7" gracePeriod=30 Jan 26 11:36:46 crc kubenswrapper[4867]: I0126 11:36:46.955683 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="1822d382-9cff-4a22-82bb-e4954f192847" containerName="glance-httpd" containerID="cri-o://5fe23494b36f64c3c69b701aba0b7c4f69247af9ea8bf6f80b4d4637a52d1535" gracePeriod=30 Jan 26 11:36:46 crc kubenswrapper[4867]: I0126 11:36:46.957894 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"1822d382-9cff-4a22-82bb-e4954f192847","Type":"ContainerStarted","Data":"5fe23494b36f64c3c69b701aba0b7c4f69247af9ea8bf6f80b4d4637a52d1535"} Jan 26 11:36:46 crc kubenswrapper[4867]: I0126 11:36:46.996814 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-56df8fb6b7-m89kb" podStartSLOduration=5.996790081 podStartE2EDuration="5.996790081s" podCreationTimestamp="2026-01-26 11:36:41 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 11:36:46.981383499 +0000 UTC m=+1156.679958429" watchObservedRunningTime="2026-01-26 11:36:46.996790081 +0000 UTC m=+1156.695364991" Jan 26 11:36:47 crc kubenswrapper[4867]: I0126 11:36:47.019934 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=6.019910668 podStartE2EDuration="6.019910668s" podCreationTimestamp="2026-01-26 11:36:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 11:36:47.011125864 +0000 UTC m=+1156.709700774" watchObservedRunningTime="2026-01-26 11:36:47.019910668 +0000 UTC m=+1156.718485578" Jan 26 11:36:47 crc kubenswrapper[4867]: I0126 11:36:47.970378 4867 generic.go:334] "Generic (PLEG): container finished" podID="1822d382-9cff-4a22-82bb-e4954f192847" containerID="5fe23494b36f64c3c69b701aba0b7c4f69247af9ea8bf6f80b4d4637a52d1535" exitCode=0 Jan 26 11:36:47 crc kubenswrapper[4867]: I0126 11:36:47.970676 4867 generic.go:334] "Generic (PLEG): container finished" podID="1822d382-9cff-4a22-82bb-e4954f192847" containerID="47e04b9d0b049c5a10576c1a7aad203d99736762f051635bef5a1a3e4b9af4c7" exitCode=143 Jan 26 11:36:47 crc kubenswrapper[4867]: I0126 11:36:47.970451 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"1822d382-9cff-4a22-82bb-e4954f192847","Type":"ContainerDied","Data":"5fe23494b36f64c3c69b701aba0b7c4f69247af9ea8bf6f80b4d4637a52d1535"} Jan 26 11:36:47 crc kubenswrapper[4867]: I0126 11:36:47.970757 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"1822d382-9cff-4a22-82bb-e4954f192847","Type":"ContainerDied","Data":"47e04b9d0b049c5a10576c1a7aad203d99736762f051635bef5a1a3e4b9af4c7"} Jan 26 11:36:47 crc 
kubenswrapper[4867]: I0126 11:36:47.973593 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"b0a89f19-00f0-4d65-9286-67f669f50d8a","Type":"ContainerStarted","Data":"b24ef383c9fe7bd0115758840af2d59811a74b46d023ceb1a6e784851395a1f2"} Jan 26 11:36:47 crc kubenswrapper[4867]: I0126 11:36:47.974002 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="b0a89f19-00f0-4d65-9286-67f669f50d8a" containerName="glance-log" containerID="cri-o://4b1e6cc998a7e3a62121b1563375ccbbd2408c21a1615decc65c5800de10d4a4" gracePeriod=30 Jan 26 11:36:47 crc kubenswrapper[4867]: I0126 11:36:47.974070 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="b0a89f19-00f0-4d65-9286-67f669f50d8a" containerName="glance-httpd" containerID="cri-o://b24ef383c9fe7bd0115758840af2d59811a74b46d023ceb1a6e784851395a1f2" gracePeriod=30 Jan 26 11:36:48 crc kubenswrapper[4867]: I0126 11:36:48.014373 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=6.014345915 podStartE2EDuration="6.014345915s" podCreationTimestamp="2026-01-26 11:36:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 11:36:48.006379532 +0000 UTC m=+1157.704954482" watchObservedRunningTime="2026-01-26 11:36:48.014345915 +0000 UTC m=+1157.712920825" Jan 26 11:36:49 crc kubenswrapper[4867]: I0126 11:36:49.442322 4867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 26 11:36:49 crc kubenswrapper[4867]: I0126 11:36:49.536936 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"1822d382-9cff-4a22-82bb-e4954f192847\" (UID: \"1822d382-9cff-4a22-82bb-e4954f192847\") " Jan 26 11:36:49 crc kubenswrapper[4867]: I0126 11:36:49.537046 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1822d382-9cff-4a22-82bb-e4954f192847-scripts\") pod \"1822d382-9cff-4a22-82bb-e4954f192847\" (UID: \"1822d382-9cff-4a22-82bb-e4954f192847\") " Jan 26 11:36:49 crc kubenswrapper[4867]: I0126 11:36:49.537095 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1822d382-9cff-4a22-82bb-e4954f192847-logs\") pod \"1822d382-9cff-4a22-82bb-e4954f192847\" (UID: \"1822d382-9cff-4a22-82bb-e4954f192847\") " Jan 26 11:36:49 crc kubenswrapper[4867]: I0126 11:36:49.537160 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/1822d382-9cff-4a22-82bb-e4954f192847-httpd-run\") pod \"1822d382-9cff-4a22-82bb-e4954f192847\" (UID: \"1822d382-9cff-4a22-82bb-e4954f192847\") " Jan 26 11:36:49 crc kubenswrapper[4867]: I0126 11:36:49.537262 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6ck78\" (UniqueName: \"kubernetes.io/projected/1822d382-9cff-4a22-82bb-e4954f192847-kube-api-access-6ck78\") pod \"1822d382-9cff-4a22-82bb-e4954f192847\" (UID: \"1822d382-9cff-4a22-82bb-e4954f192847\") " Jan 26 11:36:49 crc kubenswrapper[4867]: I0126 11:36:49.537324 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/1822d382-9cff-4a22-82bb-e4954f192847-combined-ca-bundle\") pod \"1822d382-9cff-4a22-82bb-e4954f192847\" (UID: \"1822d382-9cff-4a22-82bb-e4954f192847\") " Jan 26 11:36:49 crc kubenswrapper[4867]: I0126 11:36:49.537347 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1822d382-9cff-4a22-82bb-e4954f192847-config-data\") pod \"1822d382-9cff-4a22-82bb-e4954f192847\" (UID: \"1822d382-9cff-4a22-82bb-e4954f192847\") " Jan 26 11:36:49 crc kubenswrapper[4867]: I0126 11:36:49.538094 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1822d382-9cff-4a22-82bb-e4954f192847-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "1822d382-9cff-4a22-82bb-e4954f192847" (UID: "1822d382-9cff-4a22-82bb-e4954f192847"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 11:36:49 crc kubenswrapper[4867]: I0126 11:36:49.538562 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1822d382-9cff-4a22-82bb-e4954f192847-logs" (OuterVolumeSpecName: "logs") pod "1822d382-9cff-4a22-82bb-e4954f192847" (UID: "1822d382-9cff-4a22-82bb-e4954f192847"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 11:36:49 crc kubenswrapper[4867]: I0126 11:36:49.544270 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage06-crc" (OuterVolumeSpecName: "glance") pod "1822d382-9cff-4a22-82bb-e4954f192847" (UID: "1822d382-9cff-4a22-82bb-e4954f192847"). InnerVolumeSpecName "local-storage06-crc". 
PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 26 11:36:49 crc kubenswrapper[4867]: I0126 11:36:49.544999 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1822d382-9cff-4a22-82bb-e4954f192847-scripts" (OuterVolumeSpecName: "scripts") pod "1822d382-9cff-4a22-82bb-e4954f192847" (UID: "1822d382-9cff-4a22-82bb-e4954f192847"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 11:36:49 crc kubenswrapper[4867]: I0126 11:36:49.545526 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1822d382-9cff-4a22-82bb-e4954f192847-kube-api-access-6ck78" (OuterVolumeSpecName: "kube-api-access-6ck78") pod "1822d382-9cff-4a22-82bb-e4954f192847" (UID: "1822d382-9cff-4a22-82bb-e4954f192847"). InnerVolumeSpecName "kube-api-access-6ck78". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 11:36:49 crc kubenswrapper[4867]: I0126 11:36:49.589212 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1822d382-9cff-4a22-82bb-e4954f192847-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1822d382-9cff-4a22-82bb-e4954f192847" (UID: "1822d382-9cff-4a22-82bb-e4954f192847"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 11:36:49 crc kubenswrapper[4867]: I0126 11:36:49.604280 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1822d382-9cff-4a22-82bb-e4954f192847-config-data" (OuterVolumeSpecName: "config-data") pod "1822d382-9cff-4a22-82bb-e4954f192847" (UID: "1822d382-9cff-4a22-82bb-e4954f192847"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 11:36:49 crc kubenswrapper[4867]: I0126 11:36:49.640608 4867 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") on node \"crc\" " Jan 26 11:36:49 crc kubenswrapper[4867]: I0126 11:36:49.640651 4867 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1822d382-9cff-4a22-82bb-e4954f192847-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 11:36:49 crc kubenswrapper[4867]: I0126 11:36:49.640664 4867 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1822d382-9cff-4a22-82bb-e4954f192847-logs\") on node \"crc\" DevicePath \"\"" Jan 26 11:36:49 crc kubenswrapper[4867]: I0126 11:36:49.640675 4867 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/1822d382-9cff-4a22-82bb-e4954f192847-httpd-run\") on node \"crc\" DevicePath \"\"" Jan 26 11:36:49 crc kubenswrapper[4867]: I0126 11:36:49.640689 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6ck78\" (UniqueName: \"kubernetes.io/projected/1822d382-9cff-4a22-82bb-e4954f192847-kube-api-access-6ck78\") on node \"crc\" DevicePath \"\"" Jan 26 11:36:49 crc kubenswrapper[4867]: I0126 11:36:49.640701 4867 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1822d382-9cff-4a22-82bb-e4954f192847-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 11:36:49 crc kubenswrapper[4867]: I0126 11:36:49.640712 4867 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1822d382-9cff-4a22-82bb-e4954f192847-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 11:36:49 crc kubenswrapper[4867]: I0126 11:36:49.667304 4867 operation_generator.go:917] UnmountDevice succeeded 
for volume "local-storage06-crc" (UniqueName: "kubernetes.io/local-volume/local-storage06-crc") on node "crc" Jan 26 11:36:49 crc kubenswrapper[4867]: I0126 11:36:49.742294 4867 reconciler_common.go:293] "Volume detached for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") on node \"crc\" DevicePath \"\"" Jan 26 11:36:49 crc kubenswrapper[4867]: I0126 11:36:49.955366 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 26 11:36:49 crc kubenswrapper[4867]: I0126 11:36:49.994673 4867 generic.go:334] "Generic (PLEG): container finished" podID="b0a89f19-00f0-4d65-9286-67f669f50d8a" containerID="b24ef383c9fe7bd0115758840af2d59811a74b46d023ceb1a6e784851395a1f2" exitCode=0 Jan 26 11:36:49 crc kubenswrapper[4867]: I0126 11:36:49.994705 4867 generic.go:334] "Generic (PLEG): container finished" podID="b0a89f19-00f0-4d65-9286-67f669f50d8a" containerID="4b1e6cc998a7e3a62121b1563375ccbbd2408c21a1615decc65c5800de10d4a4" exitCode=143 Jan 26 11:36:49 crc kubenswrapper[4867]: I0126 11:36:49.994738 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"b0a89f19-00f0-4d65-9286-67f669f50d8a","Type":"ContainerDied","Data":"b24ef383c9fe7bd0115758840af2d59811a74b46d023ceb1a6e784851395a1f2"} Jan 26 11:36:49 crc kubenswrapper[4867]: I0126 11:36:49.994765 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"b0a89f19-00f0-4d65-9286-67f669f50d8a","Type":"ContainerDied","Data":"4b1e6cc998a7e3a62121b1563375ccbbd2408c21a1615decc65c5800de10d4a4"} Jan 26 11:36:49 crc kubenswrapper[4867]: I0126 11:36:49.994779 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"b0a89f19-00f0-4d65-9286-67f669f50d8a","Type":"ContainerDied","Data":"d9e916bb16e2ee4b1cb6413eca3ad3d0c057b889450915176ef8f127dba12b38"} 
Jan 26 11:36:49 crc kubenswrapper[4867]: I0126 11:36:49.994794 4867 scope.go:117] "RemoveContainer" containerID="b24ef383c9fe7bd0115758840af2d59811a74b46d023ceb1a6e784851395a1f2" Jan 26 11:36:49 crc kubenswrapper[4867]: I0126 11:36:49.994903 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 26 11:36:49 crc kubenswrapper[4867]: I0126 11:36:49.998113 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"1822d382-9cff-4a22-82bb-e4954f192847","Type":"ContainerDied","Data":"3ca2c102c072e6b63958d0b5727eb8ed03aa6ecd15dca4f275b02d9f5bf9dffd"} Jan 26 11:36:49 crc kubenswrapper[4867]: I0126 11:36:49.998162 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 26 11:36:50 crc kubenswrapper[4867]: I0126 11:36:50.027849 4867 scope.go:117] "RemoveContainer" containerID="4b1e6cc998a7e3a62121b1563375ccbbd2408c21a1615decc65c5800de10d4a4" Jan 26 11:36:50 crc kubenswrapper[4867]: I0126 11:36:50.033753 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 26 11:36:50 crc kubenswrapper[4867]: I0126 11:36:50.047871 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b0a89f19-00f0-4d65-9286-67f669f50d8a-config-data\") pod \"b0a89f19-00f0-4d65-9286-67f669f50d8a\" (UID: \"b0a89f19-00f0-4d65-9286-67f669f50d8a\") " Jan 26 11:36:50 crc kubenswrapper[4867]: I0126 11:36:50.047944 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/b0a89f19-00f0-4d65-9286-67f669f50d8a-httpd-run\") pod \"b0a89f19-00f0-4d65-9286-67f669f50d8a\" (UID: \"b0a89f19-00f0-4d65-9286-67f669f50d8a\") " Jan 26 11:36:50 crc kubenswrapper[4867]: I0126 11:36:50.047966 4867 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"b0a89f19-00f0-4d65-9286-67f669f50d8a\" (UID: \"b0a89f19-00f0-4d65-9286-67f669f50d8a\") " Jan 26 11:36:50 crc kubenswrapper[4867]: I0126 11:36:50.048009 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b0a89f19-00f0-4d65-9286-67f669f50d8a-logs\") pod \"b0a89f19-00f0-4d65-9286-67f669f50d8a\" (UID: \"b0a89f19-00f0-4d65-9286-67f669f50d8a\") " Jan 26 11:36:50 crc kubenswrapper[4867]: I0126 11:36:50.048074 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b0a89f19-00f0-4d65-9286-67f669f50d8a-scripts\") pod \"b0a89f19-00f0-4d65-9286-67f669f50d8a\" (UID: \"b0a89f19-00f0-4d65-9286-67f669f50d8a\") " Jan 26 11:36:50 crc kubenswrapper[4867]: I0126 11:36:50.048157 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l5spp\" (UniqueName: \"kubernetes.io/projected/b0a89f19-00f0-4d65-9286-67f669f50d8a-kube-api-access-l5spp\") pod \"b0a89f19-00f0-4d65-9286-67f669f50d8a\" (UID: \"b0a89f19-00f0-4d65-9286-67f669f50d8a\") " Jan 26 11:36:50 crc kubenswrapper[4867]: I0126 11:36:50.048297 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b0a89f19-00f0-4d65-9286-67f669f50d8a-combined-ca-bundle\") pod \"b0a89f19-00f0-4d65-9286-67f669f50d8a\" (UID: \"b0a89f19-00f0-4d65-9286-67f669f50d8a\") " Jan 26 11:36:50 crc kubenswrapper[4867]: I0126 11:36:50.049438 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b0a89f19-00f0-4d65-9286-67f669f50d8a-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "b0a89f19-00f0-4d65-9286-67f669f50d8a" (UID: "b0a89f19-00f0-4d65-9286-67f669f50d8a"). 
InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 11:36:50 crc kubenswrapper[4867]: I0126 11:36:50.050491 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b0a89f19-00f0-4d65-9286-67f669f50d8a-logs" (OuterVolumeSpecName: "logs") pod "b0a89f19-00f0-4d65-9286-67f669f50d8a" (UID: "b0a89f19-00f0-4d65-9286-67f669f50d8a"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 11:36:50 crc kubenswrapper[4867]: I0126 11:36:50.052702 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 26 11:36:50 crc kubenswrapper[4867]: I0126 11:36:50.053967 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b0a89f19-00f0-4d65-9286-67f669f50d8a-scripts" (OuterVolumeSpecName: "scripts") pod "b0a89f19-00f0-4d65-9286-67f669f50d8a" (UID: "b0a89f19-00f0-4d65-9286-67f669f50d8a"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 11:36:50 crc kubenswrapper[4867]: I0126 11:36:50.055123 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage05-crc" (OuterVolumeSpecName: "glance") pod "b0a89f19-00f0-4d65-9286-67f669f50d8a" (UID: "b0a89f19-00f0-4d65-9286-67f669f50d8a"). InnerVolumeSpecName "local-storage05-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 26 11:36:50 crc kubenswrapper[4867]: I0126 11:36:50.057140 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b0a89f19-00f0-4d65-9286-67f669f50d8a-kube-api-access-l5spp" (OuterVolumeSpecName: "kube-api-access-l5spp") pod "b0a89f19-00f0-4d65-9286-67f669f50d8a" (UID: "b0a89f19-00f0-4d65-9286-67f669f50d8a"). InnerVolumeSpecName "kube-api-access-l5spp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 11:36:50 crc kubenswrapper[4867]: I0126 11:36:50.064507 4867 scope.go:117] "RemoveContainer" containerID="b24ef383c9fe7bd0115758840af2d59811a74b46d023ceb1a6e784851395a1f2" Jan 26 11:36:50 crc kubenswrapper[4867]: E0126 11:36:50.065371 4867 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b24ef383c9fe7bd0115758840af2d59811a74b46d023ceb1a6e784851395a1f2\": container with ID starting with b24ef383c9fe7bd0115758840af2d59811a74b46d023ceb1a6e784851395a1f2 not found: ID does not exist" containerID="b24ef383c9fe7bd0115758840af2d59811a74b46d023ceb1a6e784851395a1f2" Jan 26 11:36:50 crc kubenswrapper[4867]: I0126 11:36:50.065399 4867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b24ef383c9fe7bd0115758840af2d59811a74b46d023ceb1a6e784851395a1f2"} err="failed to get container status \"b24ef383c9fe7bd0115758840af2d59811a74b46d023ceb1a6e784851395a1f2\": rpc error: code = NotFound desc = could not find container \"b24ef383c9fe7bd0115758840af2d59811a74b46d023ceb1a6e784851395a1f2\": container with ID starting with b24ef383c9fe7bd0115758840af2d59811a74b46d023ceb1a6e784851395a1f2 not found: ID does not exist" Jan 26 11:36:50 crc kubenswrapper[4867]: I0126 11:36:50.065420 4867 scope.go:117] "RemoveContainer" containerID="4b1e6cc998a7e3a62121b1563375ccbbd2408c21a1615decc65c5800de10d4a4" Jan 26 11:36:50 crc kubenswrapper[4867]: E0126 11:36:50.069315 4867 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4b1e6cc998a7e3a62121b1563375ccbbd2408c21a1615decc65c5800de10d4a4\": container with ID starting with 4b1e6cc998a7e3a62121b1563375ccbbd2408c21a1615decc65c5800de10d4a4 not found: ID does not exist" containerID="4b1e6cc998a7e3a62121b1563375ccbbd2408c21a1615decc65c5800de10d4a4" Jan 26 11:36:50 crc kubenswrapper[4867]: I0126 11:36:50.069353 
4867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4b1e6cc998a7e3a62121b1563375ccbbd2408c21a1615decc65c5800de10d4a4"} err="failed to get container status \"4b1e6cc998a7e3a62121b1563375ccbbd2408c21a1615decc65c5800de10d4a4\": rpc error: code = NotFound desc = could not find container \"4b1e6cc998a7e3a62121b1563375ccbbd2408c21a1615decc65c5800de10d4a4\": container with ID starting with 4b1e6cc998a7e3a62121b1563375ccbbd2408c21a1615decc65c5800de10d4a4 not found: ID does not exist" Jan 26 11:36:50 crc kubenswrapper[4867]: I0126 11:36:50.069374 4867 scope.go:117] "RemoveContainer" containerID="b24ef383c9fe7bd0115758840af2d59811a74b46d023ceb1a6e784851395a1f2" Jan 26 11:36:50 crc kubenswrapper[4867]: I0126 11:36:50.073312 4867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b24ef383c9fe7bd0115758840af2d59811a74b46d023ceb1a6e784851395a1f2"} err="failed to get container status \"b24ef383c9fe7bd0115758840af2d59811a74b46d023ceb1a6e784851395a1f2\": rpc error: code = NotFound desc = could not find container \"b24ef383c9fe7bd0115758840af2d59811a74b46d023ceb1a6e784851395a1f2\": container with ID starting with b24ef383c9fe7bd0115758840af2d59811a74b46d023ceb1a6e784851395a1f2 not found: ID does not exist" Jan 26 11:36:50 crc kubenswrapper[4867]: I0126 11:36:50.073350 4867 scope.go:117] "RemoveContainer" containerID="4b1e6cc998a7e3a62121b1563375ccbbd2408c21a1615decc65c5800de10d4a4" Jan 26 11:36:50 crc kubenswrapper[4867]: I0126 11:36:50.074670 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Jan 26 11:36:50 crc kubenswrapper[4867]: E0126 11:36:50.075076 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d4cc6b95-bfa3-4814-ba91-92a2ffac1ce3" containerName="dnsmasq-dns" Jan 26 11:36:50 crc kubenswrapper[4867]: I0126 11:36:50.075095 4867 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="d4cc6b95-bfa3-4814-ba91-92a2ffac1ce3" containerName="dnsmasq-dns" Jan 26 11:36:50 crc kubenswrapper[4867]: E0126 11:36:50.075112 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d4cc6b95-bfa3-4814-ba91-92a2ffac1ce3" containerName="init" Jan 26 11:36:50 crc kubenswrapper[4867]: I0126 11:36:50.075143 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="d4cc6b95-bfa3-4814-ba91-92a2ffac1ce3" containerName="init" Jan 26 11:36:50 crc kubenswrapper[4867]: E0126 11:36:50.075156 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b0a89f19-00f0-4d65-9286-67f669f50d8a" containerName="glance-httpd" Jan 26 11:36:50 crc kubenswrapper[4867]: I0126 11:36:50.075162 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="b0a89f19-00f0-4d65-9286-67f669f50d8a" containerName="glance-httpd" Jan 26 11:36:50 crc kubenswrapper[4867]: E0126 11:36:50.075176 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1822d382-9cff-4a22-82bb-e4954f192847" containerName="glance-log" Jan 26 11:36:50 crc kubenswrapper[4867]: I0126 11:36:50.075181 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="1822d382-9cff-4a22-82bb-e4954f192847" containerName="glance-log" Jan 26 11:36:50 crc kubenswrapper[4867]: E0126 11:36:50.075209 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b0a89f19-00f0-4d65-9286-67f669f50d8a" containerName="glance-log" Jan 26 11:36:50 crc kubenswrapper[4867]: I0126 11:36:50.075215 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="b0a89f19-00f0-4d65-9286-67f669f50d8a" containerName="glance-log" Jan 26 11:36:50 crc kubenswrapper[4867]: E0126 11:36:50.075252 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1822d382-9cff-4a22-82bb-e4954f192847" containerName="glance-httpd" Jan 26 11:36:50 crc kubenswrapper[4867]: I0126 11:36:50.075258 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="1822d382-9cff-4a22-82bb-e4954f192847" 
containerName="glance-httpd" Jan 26 11:36:50 crc kubenswrapper[4867]: I0126 11:36:50.075447 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="d4cc6b95-bfa3-4814-ba91-92a2ffac1ce3" containerName="dnsmasq-dns" Jan 26 11:36:50 crc kubenswrapper[4867]: I0126 11:36:50.075465 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="b0a89f19-00f0-4d65-9286-67f669f50d8a" containerName="glance-log" Jan 26 11:36:50 crc kubenswrapper[4867]: I0126 11:36:50.075501 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="b0a89f19-00f0-4d65-9286-67f669f50d8a" containerName="glance-httpd" Jan 26 11:36:50 crc kubenswrapper[4867]: I0126 11:36:50.075514 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="1822d382-9cff-4a22-82bb-e4954f192847" containerName="glance-log" Jan 26 11:36:50 crc kubenswrapper[4867]: I0126 11:36:50.075522 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="1822d382-9cff-4a22-82bb-e4954f192847" containerName="glance-httpd" Jan 26 11:36:50 crc kubenswrapper[4867]: I0126 11:36:50.076614 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 26 11:36:50 crc kubenswrapper[4867]: I0126 11:36:50.077554 4867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4b1e6cc998a7e3a62121b1563375ccbbd2408c21a1615decc65c5800de10d4a4"} err="failed to get container status \"4b1e6cc998a7e3a62121b1563375ccbbd2408c21a1615decc65c5800de10d4a4\": rpc error: code = NotFound desc = could not find container \"4b1e6cc998a7e3a62121b1563375ccbbd2408c21a1615decc65c5800de10d4a4\": container with ID starting with 4b1e6cc998a7e3a62121b1563375ccbbd2408c21a1615decc65c5800de10d4a4 not found: ID does not exist" Jan 26 11:36:50 crc kubenswrapper[4867]: I0126 11:36:50.077587 4867 scope.go:117] "RemoveContainer" containerID="5fe23494b36f64c3c69b701aba0b7c4f69247af9ea8bf6f80b4d4637a52d1535" Jan 26 11:36:50 crc kubenswrapper[4867]: I0126 11:36:50.083103 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Jan 26 11:36:50 crc kubenswrapper[4867]: I0126 11:36:50.083426 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Jan 26 11:36:50 crc kubenswrapper[4867]: I0126 11:36:50.090116 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 26 11:36:50 crc kubenswrapper[4867]: I0126 11:36:50.102352 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b0a89f19-00f0-4d65-9286-67f669f50d8a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b0a89f19-00f0-4d65-9286-67f669f50d8a" (UID: "b0a89f19-00f0-4d65-9286-67f669f50d8a"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 11:36:50 crc kubenswrapper[4867]: I0126 11:36:50.141759 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b0a89f19-00f0-4d65-9286-67f669f50d8a-config-data" (OuterVolumeSpecName: "config-data") pod "b0a89f19-00f0-4d65-9286-67f669f50d8a" (UID: "b0a89f19-00f0-4d65-9286-67f669f50d8a"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 11:36:50 crc kubenswrapper[4867]: I0126 11:36:50.149051 4867 scope.go:117] "RemoveContainer" containerID="47e04b9d0b049c5a10576c1a7aad203d99736762f051635bef5a1a3e4b9af4c7" Jan 26 11:36:50 crc kubenswrapper[4867]: I0126 11:36:50.151180 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/195fa02f-5887-4d8e-a103-2261e65a9c96-scripts\") pod \"glance-default-external-api-0\" (UID: \"195fa02f-5887-4d8e-a103-2261e65a9c96\") " pod="openstack/glance-default-external-api-0" Jan 26 11:36:50 crc kubenswrapper[4867]: I0126 11:36:50.151276 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/195fa02f-5887-4d8e-a103-2261e65a9c96-config-data\") pod \"glance-default-external-api-0\" (UID: \"195fa02f-5887-4d8e-a103-2261e65a9c96\") " pod="openstack/glance-default-external-api-0" Jan 26 11:36:50 crc kubenswrapper[4867]: I0126 11:36:50.151295 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"glance-default-external-api-0\" (UID: \"195fa02f-5887-4d8e-a103-2261e65a9c96\") " pod="openstack/glance-default-external-api-0" Jan 26 11:36:50 crc kubenswrapper[4867]: I0126 11:36:50.151322 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/195fa02f-5887-4d8e-a103-2261e65a9c96-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"195fa02f-5887-4d8e-a103-2261e65a9c96\") " pod="openstack/glance-default-external-api-0" Jan 26 11:36:50 crc kubenswrapper[4867]: I0126 11:36:50.151342 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s9k7h\" (UniqueName: \"kubernetes.io/projected/195fa02f-5887-4d8e-a103-2261e65a9c96-kube-api-access-s9k7h\") pod \"glance-default-external-api-0\" (UID: \"195fa02f-5887-4d8e-a103-2261e65a9c96\") " pod="openstack/glance-default-external-api-0" Jan 26 11:36:50 crc kubenswrapper[4867]: I0126 11:36:50.151361 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/195fa02f-5887-4d8e-a103-2261e65a9c96-logs\") pod \"glance-default-external-api-0\" (UID: \"195fa02f-5887-4d8e-a103-2261e65a9c96\") " pod="openstack/glance-default-external-api-0" Jan 26 11:36:50 crc kubenswrapper[4867]: I0126 11:36:50.151388 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/195fa02f-5887-4d8e-a103-2261e65a9c96-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"195fa02f-5887-4d8e-a103-2261e65a9c96\") " pod="openstack/glance-default-external-api-0" Jan 26 11:36:50 crc kubenswrapper[4867]: I0126 11:36:50.151409 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/195fa02f-5887-4d8e-a103-2261e65a9c96-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"195fa02f-5887-4d8e-a103-2261e65a9c96\") " pod="openstack/glance-default-external-api-0" Jan 26 11:36:50 crc kubenswrapper[4867]: I0126 11:36:50.151505 4867 reconciler_common.go:293] "Volume detached for 
volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b0a89f19-00f0-4d65-9286-67f669f50d8a-logs\") on node \"crc\" DevicePath \"\"" Jan 26 11:36:50 crc kubenswrapper[4867]: I0126 11:36:50.151516 4867 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b0a89f19-00f0-4d65-9286-67f669f50d8a-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 11:36:50 crc kubenswrapper[4867]: I0126 11:36:50.151525 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l5spp\" (UniqueName: \"kubernetes.io/projected/b0a89f19-00f0-4d65-9286-67f669f50d8a-kube-api-access-l5spp\") on node \"crc\" DevicePath \"\"" Jan 26 11:36:50 crc kubenswrapper[4867]: I0126 11:36:50.151536 4867 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b0a89f19-00f0-4d65-9286-67f669f50d8a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 11:36:50 crc kubenswrapper[4867]: I0126 11:36:50.151544 4867 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b0a89f19-00f0-4d65-9286-67f669f50d8a-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 11:36:50 crc kubenswrapper[4867]: I0126 11:36:50.151552 4867 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/b0a89f19-00f0-4d65-9286-67f669f50d8a-httpd-run\") on node \"crc\" DevicePath \"\"" Jan 26 11:36:50 crc kubenswrapper[4867]: I0126 11:36:50.151571 4867 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") on node \"crc\" " Jan 26 11:36:50 crc kubenswrapper[4867]: I0126 11:36:50.177248 4867 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage05-crc" (UniqueName: "kubernetes.io/local-volume/local-storage05-crc") on node "crc" Jan 26 11:36:50 crc kubenswrapper[4867]: I0126 
11:36:50.253683 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/195fa02f-5887-4d8e-a103-2261e65a9c96-scripts\") pod \"glance-default-external-api-0\" (UID: \"195fa02f-5887-4d8e-a103-2261e65a9c96\") " pod="openstack/glance-default-external-api-0" Jan 26 11:36:50 crc kubenswrapper[4867]: I0126 11:36:50.253761 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/195fa02f-5887-4d8e-a103-2261e65a9c96-config-data\") pod \"glance-default-external-api-0\" (UID: \"195fa02f-5887-4d8e-a103-2261e65a9c96\") " pod="openstack/glance-default-external-api-0" Jan 26 11:36:50 crc kubenswrapper[4867]: I0126 11:36:50.253794 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"glance-default-external-api-0\" (UID: \"195fa02f-5887-4d8e-a103-2261e65a9c96\") " pod="openstack/glance-default-external-api-0" Jan 26 11:36:50 crc kubenswrapper[4867]: I0126 11:36:50.253827 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/195fa02f-5887-4d8e-a103-2261e65a9c96-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"195fa02f-5887-4d8e-a103-2261e65a9c96\") " pod="openstack/glance-default-external-api-0" Jan 26 11:36:50 crc kubenswrapper[4867]: I0126 11:36:50.253849 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s9k7h\" (UniqueName: \"kubernetes.io/projected/195fa02f-5887-4d8e-a103-2261e65a9c96-kube-api-access-s9k7h\") pod \"glance-default-external-api-0\" (UID: \"195fa02f-5887-4d8e-a103-2261e65a9c96\") " pod="openstack/glance-default-external-api-0" Jan 26 11:36:50 crc kubenswrapper[4867]: I0126 11:36:50.253878 4867 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/195fa02f-5887-4d8e-a103-2261e65a9c96-logs\") pod \"glance-default-external-api-0\" (UID: \"195fa02f-5887-4d8e-a103-2261e65a9c96\") " pod="openstack/glance-default-external-api-0" Jan 26 11:36:50 crc kubenswrapper[4867]: I0126 11:36:50.253919 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/195fa02f-5887-4d8e-a103-2261e65a9c96-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"195fa02f-5887-4d8e-a103-2261e65a9c96\") " pod="openstack/glance-default-external-api-0" Jan 26 11:36:50 crc kubenswrapper[4867]: I0126 11:36:50.253938 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/195fa02f-5887-4d8e-a103-2261e65a9c96-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"195fa02f-5887-4d8e-a103-2261e65a9c96\") " pod="openstack/glance-default-external-api-0" Jan 26 11:36:50 crc kubenswrapper[4867]: I0126 11:36:50.253992 4867 reconciler_common.go:293] "Volume detached for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") on node \"crc\" DevicePath \"\"" Jan 26 11:36:50 crc kubenswrapper[4867]: I0126 11:36:50.254159 4867 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"glance-default-external-api-0\" (UID: \"195fa02f-5887-4d8e-a103-2261e65a9c96\") device mount path \"/mnt/openstack/pv06\"" pod="openstack/glance-default-external-api-0" Jan 26 11:36:50 crc kubenswrapper[4867]: I0126 11:36:50.254447 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/195fa02f-5887-4d8e-a103-2261e65a9c96-httpd-run\") pod \"glance-default-external-api-0\" (UID: 
\"195fa02f-5887-4d8e-a103-2261e65a9c96\") " pod="openstack/glance-default-external-api-0" Jan 26 11:36:50 crc kubenswrapper[4867]: I0126 11:36:50.254473 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/195fa02f-5887-4d8e-a103-2261e65a9c96-logs\") pod \"glance-default-external-api-0\" (UID: \"195fa02f-5887-4d8e-a103-2261e65a9c96\") " pod="openstack/glance-default-external-api-0" Jan 26 11:36:50 crc kubenswrapper[4867]: I0126 11:36:50.259676 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/195fa02f-5887-4d8e-a103-2261e65a9c96-scripts\") pod \"glance-default-external-api-0\" (UID: \"195fa02f-5887-4d8e-a103-2261e65a9c96\") " pod="openstack/glance-default-external-api-0" Jan 26 11:36:50 crc kubenswrapper[4867]: I0126 11:36:50.259692 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/195fa02f-5887-4d8e-a103-2261e65a9c96-config-data\") pod \"glance-default-external-api-0\" (UID: \"195fa02f-5887-4d8e-a103-2261e65a9c96\") " pod="openstack/glance-default-external-api-0" Jan 26 11:36:50 crc kubenswrapper[4867]: I0126 11:36:50.260374 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/195fa02f-5887-4d8e-a103-2261e65a9c96-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"195fa02f-5887-4d8e-a103-2261e65a9c96\") " pod="openstack/glance-default-external-api-0" Jan 26 11:36:50 crc kubenswrapper[4867]: I0126 11:36:50.261131 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/195fa02f-5887-4d8e-a103-2261e65a9c96-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"195fa02f-5887-4d8e-a103-2261e65a9c96\") " pod="openstack/glance-default-external-api-0" Jan 26 11:36:50 crc 
kubenswrapper[4867]: I0126 11:36:50.272300 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s9k7h\" (UniqueName: \"kubernetes.io/projected/195fa02f-5887-4d8e-a103-2261e65a9c96-kube-api-access-s9k7h\") pod \"glance-default-external-api-0\" (UID: \"195fa02f-5887-4d8e-a103-2261e65a9c96\") " pod="openstack/glance-default-external-api-0" Jan 26 11:36:50 crc kubenswrapper[4867]: I0126 11:36:50.287663 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"glance-default-external-api-0\" (UID: \"195fa02f-5887-4d8e-a103-2261e65a9c96\") " pod="openstack/glance-default-external-api-0" Jan 26 11:36:50 crc kubenswrapper[4867]: I0126 11:36:50.412348 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 26 11:36:50 crc kubenswrapper[4867]: I0126 11:36:50.419940 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 26 11:36:50 crc kubenswrapper[4867]: I0126 11:36:50.429882 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 26 11:36:50 crc kubenswrapper[4867]: I0126 11:36:50.452728 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 26 11:36:50 crc kubenswrapper[4867]: I0126 11:36:50.454564 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 26 11:36:50 crc kubenswrapper[4867]: I0126 11:36:50.461038 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Jan 26 11:36:50 crc kubenswrapper[4867]: I0126 11:36:50.461277 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Jan 26 11:36:50 crc kubenswrapper[4867]: I0126 11:36:50.495635 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 26 11:36:50 crc kubenswrapper[4867]: I0126 11:36:50.558288 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dfc017b6-886f-48d3-8f1e-cef59e587503-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"dfc017b6-886f-48d3-8f1e-cef59e587503\") " pod="openstack/glance-default-internal-api-0" Jan 26 11:36:50 crc kubenswrapper[4867]: I0126 11:36:50.558572 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/dfc017b6-886f-48d3-8f1e-cef59e587503-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"dfc017b6-886f-48d3-8f1e-cef59e587503\") " pod="openstack/glance-default-internal-api-0" Jan 26 11:36:50 crc kubenswrapper[4867]: I0126 11:36:50.558620 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dfc017b6-886f-48d3-8f1e-cef59e587503-config-data\") pod \"glance-default-internal-api-0\" (UID: \"dfc017b6-886f-48d3-8f1e-cef59e587503\") " pod="openstack/glance-default-internal-api-0" Jan 26 11:36:50 crc kubenswrapper[4867]: I0126 11:36:50.558648 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage05-crc\" 
(UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"glance-default-internal-api-0\" (UID: \"dfc017b6-886f-48d3-8f1e-cef59e587503\") " pod="openstack/glance-default-internal-api-0" Jan 26 11:36:50 crc kubenswrapper[4867]: I0126 11:36:50.558670 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qjv49\" (UniqueName: \"kubernetes.io/projected/dfc017b6-886f-48d3-8f1e-cef59e587503-kube-api-access-qjv49\") pod \"glance-default-internal-api-0\" (UID: \"dfc017b6-886f-48d3-8f1e-cef59e587503\") " pod="openstack/glance-default-internal-api-0" Jan 26 11:36:50 crc kubenswrapper[4867]: I0126 11:36:50.558686 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/dfc017b6-886f-48d3-8f1e-cef59e587503-logs\") pod \"glance-default-internal-api-0\" (UID: \"dfc017b6-886f-48d3-8f1e-cef59e587503\") " pod="openstack/glance-default-internal-api-0" Jan 26 11:36:50 crc kubenswrapper[4867]: I0126 11:36:50.558710 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/dfc017b6-886f-48d3-8f1e-cef59e587503-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"dfc017b6-886f-48d3-8f1e-cef59e587503\") " pod="openstack/glance-default-internal-api-0" Jan 26 11:36:50 crc kubenswrapper[4867]: I0126 11:36:50.558727 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dfc017b6-886f-48d3-8f1e-cef59e587503-scripts\") pod \"glance-default-internal-api-0\" (UID: \"dfc017b6-886f-48d3-8f1e-cef59e587503\") " pod="openstack/glance-default-internal-api-0" Jan 26 11:36:50 crc kubenswrapper[4867]: I0126 11:36:50.577450 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1822d382-9cff-4a22-82bb-e4954f192847" 
path="/var/lib/kubelet/pods/1822d382-9cff-4a22-82bb-e4954f192847/volumes" Jan 26 11:36:50 crc kubenswrapper[4867]: I0126 11:36:50.578369 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b0a89f19-00f0-4d65-9286-67f669f50d8a" path="/var/lib/kubelet/pods/b0a89f19-00f0-4d65-9286-67f669f50d8a/volumes" Jan 26 11:36:50 crc kubenswrapper[4867]: I0126 11:36:50.666914 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/dfc017b6-886f-48d3-8f1e-cef59e587503-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"dfc017b6-886f-48d3-8f1e-cef59e587503\") " pod="openstack/glance-default-internal-api-0" Jan 26 11:36:50 crc kubenswrapper[4867]: I0126 11:36:50.667365 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dfc017b6-886f-48d3-8f1e-cef59e587503-scripts\") pod \"glance-default-internal-api-0\" (UID: \"dfc017b6-886f-48d3-8f1e-cef59e587503\") " pod="openstack/glance-default-internal-api-0" Jan 26 11:36:50 crc kubenswrapper[4867]: I0126 11:36:50.667664 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dfc017b6-886f-48d3-8f1e-cef59e587503-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"dfc017b6-886f-48d3-8f1e-cef59e587503\") " pod="openstack/glance-default-internal-api-0" Jan 26 11:36:50 crc kubenswrapper[4867]: I0126 11:36:50.667704 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/dfc017b6-886f-48d3-8f1e-cef59e587503-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"dfc017b6-886f-48d3-8f1e-cef59e587503\") " pod="openstack/glance-default-internal-api-0" Jan 26 11:36:50 crc kubenswrapper[4867]: I0126 11:36:50.667826 4867 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dfc017b6-886f-48d3-8f1e-cef59e587503-config-data\") pod \"glance-default-internal-api-0\" (UID: \"dfc017b6-886f-48d3-8f1e-cef59e587503\") " pod="openstack/glance-default-internal-api-0" Jan 26 11:36:50 crc kubenswrapper[4867]: I0126 11:36:50.667894 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"glance-default-internal-api-0\" (UID: \"dfc017b6-886f-48d3-8f1e-cef59e587503\") " pod="openstack/glance-default-internal-api-0" Jan 26 11:36:50 crc kubenswrapper[4867]: I0126 11:36:50.667943 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qjv49\" (UniqueName: \"kubernetes.io/projected/dfc017b6-886f-48d3-8f1e-cef59e587503-kube-api-access-qjv49\") pod \"glance-default-internal-api-0\" (UID: \"dfc017b6-886f-48d3-8f1e-cef59e587503\") " pod="openstack/glance-default-internal-api-0" Jan 26 11:36:50 crc kubenswrapper[4867]: I0126 11:36:50.667969 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/dfc017b6-886f-48d3-8f1e-cef59e587503-logs\") pod \"glance-default-internal-api-0\" (UID: \"dfc017b6-886f-48d3-8f1e-cef59e587503\") " pod="openstack/glance-default-internal-api-0" Jan 26 11:36:50 crc kubenswrapper[4867]: I0126 11:36:50.668974 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/dfc017b6-886f-48d3-8f1e-cef59e587503-logs\") pod \"glance-default-internal-api-0\" (UID: \"dfc017b6-886f-48d3-8f1e-cef59e587503\") " pod="openstack/glance-default-internal-api-0" Jan 26 11:36:50 crc kubenswrapper[4867]: I0126 11:36:50.668979 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: 
\"kubernetes.io/empty-dir/dfc017b6-886f-48d3-8f1e-cef59e587503-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"dfc017b6-886f-48d3-8f1e-cef59e587503\") " pod="openstack/glance-default-internal-api-0" Jan 26 11:36:50 crc kubenswrapper[4867]: I0126 11:36:50.669641 4867 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"glance-default-internal-api-0\" (UID: \"dfc017b6-886f-48d3-8f1e-cef59e587503\") device mount path \"/mnt/openstack/pv05\"" pod="openstack/glance-default-internal-api-0" Jan 26 11:36:50 crc kubenswrapper[4867]: I0126 11:36:50.682029 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dfc017b6-886f-48d3-8f1e-cef59e587503-config-data\") pod \"glance-default-internal-api-0\" (UID: \"dfc017b6-886f-48d3-8f1e-cef59e587503\") " pod="openstack/glance-default-internal-api-0" Jan 26 11:36:50 crc kubenswrapper[4867]: I0126 11:36:50.684448 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/dfc017b6-886f-48d3-8f1e-cef59e587503-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"dfc017b6-886f-48d3-8f1e-cef59e587503\") " pod="openstack/glance-default-internal-api-0" Jan 26 11:36:50 crc kubenswrapper[4867]: I0126 11:36:50.689311 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dfc017b6-886f-48d3-8f1e-cef59e587503-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"dfc017b6-886f-48d3-8f1e-cef59e587503\") " pod="openstack/glance-default-internal-api-0" Jan 26 11:36:50 crc kubenswrapper[4867]: I0126 11:36:50.691591 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qjv49\" (UniqueName: 
\"kubernetes.io/projected/dfc017b6-886f-48d3-8f1e-cef59e587503-kube-api-access-qjv49\") pod \"glance-default-internal-api-0\" (UID: \"dfc017b6-886f-48d3-8f1e-cef59e587503\") " pod="openstack/glance-default-internal-api-0" Jan 26 11:36:50 crc kubenswrapper[4867]: I0126 11:36:50.699963 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dfc017b6-886f-48d3-8f1e-cef59e587503-scripts\") pod \"glance-default-internal-api-0\" (UID: \"dfc017b6-886f-48d3-8f1e-cef59e587503\") " pod="openstack/glance-default-internal-api-0" Jan 26 11:36:50 crc kubenswrapper[4867]: I0126 11:36:50.746471 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"glance-default-internal-api-0\" (UID: \"dfc017b6-886f-48d3-8f1e-cef59e587503\") " pod="openstack/glance-default-internal-api-0" Jan 26 11:36:50 crc kubenswrapper[4867]: I0126 11:36:50.843180 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 26 11:36:51 crc kubenswrapper[4867]: I0126 11:36:51.030111 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 26 11:36:51 crc kubenswrapper[4867]: I0126 11:36:51.038892 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-v7xht" event={"ID":"0ee786d6-3c88-4374-a028-3a3c83b30fec","Type":"ContainerStarted","Data":"3252b7222a6b1e64e2f23fc987ca7f51f1a67b89435f0ce9db433d0b986e0667"} Jan 26 11:36:51 crc kubenswrapper[4867]: I0126 11:36:51.056629 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-2kgmw" event={"ID":"d28fe2ce-f40e-4f37-9d27-57d14376fc5d","Type":"ContainerStarted","Data":"56eb17817f1a9ae4d04be82a79ad6e5d9ee52e4a1df45b9925caa83b5e230843"} Jan 26 11:36:51 crc kubenswrapper[4867]: I0126 11:36:51.064413 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-db-sync-v7xht" podStartSLOduration=4.089891535 podStartE2EDuration="43.06439659s" podCreationTimestamp="2026-01-26 11:36:08 +0000 UTC" firstStartedPulling="2026-01-26 11:36:10.66377414 +0000 UTC m=+1120.362349050" lastFinishedPulling="2026-01-26 11:36:49.638279195 +0000 UTC m=+1159.336854105" observedRunningTime="2026-01-26 11:36:51.061212926 +0000 UTC m=+1160.759787856" watchObservedRunningTime="2026-01-26 11:36:51.06439659 +0000 UTC m=+1160.762971500" Jan 26 11:36:51 crc kubenswrapper[4867]: I0126 11:36:51.067668 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b588da78-7e07-438f-9612-e600ca38ab04","Type":"ContainerStarted","Data":"59ab9b1e597740b613e66cf62a9e667c6878ecbd870a54a1d5d29d0221b7eb6e"} Jan 26 11:36:51 crc kubenswrapper[4867]: I0126 11:36:51.085111 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-db-sync-2kgmw" podStartSLOduration=4.103577071 
podStartE2EDuration="43.085086743s" podCreationTimestamp="2026-01-26 11:36:08 +0000 UTC" firstStartedPulling="2026-01-26 11:36:10.657917483 +0000 UTC m=+1120.356492383" lastFinishedPulling="2026-01-26 11:36:49.639427145 +0000 UTC m=+1159.338002055" observedRunningTime="2026-01-26 11:36:51.078930918 +0000 UTC m=+1160.777505818" watchObservedRunningTime="2026-01-26 11:36:51.085086743 +0000 UTC m=+1160.783661673" Jan 26 11:36:51 crc kubenswrapper[4867]: I0126 11:36:51.477506 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 26 11:36:51 crc kubenswrapper[4867]: W0126 11:36:51.485461 4867 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poddfc017b6_886f_48d3_8f1e_cef59e587503.slice/crio-10e5b02008d86a9a8053564f90665fce105450d0e3e22d188f48a133e847b52c WatchSource:0}: Error finding container 10e5b02008d86a9a8053564f90665fce105450d0e3e22d188f48a133e847b52c: Status 404 returned error can't find the container with id 10e5b02008d86a9a8053564f90665fce105450d0e3e22d188f48a133e847b52c Jan 26 11:36:52 crc kubenswrapper[4867]: I0126 11:36:52.092707 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"195fa02f-5887-4d8e-a103-2261e65a9c96","Type":"ContainerStarted","Data":"a3dfe0d4d3358e53f49d7ad7fd7ccc7dfe0122a9ec36359a54e66b3f1225b275"} Jan 26 11:36:52 crc kubenswrapper[4867]: I0126 11:36:52.092761 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"195fa02f-5887-4d8e-a103-2261e65a9c96","Type":"ContainerStarted","Data":"bd174a2d49c1dceefc61602f0660fa50abc5a093fcb3a322989b01dc4d95daee"} Jan 26 11:36:52 crc kubenswrapper[4867]: I0126 11:36:52.095709 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" 
event={"ID":"dfc017b6-886f-48d3-8f1e-cef59e587503","Type":"ContainerStarted","Data":"9d781ab7589d1c0f4b7cf132d24fa1ff5ff23389d94774744cbe760ea7b50e1a"} Jan 26 11:36:52 crc kubenswrapper[4867]: I0126 11:36:52.095983 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"dfc017b6-886f-48d3-8f1e-cef59e587503","Type":"ContainerStarted","Data":"10e5b02008d86a9a8053564f90665fce105450d0e3e22d188f48a133e847b52c"} Jan 26 11:36:52 crc kubenswrapper[4867]: I0126 11:36:52.450517 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-56df8fb6b7-m89kb" Jan 26 11:36:52 crc kubenswrapper[4867]: I0126 11:36:52.532143 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-b8fbc5445-zrbq4"] Jan 26 11:36:52 crc kubenswrapper[4867]: I0126 11:36:52.532390 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-b8fbc5445-zrbq4" podUID="f69c8f7d-7b7b-476a-989a-aee2eec1e5db" containerName="dnsmasq-dns" containerID="cri-o://3711234edf94027b98b9dfb5883f4924a14576bc7014ce66e5b4cfeacbe70b4d" gracePeriod=10 Jan 26 11:36:53 crc kubenswrapper[4867]: I0126 11:36:53.111551 4867 generic.go:334] "Generic (PLEG): container finished" podID="f69c8f7d-7b7b-476a-989a-aee2eec1e5db" containerID="3711234edf94027b98b9dfb5883f4924a14576bc7014ce66e5b4cfeacbe70b4d" exitCode=0 Jan 26 11:36:53 crc kubenswrapper[4867]: I0126 11:36:53.111599 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-b8fbc5445-zrbq4" event={"ID":"f69c8f7d-7b7b-476a-989a-aee2eec1e5db","Type":"ContainerDied","Data":"3711234edf94027b98b9dfb5883f4924a14576bc7014ce66e5b4cfeacbe70b4d"} Jan 26 11:36:55 crc kubenswrapper[4867]: I0126 11:36:55.137967 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" 
event={"ID":"dfc017b6-886f-48d3-8f1e-cef59e587503","Type":"ContainerStarted","Data":"55dc239744276d01319f51e765715f8fc0f8e79ceb6320f57be6b9eed29028cd"} Jan 26 11:36:55 crc kubenswrapper[4867]: I0126 11:36:55.141666 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"195fa02f-5887-4d8e-a103-2261e65a9c96","Type":"ContainerStarted","Data":"1ae14c522c74fcc84e09c808640dccc7ff8db79b5fdfc21ea954926cf317bc83"} Jan 26 11:36:55 crc kubenswrapper[4867]: I0126 11:36:55.159512 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=5.159494553 podStartE2EDuration="5.159494553s" podCreationTimestamp="2026-01-26 11:36:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 11:36:55.157572043 +0000 UTC m=+1164.856146963" watchObservedRunningTime="2026-01-26 11:36:55.159494553 +0000 UTC m=+1164.858069463" Jan 26 11:36:57 crc kubenswrapper[4867]: I0126 11:36:57.898758 4867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-b8fbc5445-zrbq4" Jan 26 11:36:57 crc kubenswrapper[4867]: I0126 11:36:57.905258 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f69c8f7d-7b7b-476a-989a-aee2eec1e5db-ovsdbserver-sb\") pod \"f69c8f7d-7b7b-476a-989a-aee2eec1e5db\" (UID: \"f69c8f7d-7b7b-476a-989a-aee2eec1e5db\") " Jan 26 11:36:57 crc kubenswrapper[4867]: I0126 11:36:57.905324 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f69c8f7d-7b7b-476a-989a-aee2eec1e5db-ovsdbserver-nb\") pod \"f69c8f7d-7b7b-476a-989a-aee2eec1e5db\" (UID: \"f69c8f7d-7b7b-476a-989a-aee2eec1e5db\") " Jan 26 11:36:57 crc kubenswrapper[4867]: I0126 11:36:57.905414 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f69c8f7d-7b7b-476a-989a-aee2eec1e5db-dns-svc\") pod \"f69c8f7d-7b7b-476a-989a-aee2eec1e5db\" (UID: \"f69c8f7d-7b7b-476a-989a-aee2eec1e5db\") " Jan 26 11:36:57 crc kubenswrapper[4867]: I0126 11:36:57.905649 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4f8kp\" (UniqueName: \"kubernetes.io/projected/f69c8f7d-7b7b-476a-989a-aee2eec1e5db-kube-api-access-4f8kp\") pod \"f69c8f7d-7b7b-476a-989a-aee2eec1e5db\" (UID: \"f69c8f7d-7b7b-476a-989a-aee2eec1e5db\") " Jan 26 11:36:57 crc kubenswrapper[4867]: I0126 11:36:57.905673 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f69c8f7d-7b7b-476a-989a-aee2eec1e5db-config\") pod \"f69c8f7d-7b7b-476a-989a-aee2eec1e5db\" (UID: \"f69c8f7d-7b7b-476a-989a-aee2eec1e5db\") " Jan 26 11:36:57 crc kubenswrapper[4867]: I0126 11:36:57.912169 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/f69c8f7d-7b7b-476a-989a-aee2eec1e5db-kube-api-access-4f8kp" (OuterVolumeSpecName: "kube-api-access-4f8kp") pod "f69c8f7d-7b7b-476a-989a-aee2eec1e5db" (UID: "f69c8f7d-7b7b-476a-989a-aee2eec1e5db"). InnerVolumeSpecName "kube-api-access-4f8kp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 11:36:57 crc kubenswrapper[4867]: I0126 11:36:57.934870 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=7.9348465919999995 podStartE2EDuration="7.934846592s" podCreationTimestamp="2026-01-26 11:36:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 11:36:55.179159298 +0000 UTC m=+1164.877734208" watchObservedRunningTime="2026-01-26 11:36:57.934846592 +0000 UTC m=+1167.633421512" Jan 26 11:36:57 crc kubenswrapper[4867]: I0126 11:36:57.954538 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f69c8f7d-7b7b-476a-989a-aee2eec1e5db-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "f69c8f7d-7b7b-476a-989a-aee2eec1e5db" (UID: "f69c8f7d-7b7b-476a-989a-aee2eec1e5db"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 11:36:57 crc kubenswrapper[4867]: I0126 11:36:57.972115 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f69c8f7d-7b7b-476a-989a-aee2eec1e5db-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "f69c8f7d-7b7b-476a-989a-aee2eec1e5db" (UID: "f69c8f7d-7b7b-476a-989a-aee2eec1e5db"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 11:36:57 crc kubenswrapper[4867]: I0126 11:36:57.973935 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f69c8f7d-7b7b-476a-989a-aee2eec1e5db-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "f69c8f7d-7b7b-476a-989a-aee2eec1e5db" (UID: "f69c8f7d-7b7b-476a-989a-aee2eec1e5db"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 11:36:57 crc kubenswrapper[4867]: I0126 11:36:57.978497 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f69c8f7d-7b7b-476a-989a-aee2eec1e5db-config" (OuterVolumeSpecName: "config") pod "f69c8f7d-7b7b-476a-989a-aee2eec1e5db" (UID: "f69c8f7d-7b7b-476a-989a-aee2eec1e5db"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 11:36:58 crc kubenswrapper[4867]: I0126 11:36:58.008066 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4f8kp\" (UniqueName: \"kubernetes.io/projected/f69c8f7d-7b7b-476a-989a-aee2eec1e5db-kube-api-access-4f8kp\") on node \"crc\" DevicePath \"\"" Jan 26 11:36:58 crc kubenswrapper[4867]: I0126 11:36:58.008104 4867 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f69c8f7d-7b7b-476a-989a-aee2eec1e5db-config\") on node \"crc\" DevicePath \"\"" Jan 26 11:36:58 crc kubenswrapper[4867]: I0126 11:36:58.008116 4867 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f69c8f7d-7b7b-476a-989a-aee2eec1e5db-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 26 11:36:58 crc kubenswrapper[4867]: I0126 11:36:58.008124 4867 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f69c8f7d-7b7b-476a-989a-aee2eec1e5db-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 26 11:36:58 crc 
kubenswrapper[4867]: I0126 11:36:58.008131 4867 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f69c8f7d-7b7b-476a-989a-aee2eec1e5db-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 26 11:36:58 crc kubenswrapper[4867]: I0126 11:36:58.192394 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-b8fbc5445-zrbq4" event={"ID":"f69c8f7d-7b7b-476a-989a-aee2eec1e5db","Type":"ContainerDied","Data":"91e57ae24f98e42889eda8b19a55e84dfed1ca601218fc3fa83c2c10cf0ccbdc"} Jan 26 11:36:58 crc kubenswrapper[4867]: I0126 11:36:58.192452 4867 scope.go:117] "RemoveContainer" containerID="3711234edf94027b98b9dfb5883f4924a14576bc7014ce66e5b4cfeacbe70b4d" Jan 26 11:36:58 crc kubenswrapper[4867]: I0126 11:36:58.192619 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-b8fbc5445-zrbq4" Jan 26 11:36:58 crc kubenswrapper[4867]: I0126 11:36:58.229451 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-b8fbc5445-zrbq4"] Jan 26 11:36:58 crc kubenswrapper[4867]: I0126 11:36:58.238314 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-b8fbc5445-zrbq4"] Jan 26 11:36:58 crc kubenswrapper[4867]: I0126 11:36:58.574322 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f69c8f7d-7b7b-476a-989a-aee2eec1e5db" path="/var/lib/kubelet/pods/f69c8f7d-7b7b-476a-989a-aee2eec1e5db/volumes" Jan 26 11:36:58 crc kubenswrapper[4867]: I0126 11:36:58.924299 4867 scope.go:117] "RemoveContainer" containerID="728cd5335638a80100234d8d588428eb897f11a8d6605fb408a28f3d61d15d8d" Jan 26 11:37:00 crc kubenswrapper[4867]: I0126 11:37:00.217677 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b588da78-7e07-438f-9612-e600ca38ab04","Type":"ContainerStarted","Data":"8924abddc1c5789e6d98a833601f462ec5f0183dec2c8f4d01aea54627186117"} Jan 26 11:37:00 crc 
kubenswrapper[4867]: I0126 11:37:00.217849 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="b588da78-7e07-438f-9612-e600ca38ab04" containerName="ceilometer-central-agent" containerID="cri-o://371be553f53d8dfe3839994403af1d5cd6b544c4a15d5b7863c94b62b479c5bb" gracePeriod=30 Jan 26 11:37:00 crc kubenswrapper[4867]: I0126 11:37:00.218041 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="b588da78-7e07-438f-9612-e600ca38ab04" containerName="proxy-httpd" containerID="cri-o://8924abddc1c5789e6d98a833601f462ec5f0183dec2c8f4d01aea54627186117" gracePeriod=30 Jan 26 11:37:00 crc kubenswrapper[4867]: I0126 11:37:00.217937 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="b588da78-7e07-438f-9612-e600ca38ab04" containerName="ceilometer-notification-agent" containerID="cri-o://de0b6fd3cca4db9816920ab91a17889dfa377b8d7f1cdea01c1ada0a80fea17b" gracePeriod=30 Jan 26 11:37:00 crc kubenswrapper[4867]: I0126 11:37:00.218177 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 26 11:37:00 crc kubenswrapper[4867]: I0126 11:37:00.217932 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="b588da78-7e07-438f-9612-e600ca38ab04" containerName="sg-core" containerID="cri-o://59ab9b1e597740b613e66cf62a9e667c6878ecbd870a54a1d5d29d0221b7eb6e" gracePeriod=30 Jan 26 11:37:00 crc kubenswrapper[4867]: I0126 11:37:00.244939 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=4.063301296 podStartE2EDuration="52.244921446s" podCreationTimestamp="2026-01-26 11:36:08 +0000 UTC" firstStartedPulling="2026-01-26 11:36:10.814729131 +0000 UTC m=+1120.513304041" lastFinishedPulling="2026-01-26 11:36:58.996349261 +0000 UTC m=+1168.694924191" 
observedRunningTime="2026-01-26 11:37:00.244014661 +0000 UTC m=+1169.942589571" watchObservedRunningTime="2026-01-26 11:37:00.244921446 +0000 UTC m=+1169.943496356" Jan 26 11:37:00 crc kubenswrapper[4867]: I0126 11:37:00.413565 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Jan 26 11:37:00 crc kubenswrapper[4867]: I0126 11:37:00.413616 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Jan 26 11:37:00 crc kubenswrapper[4867]: I0126 11:37:00.443739 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Jan 26 11:37:00 crc kubenswrapper[4867]: I0126 11:37:00.457720 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Jan 26 11:37:00 crc kubenswrapper[4867]: I0126 11:37:00.844188 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Jan 26 11:37:00 crc kubenswrapper[4867]: I0126 11:37:00.844848 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Jan 26 11:37:00 crc kubenswrapper[4867]: I0126 11:37:00.870706 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Jan 26 11:37:00 crc kubenswrapper[4867]: I0126 11:37:00.882804 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Jan 26 11:37:01 crc kubenswrapper[4867]: I0126 11:37:01.229507 4867 generic.go:334] "Generic (PLEG): container finished" podID="b588da78-7e07-438f-9612-e600ca38ab04" containerID="8924abddc1c5789e6d98a833601f462ec5f0183dec2c8f4d01aea54627186117" exitCode=0 Jan 26 11:37:01 crc kubenswrapper[4867]: I0126 11:37:01.229538 4867 generic.go:334] "Generic (PLEG): 
container finished" podID="b588da78-7e07-438f-9612-e600ca38ab04" containerID="59ab9b1e597740b613e66cf62a9e667c6878ecbd870a54a1d5d29d0221b7eb6e" exitCode=2 Jan 26 11:37:01 crc kubenswrapper[4867]: I0126 11:37:01.229546 4867 generic.go:334] "Generic (PLEG): container finished" podID="b588da78-7e07-438f-9612-e600ca38ab04" containerID="de0b6fd3cca4db9816920ab91a17889dfa377b8d7f1cdea01c1ada0a80fea17b" exitCode=0 Jan 26 11:37:01 crc kubenswrapper[4867]: I0126 11:37:01.229555 4867 generic.go:334] "Generic (PLEG): container finished" podID="b588da78-7e07-438f-9612-e600ca38ab04" containerID="371be553f53d8dfe3839994403af1d5cd6b544c4a15d5b7863c94b62b479c5bb" exitCode=0 Jan 26 11:37:01 crc kubenswrapper[4867]: I0126 11:37:01.229554 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b588da78-7e07-438f-9612-e600ca38ab04","Type":"ContainerDied","Data":"8924abddc1c5789e6d98a833601f462ec5f0183dec2c8f4d01aea54627186117"} Jan 26 11:37:01 crc kubenswrapper[4867]: I0126 11:37:01.229602 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b588da78-7e07-438f-9612-e600ca38ab04","Type":"ContainerDied","Data":"59ab9b1e597740b613e66cf62a9e667c6878ecbd870a54a1d5d29d0221b7eb6e"} Jan 26 11:37:01 crc kubenswrapper[4867]: I0126 11:37:01.229628 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b588da78-7e07-438f-9612-e600ca38ab04","Type":"ContainerDied","Data":"de0b6fd3cca4db9816920ab91a17889dfa377b8d7f1cdea01c1ada0a80fea17b"} Jan 26 11:37:01 crc kubenswrapper[4867]: I0126 11:37:01.229641 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b588da78-7e07-438f-9612-e600ca38ab04","Type":"ContainerDied","Data":"371be553f53d8dfe3839994403af1d5cd6b544c4a15d5b7863c94b62b479c5bb"} Jan 26 11:37:01 crc kubenswrapper[4867]: I0126 11:37:01.230489 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack/glance-default-external-api-0" Jan 26 11:37:01 crc kubenswrapper[4867]: I0126 11:37:01.230528 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Jan 26 11:37:01 crc kubenswrapper[4867]: I0126 11:37:01.230537 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Jan 26 11:37:01 crc kubenswrapper[4867]: I0126 11:37:01.230549 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Jan 26 11:37:01 crc kubenswrapper[4867]: I0126 11:37:01.407670 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 26 11:37:01 crc kubenswrapper[4867]: I0126 11:37:01.428598 4867 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-b8fbc5445-zrbq4" podUID="f69c8f7d-7b7b-476a-989a-aee2eec1e5db" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.115:5353: i/o timeout" Jan 26 11:37:01 crc kubenswrapper[4867]: I0126 11:37:01.566816 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b588da78-7e07-438f-9612-e600ca38ab04-run-httpd\") pod \"b588da78-7e07-438f-9612-e600ca38ab04\" (UID: \"b588da78-7e07-438f-9612-e600ca38ab04\") " Jan 26 11:37:01 crc kubenswrapper[4867]: I0126 11:37:01.566884 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b588da78-7e07-438f-9612-e600ca38ab04-scripts\") pod \"b588da78-7e07-438f-9612-e600ca38ab04\" (UID: \"b588da78-7e07-438f-9612-e600ca38ab04\") " Jan 26 11:37:01 crc kubenswrapper[4867]: I0126 11:37:01.566979 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b588da78-7e07-438f-9612-e600ca38ab04-log-httpd\") pod 
\"b588da78-7e07-438f-9612-e600ca38ab04\" (UID: \"b588da78-7e07-438f-9612-e600ca38ab04\") " Jan 26 11:37:01 crc kubenswrapper[4867]: I0126 11:37:01.567071 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/b588da78-7e07-438f-9612-e600ca38ab04-sg-core-conf-yaml\") pod \"b588da78-7e07-438f-9612-e600ca38ab04\" (UID: \"b588da78-7e07-438f-9612-e600ca38ab04\") " Jan 26 11:37:01 crc kubenswrapper[4867]: I0126 11:37:01.567111 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pjzkb\" (UniqueName: \"kubernetes.io/projected/b588da78-7e07-438f-9612-e600ca38ab04-kube-api-access-pjzkb\") pod \"b588da78-7e07-438f-9612-e600ca38ab04\" (UID: \"b588da78-7e07-438f-9612-e600ca38ab04\") " Jan 26 11:37:01 crc kubenswrapper[4867]: I0126 11:37:01.567138 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b588da78-7e07-438f-9612-e600ca38ab04-combined-ca-bundle\") pod \"b588da78-7e07-438f-9612-e600ca38ab04\" (UID: \"b588da78-7e07-438f-9612-e600ca38ab04\") " Jan 26 11:37:01 crc kubenswrapper[4867]: I0126 11:37:01.567167 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b588da78-7e07-438f-9612-e600ca38ab04-config-data\") pod \"b588da78-7e07-438f-9612-e600ca38ab04\" (UID: \"b588da78-7e07-438f-9612-e600ca38ab04\") " Jan 26 11:37:01 crc kubenswrapper[4867]: I0126 11:37:01.569129 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b588da78-7e07-438f-9612-e600ca38ab04-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "b588da78-7e07-438f-9612-e600ca38ab04" (UID: "b588da78-7e07-438f-9612-e600ca38ab04"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 11:37:01 crc kubenswrapper[4867]: I0126 11:37:01.569412 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b588da78-7e07-438f-9612-e600ca38ab04-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "b588da78-7e07-438f-9612-e600ca38ab04" (UID: "b588da78-7e07-438f-9612-e600ca38ab04"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 11:37:01 crc kubenswrapper[4867]: I0126 11:37:01.575087 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b588da78-7e07-438f-9612-e600ca38ab04-scripts" (OuterVolumeSpecName: "scripts") pod "b588da78-7e07-438f-9612-e600ca38ab04" (UID: "b588da78-7e07-438f-9612-e600ca38ab04"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 11:37:01 crc kubenswrapper[4867]: I0126 11:37:01.581322 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b588da78-7e07-438f-9612-e600ca38ab04-kube-api-access-pjzkb" (OuterVolumeSpecName: "kube-api-access-pjzkb") pod "b588da78-7e07-438f-9612-e600ca38ab04" (UID: "b588da78-7e07-438f-9612-e600ca38ab04"). InnerVolumeSpecName "kube-api-access-pjzkb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 11:37:01 crc kubenswrapper[4867]: I0126 11:37:01.621696 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b588da78-7e07-438f-9612-e600ca38ab04-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "b588da78-7e07-438f-9612-e600ca38ab04" (UID: "b588da78-7e07-438f-9612-e600ca38ab04"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 11:37:01 crc kubenswrapper[4867]: I0126 11:37:01.669388 4867 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b588da78-7e07-438f-9612-e600ca38ab04-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 26 11:37:01 crc kubenswrapper[4867]: I0126 11:37:01.669431 4867 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/b588da78-7e07-438f-9612-e600ca38ab04-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 26 11:37:01 crc kubenswrapper[4867]: I0126 11:37:01.669445 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pjzkb\" (UniqueName: \"kubernetes.io/projected/b588da78-7e07-438f-9612-e600ca38ab04-kube-api-access-pjzkb\") on node \"crc\" DevicePath \"\"" Jan 26 11:37:01 crc kubenswrapper[4867]: I0126 11:37:01.669458 4867 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b588da78-7e07-438f-9612-e600ca38ab04-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 26 11:37:01 crc kubenswrapper[4867]: I0126 11:37:01.669467 4867 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b588da78-7e07-438f-9612-e600ca38ab04-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 11:37:01 crc kubenswrapper[4867]: I0126 11:37:01.691728 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b588da78-7e07-438f-9612-e600ca38ab04-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b588da78-7e07-438f-9612-e600ca38ab04" (UID: "b588da78-7e07-438f-9612-e600ca38ab04"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 11:37:01 crc kubenswrapper[4867]: I0126 11:37:01.696469 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b588da78-7e07-438f-9612-e600ca38ab04-config-data" (OuterVolumeSpecName: "config-data") pod "b588da78-7e07-438f-9612-e600ca38ab04" (UID: "b588da78-7e07-438f-9612-e600ca38ab04"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 11:37:01 crc kubenswrapper[4867]: I0126 11:37:01.771141 4867 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b588da78-7e07-438f-9612-e600ca38ab04-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 11:37:01 crc kubenswrapper[4867]: I0126 11:37:01.771173 4867 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b588da78-7e07-438f-9612-e600ca38ab04-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 11:37:02 crc kubenswrapper[4867]: I0126 11:37:02.242232 4867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 26 11:37:02 crc kubenswrapper[4867]: I0126 11:37:02.242260 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b588da78-7e07-438f-9612-e600ca38ab04","Type":"ContainerDied","Data":"5367bd45b9f46b1df8afeeb7840ac87b6e9809c72f79068cafbb0685a0e544ba"} Jan 26 11:37:02 crc kubenswrapper[4867]: I0126 11:37:02.242326 4867 scope.go:117] "RemoveContainer" containerID="8924abddc1c5789e6d98a833601f462ec5f0183dec2c8f4d01aea54627186117" Jan 26 11:37:02 crc kubenswrapper[4867]: I0126 11:37:02.243872 4867 generic.go:334] "Generic (PLEG): container finished" podID="0ee786d6-3c88-4374-a028-3a3c83b30fec" containerID="3252b7222a6b1e64e2f23fc987ca7f51f1a67b89435f0ce9db433d0b986e0667" exitCode=0 Jan 26 11:37:02 crc kubenswrapper[4867]: I0126 11:37:02.243906 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-v7xht" event={"ID":"0ee786d6-3c88-4374-a028-3a3c83b30fec","Type":"ContainerDied","Data":"3252b7222a6b1e64e2f23fc987ca7f51f1a67b89435f0ce9db433d0b986e0667"} Jan 26 11:37:02 crc kubenswrapper[4867]: I0126 11:37:02.269515 4867 scope.go:117] "RemoveContainer" containerID="59ab9b1e597740b613e66cf62a9e667c6878ecbd870a54a1d5d29d0221b7eb6e" Jan 26 11:37:02 crc kubenswrapper[4867]: I0126 11:37:02.296776 4867 scope.go:117] "RemoveContainer" containerID="de0b6fd3cca4db9816920ab91a17889dfa377b8d7f1cdea01c1ada0a80fea17b" Jan 26 11:37:02 crc kubenswrapper[4867]: I0126 11:37:02.324306 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 26 11:37:02 crc kubenswrapper[4867]: I0126 11:37:02.348503 4867 scope.go:117] "RemoveContainer" containerID="371be553f53d8dfe3839994403af1d5cd6b544c4a15d5b7863c94b62b479c5bb" Jan 26 11:37:02 crc kubenswrapper[4867]: I0126 11:37:02.355817 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 26 11:37:02 crc kubenswrapper[4867]: I0126 11:37:02.372290 4867 
kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"]
Jan 26 11:37:02 crc kubenswrapper[4867]: E0126 11:37:02.372731 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b588da78-7e07-438f-9612-e600ca38ab04" containerName="proxy-httpd"
Jan 26 11:37:02 crc kubenswrapper[4867]: I0126 11:37:02.372755 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="b588da78-7e07-438f-9612-e600ca38ab04" containerName="proxy-httpd"
Jan 26 11:37:02 crc kubenswrapper[4867]: E0126 11:37:02.372789 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b588da78-7e07-438f-9612-e600ca38ab04" containerName="sg-core"
Jan 26 11:37:02 crc kubenswrapper[4867]: I0126 11:37:02.372798 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="b588da78-7e07-438f-9612-e600ca38ab04" containerName="sg-core"
Jan 26 11:37:02 crc kubenswrapper[4867]: E0126 11:37:02.372810 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b588da78-7e07-438f-9612-e600ca38ab04" containerName="ceilometer-notification-agent"
Jan 26 11:37:02 crc kubenswrapper[4867]: I0126 11:37:02.372818 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="b588da78-7e07-438f-9612-e600ca38ab04" containerName="ceilometer-notification-agent"
Jan 26 11:37:02 crc kubenswrapper[4867]: E0126 11:37:02.372829 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b588da78-7e07-438f-9612-e600ca38ab04" containerName="ceilometer-central-agent"
Jan 26 11:37:02 crc kubenswrapper[4867]: I0126 11:37:02.372837 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="b588da78-7e07-438f-9612-e600ca38ab04" containerName="ceilometer-central-agent"
Jan 26 11:37:02 crc kubenswrapper[4867]: E0126 11:37:02.372852 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f69c8f7d-7b7b-476a-989a-aee2eec1e5db" containerName="init"
Jan 26 11:37:02 crc kubenswrapper[4867]: I0126 11:37:02.372861 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="f69c8f7d-7b7b-476a-989a-aee2eec1e5db" containerName="init"
Jan 26 11:37:02 crc kubenswrapper[4867]: E0126 11:37:02.372875 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f69c8f7d-7b7b-476a-989a-aee2eec1e5db" containerName="dnsmasq-dns"
Jan 26 11:37:02 crc kubenswrapper[4867]: I0126 11:37:02.372883 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="f69c8f7d-7b7b-476a-989a-aee2eec1e5db" containerName="dnsmasq-dns"
Jan 26 11:37:02 crc kubenswrapper[4867]: I0126 11:37:02.373094 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="b588da78-7e07-438f-9612-e600ca38ab04" containerName="sg-core"
Jan 26 11:37:02 crc kubenswrapper[4867]: I0126 11:37:02.373115 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="b588da78-7e07-438f-9612-e600ca38ab04" containerName="ceilometer-notification-agent"
Jan 26 11:37:02 crc kubenswrapper[4867]: I0126 11:37:02.373137 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="b588da78-7e07-438f-9612-e600ca38ab04" containerName="ceilometer-central-agent"
Jan 26 11:37:02 crc kubenswrapper[4867]: I0126 11:37:02.373151 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="b588da78-7e07-438f-9612-e600ca38ab04" containerName="proxy-httpd"
Jan 26 11:37:02 crc kubenswrapper[4867]: I0126 11:37:02.373165 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="f69c8f7d-7b7b-476a-989a-aee2eec1e5db" containerName="dnsmasq-dns"
Jan 26 11:37:02 crc kubenswrapper[4867]: I0126 11:37:02.375169 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Jan 26 11:37:02 crc kubenswrapper[4867]: I0126 11:37:02.378724 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts"
Jan 26 11:37:02 crc kubenswrapper[4867]: I0126 11:37:02.380497 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data"
Jan 26 11:37:02 crc kubenswrapper[4867]: I0126 11:37:02.382054 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Jan 26 11:37:02 crc kubenswrapper[4867]: I0126 11:37:02.485063 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b2643e95-59cb-42a2-982e-96a7d732e5e4-scripts\") pod \"ceilometer-0\" (UID: \"b2643e95-59cb-42a2-982e-96a7d732e5e4\") " pod="openstack/ceilometer-0"
Jan 26 11:37:02 crc kubenswrapper[4867]: I0126 11:37:02.485118 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b2643e95-59cb-42a2-982e-96a7d732e5e4-log-httpd\") pod \"ceilometer-0\" (UID: \"b2643e95-59cb-42a2-982e-96a7d732e5e4\") " pod="openstack/ceilometer-0"
Jan 26 11:37:02 crc kubenswrapper[4867]: I0126 11:37:02.485184 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kknc6\" (UniqueName: \"kubernetes.io/projected/b2643e95-59cb-42a2-982e-96a7d732e5e4-kube-api-access-kknc6\") pod \"ceilometer-0\" (UID: \"b2643e95-59cb-42a2-982e-96a7d732e5e4\") " pod="openstack/ceilometer-0"
Jan 26 11:37:02 crc kubenswrapper[4867]: I0126 11:37:02.485338 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b2643e95-59cb-42a2-982e-96a7d732e5e4-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"b2643e95-59cb-42a2-982e-96a7d732e5e4\") " pod="openstack/ceilometer-0"
Jan 26 11:37:02 crc kubenswrapper[4867]: I0126 11:37:02.485411 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b2643e95-59cb-42a2-982e-96a7d732e5e4-config-data\") pod \"ceilometer-0\" (UID: \"b2643e95-59cb-42a2-982e-96a7d732e5e4\") " pod="openstack/ceilometer-0"
Jan 26 11:37:02 crc kubenswrapper[4867]: I0126 11:37:02.485444 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/b2643e95-59cb-42a2-982e-96a7d732e5e4-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"b2643e95-59cb-42a2-982e-96a7d732e5e4\") " pod="openstack/ceilometer-0"
Jan 26 11:37:02 crc kubenswrapper[4867]: I0126 11:37:02.485687 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b2643e95-59cb-42a2-982e-96a7d732e5e4-run-httpd\") pod \"ceilometer-0\" (UID: \"b2643e95-59cb-42a2-982e-96a7d732e5e4\") " pod="openstack/ceilometer-0"
Jan 26 11:37:02 crc kubenswrapper[4867]: I0126 11:37:02.581686 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b588da78-7e07-438f-9612-e600ca38ab04" path="/var/lib/kubelet/pods/b588da78-7e07-438f-9612-e600ca38ab04/volumes"
Jan 26 11:37:02 crc kubenswrapper[4867]: I0126 11:37:02.587193 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kknc6\" (UniqueName: \"kubernetes.io/projected/b2643e95-59cb-42a2-982e-96a7d732e5e4-kube-api-access-kknc6\") pod \"ceilometer-0\" (UID: \"b2643e95-59cb-42a2-982e-96a7d732e5e4\") " pod="openstack/ceilometer-0"
Jan 26 11:37:02 crc kubenswrapper[4867]: I0126 11:37:02.587277 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b2643e95-59cb-42a2-982e-96a7d732e5e4-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"b2643e95-59cb-42a2-982e-96a7d732e5e4\") " pod="openstack/ceilometer-0"
Jan 26 11:37:02 crc kubenswrapper[4867]: I0126 11:37:02.587306 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b2643e95-59cb-42a2-982e-96a7d732e5e4-config-data\") pod \"ceilometer-0\" (UID: \"b2643e95-59cb-42a2-982e-96a7d732e5e4\") " pod="openstack/ceilometer-0"
Jan 26 11:37:02 crc kubenswrapper[4867]: I0126 11:37:02.587324 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/b2643e95-59cb-42a2-982e-96a7d732e5e4-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"b2643e95-59cb-42a2-982e-96a7d732e5e4\") " pod="openstack/ceilometer-0"
Jan 26 11:37:02 crc kubenswrapper[4867]: I0126 11:37:02.587410 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b2643e95-59cb-42a2-982e-96a7d732e5e4-run-httpd\") pod \"ceilometer-0\" (UID: \"b2643e95-59cb-42a2-982e-96a7d732e5e4\") " pod="openstack/ceilometer-0"
Jan 26 11:37:02 crc kubenswrapper[4867]: I0126 11:37:02.587477 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b2643e95-59cb-42a2-982e-96a7d732e5e4-scripts\") pod \"ceilometer-0\" (UID: \"b2643e95-59cb-42a2-982e-96a7d732e5e4\") " pod="openstack/ceilometer-0"
Jan 26 11:37:02 crc kubenswrapper[4867]: I0126 11:37:02.587500 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b2643e95-59cb-42a2-982e-96a7d732e5e4-log-httpd\") pod \"ceilometer-0\" (UID: \"b2643e95-59cb-42a2-982e-96a7d732e5e4\") " pod="openstack/ceilometer-0"
Jan 26 11:37:02 crc kubenswrapper[4867]: I0126 11:37:02.588189 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b2643e95-59cb-42a2-982e-96a7d732e5e4-log-httpd\") pod \"ceilometer-0\" (UID: \"b2643e95-59cb-42a2-982e-96a7d732e5e4\") " pod="openstack/ceilometer-0"
Jan 26 11:37:02 crc kubenswrapper[4867]: I0126 11:37:02.590546 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b2643e95-59cb-42a2-982e-96a7d732e5e4-run-httpd\") pod \"ceilometer-0\" (UID: \"b2643e95-59cb-42a2-982e-96a7d732e5e4\") " pod="openstack/ceilometer-0"
Jan 26 11:37:02 crc kubenswrapper[4867]: I0126 11:37:02.595067 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b2643e95-59cb-42a2-982e-96a7d732e5e4-config-data\") pod \"ceilometer-0\" (UID: \"b2643e95-59cb-42a2-982e-96a7d732e5e4\") " pod="openstack/ceilometer-0"
Jan 26 11:37:02 crc kubenswrapper[4867]: I0126 11:37:02.596311 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b2643e95-59cb-42a2-982e-96a7d732e5e4-scripts\") pod \"ceilometer-0\" (UID: \"b2643e95-59cb-42a2-982e-96a7d732e5e4\") " pod="openstack/ceilometer-0"
Jan 26 11:37:02 crc kubenswrapper[4867]: I0126 11:37:02.601963 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/b2643e95-59cb-42a2-982e-96a7d732e5e4-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"b2643e95-59cb-42a2-982e-96a7d732e5e4\") " pod="openstack/ceilometer-0"
Jan 26 11:37:02 crc kubenswrapper[4867]: I0126 11:37:02.604378 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b2643e95-59cb-42a2-982e-96a7d732e5e4-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"b2643e95-59cb-42a2-982e-96a7d732e5e4\") " pod="openstack/ceilometer-0"
Jan 26 11:37:02 crc kubenswrapper[4867]: I0126 11:37:02.607370 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kknc6\" (UniqueName: \"kubernetes.io/projected/b2643e95-59cb-42a2-982e-96a7d732e5e4-kube-api-access-kknc6\") pod \"ceilometer-0\" (UID: \"b2643e95-59cb-42a2-982e-96a7d732e5e4\") " pod="openstack/ceilometer-0"
Jan 26 11:37:02 crc kubenswrapper[4867]: I0126 11:37:02.721416 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Jan 26 11:37:03 crc kubenswrapper[4867]: I0126 11:37:03.158568 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0"
Jan 26 11:37:03 crc kubenswrapper[4867]: I0126 11:37:03.169024 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0"
Jan 26 11:37:03 crc kubenswrapper[4867]: I0126 11:37:03.230414 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Jan 26 11:37:03 crc kubenswrapper[4867]: I0126 11:37:03.260683 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b2643e95-59cb-42a2-982e-96a7d732e5e4","Type":"ContainerStarted","Data":"c96b3bb0dd6c5bbfc3ebec2ce18e6f8ce55cb112a0555779ce32dadd93d91905"}
Jan 26 11:37:03 crc kubenswrapper[4867]: I0126 11:37:03.260758 4867 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Jan 26 11:37:03 crc kubenswrapper[4867]: I0126 11:37:03.260771 4867 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Jan 26 11:37:03 crc kubenswrapper[4867]: I0126 11:37:03.269764 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0"
Jan 26 11:37:03 crc kubenswrapper[4867]: I0126 11:37:03.274025 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0"
Jan 26 11:37:03 crc kubenswrapper[4867]: I0126 11:37:03.810754 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-v7xht"
Jan 26 11:37:03 crc kubenswrapper[4867]: I0126 11:37:03.926887 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0ee786d6-3c88-4374-a028-3a3c83b30fec-combined-ca-bundle\") pod \"0ee786d6-3c88-4374-a028-3a3c83b30fec\" (UID: \"0ee786d6-3c88-4374-a028-3a3c83b30fec\") "
Jan 26 11:37:03 crc kubenswrapper[4867]: I0126 11:37:03.927063 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/0ee786d6-3c88-4374-a028-3a3c83b30fec-db-sync-config-data\") pod \"0ee786d6-3c88-4374-a028-3a3c83b30fec\" (UID: \"0ee786d6-3c88-4374-a028-3a3c83b30fec\") "
Jan 26 11:37:03 crc kubenswrapper[4867]: I0126 11:37:03.927160 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l22rt\" (UniqueName: \"kubernetes.io/projected/0ee786d6-3c88-4374-a028-3a3c83b30fec-kube-api-access-l22rt\") pod \"0ee786d6-3c88-4374-a028-3a3c83b30fec\" (UID: \"0ee786d6-3c88-4374-a028-3a3c83b30fec\") "
Jan 26 11:37:03 crc kubenswrapper[4867]: I0126 11:37:03.949543 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0ee786d6-3c88-4374-a028-3a3c83b30fec-kube-api-access-l22rt" (OuterVolumeSpecName: "kube-api-access-l22rt") pod "0ee786d6-3c88-4374-a028-3a3c83b30fec" (UID: "0ee786d6-3c88-4374-a028-3a3c83b30fec"). InnerVolumeSpecName "kube-api-access-l22rt". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 26 11:37:03 crc kubenswrapper[4867]: I0126 11:37:03.956617 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0ee786d6-3c88-4374-a028-3a3c83b30fec-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0ee786d6-3c88-4374-a028-3a3c83b30fec" (UID: "0ee786d6-3c88-4374-a028-3a3c83b30fec"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 11:37:03 crc kubenswrapper[4867]: I0126 11:37:03.960378 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0ee786d6-3c88-4374-a028-3a3c83b30fec-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "0ee786d6-3c88-4374-a028-3a3c83b30fec" (UID: "0ee786d6-3c88-4374-a028-3a3c83b30fec"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 11:37:04 crc kubenswrapper[4867]: I0126 11:37:04.029536 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l22rt\" (UniqueName: \"kubernetes.io/projected/0ee786d6-3c88-4374-a028-3a3c83b30fec-kube-api-access-l22rt\") on node \"crc\" DevicePath \"\""
Jan 26 11:37:04 crc kubenswrapper[4867]: I0126 11:37:04.029592 4867 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0ee786d6-3c88-4374-a028-3a3c83b30fec-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 26 11:37:04 crc kubenswrapper[4867]: I0126 11:37:04.029604 4867 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/0ee786d6-3c88-4374-a028-3a3c83b30fec-db-sync-config-data\") on node \"crc\" DevicePath \"\""
Jan 26 11:37:04 crc kubenswrapper[4867]: I0126 11:37:04.272137 4867 generic.go:334] "Generic (PLEG): container finished" podID="75e847de-1c0c-4ac3-b7ff-c41bfa7a6534" containerID="09ce249d9ce50194ab4edc8047525c8a71645522228c8d402991237724e02ed1" exitCode=0
Jan 26 11:37:04 crc kubenswrapper[4867]: I0126 11:37:04.272715 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-m72k2" event={"ID":"75e847de-1c0c-4ac3-b7ff-c41bfa7a6534","Type":"ContainerDied","Data":"09ce249d9ce50194ab4edc8047525c8a71645522228c8d402991237724e02ed1"}
Jan 26 11:37:04 crc kubenswrapper[4867]: I0126 11:37:04.278584 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b2643e95-59cb-42a2-982e-96a7d732e5e4","Type":"ContainerStarted","Data":"3176feeb6409c9c175520fd5f96b008015c911ba0fb09f7821c6dc2f0fc7ca48"}
Jan 26 11:37:04 crc kubenswrapper[4867]: I0126 11:37:04.284666 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-v7xht" event={"ID":"0ee786d6-3c88-4374-a028-3a3c83b30fec","Type":"ContainerDied","Data":"886e9c3d275ab2482f23c8480433a09edc62f94324e6a7822512887f1f748c50"}
Jan 26 11:37:04 crc kubenswrapper[4867]: I0126 11:37:04.284744 4867 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="886e9c3d275ab2482f23c8480433a09edc62f94324e6a7822512887f1f748c50"
Jan 26 11:37:04 crc kubenswrapper[4867]: I0126 11:37:04.284751 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-v7xht"
Jan 26 11:37:04 crc kubenswrapper[4867]: I0126 11:37:04.557131 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-worker-6db8644655-m8sn6"]
Jan 26 11:37:04 crc kubenswrapper[4867]: E0126 11:37:04.558043 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0ee786d6-3c88-4374-a028-3a3c83b30fec" containerName="barbican-db-sync"
Jan 26 11:37:04 crc kubenswrapper[4867]: I0126 11:37:04.558070 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="0ee786d6-3c88-4374-a028-3a3c83b30fec" containerName="barbican-db-sync"
Jan 26 11:37:04 crc kubenswrapper[4867]: I0126 11:37:04.558283 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="0ee786d6-3c88-4374-a028-3a3c83b30fec" containerName="barbican-db-sync"
Jan 26 11:37:04 crc kubenswrapper[4867]: I0126 11:37:04.559434 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-6db8644655-m8sn6"
Jan 26 11:37:04 crc kubenswrapper[4867]: I0126 11:37:04.570112 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-worker-config-data"
Jan 26 11:37:04 crc kubenswrapper[4867]: I0126 11:37:04.570503 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data"
Jan 26 11:37:04 crc kubenswrapper[4867]: I0126 11:37:04.570683 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-cb22d"
Jan 26 11:37:04 crc kubenswrapper[4867]: I0126 11:37:04.607193 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-6db8644655-m8sn6"]
Jan 26 11:37:04 crc kubenswrapper[4867]: I0126 11:37:04.644199 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f568d082-7794-4f60-b78e-bff0b6b6356f-config-data-custom\") pod \"barbican-worker-6db8644655-m8sn6\" (UID: \"f568d082-7794-4f60-b78e-bff0b6b6356f\") " pod="openstack/barbican-worker-6db8644655-m8sn6"
Jan 26 11:37:04 crc kubenswrapper[4867]: I0126 11:37:04.644700 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f568d082-7794-4f60-b78e-bff0b6b6356f-combined-ca-bundle\") pod \"barbican-worker-6db8644655-m8sn6\" (UID: \"f568d082-7794-4f60-b78e-bff0b6b6356f\") " pod="openstack/barbican-worker-6db8644655-m8sn6"
Jan 26 11:37:04 crc kubenswrapper[4867]: I0126 11:37:04.644805 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7869m\" (UniqueName: \"kubernetes.io/projected/f568d082-7794-4f60-b78e-bff0b6b6356f-kube-api-access-7869m\") pod \"barbican-worker-6db8644655-m8sn6\" (UID: \"f568d082-7794-4f60-b78e-bff0b6b6356f\") " pod="openstack/barbican-worker-6db8644655-m8sn6"
Jan 26 11:37:04 crc kubenswrapper[4867]: I0126 11:37:04.644837 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f568d082-7794-4f60-b78e-bff0b6b6356f-config-data\") pod \"barbican-worker-6db8644655-m8sn6\" (UID: \"f568d082-7794-4f60-b78e-bff0b6b6356f\") " pod="openstack/barbican-worker-6db8644655-m8sn6"
Jan 26 11:37:04 crc kubenswrapper[4867]: I0126 11:37:04.644876 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f568d082-7794-4f60-b78e-bff0b6b6356f-logs\") pod \"barbican-worker-6db8644655-m8sn6\" (UID: \"f568d082-7794-4f60-b78e-bff0b6b6356f\") " pod="openstack/barbican-worker-6db8644655-m8sn6"
Jan 26 11:37:04 crc kubenswrapper[4867]: I0126 11:37:04.665720 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-keystone-listener-5fc6c76976-2w9dm"]
Jan 26 11:37:04 crc kubenswrapper[4867]: I0126 11:37:04.695378 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-5fc6c76976-2w9dm"
Jan 26 11:37:04 crc kubenswrapper[4867]: I0126 11:37:04.698520 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-keystone-listener-config-data"
Jan 26 11:37:04 crc kubenswrapper[4867]: I0126 11:37:04.701698 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-5fc6c76976-2w9dm"]
Jan 26 11:37:04 crc kubenswrapper[4867]: I0126 11:37:04.747925 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7869m\" (UniqueName: \"kubernetes.io/projected/f568d082-7794-4f60-b78e-bff0b6b6356f-kube-api-access-7869m\") pod \"barbican-worker-6db8644655-m8sn6\" (UID: \"f568d082-7794-4f60-b78e-bff0b6b6356f\") " pod="openstack/barbican-worker-6db8644655-m8sn6"
Jan 26 11:37:04 crc kubenswrapper[4867]: I0126 11:37:04.748011 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f568d082-7794-4f60-b78e-bff0b6b6356f-config-data\") pod \"barbican-worker-6db8644655-m8sn6\" (UID: \"f568d082-7794-4f60-b78e-bff0b6b6356f\") " pod="openstack/barbican-worker-6db8644655-m8sn6"
Jan 26 11:37:04 crc kubenswrapper[4867]: I0126 11:37:04.748059 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f568d082-7794-4f60-b78e-bff0b6b6356f-logs\") pod \"barbican-worker-6db8644655-m8sn6\" (UID: \"f568d082-7794-4f60-b78e-bff0b6b6356f\") " pod="openstack/barbican-worker-6db8644655-m8sn6"
Jan 26 11:37:04 crc kubenswrapper[4867]: I0126 11:37:04.748126 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f568d082-7794-4f60-b78e-bff0b6b6356f-config-data-custom\") pod \"barbican-worker-6db8644655-m8sn6\" (UID: \"f568d082-7794-4f60-b78e-bff0b6b6356f\") " pod="openstack/barbican-worker-6db8644655-m8sn6"
Jan 26 11:37:04 crc kubenswrapper[4867]: I0126 11:37:04.748160 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f568d082-7794-4f60-b78e-bff0b6b6356f-combined-ca-bundle\") pod \"barbican-worker-6db8644655-m8sn6\" (UID: \"f568d082-7794-4f60-b78e-bff0b6b6356f\") " pod="openstack/barbican-worker-6db8644655-m8sn6"
Jan 26 11:37:04 crc kubenswrapper[4867]: I0126 11:37:04.750813 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f568d082-7794-4f60-b78e-bff0b6b6356f-logs\") pod \"barbican-worker-6db8644655-m8sn6\" (UID: \"f568d082-7794-4f60-b78e-bff0b6b6356f\") " pod="openstack/barbican-worker-6db8644655-m8sn6"
Jan 26 11:37:04 crc kubenswrapper[4867]: I0126 11:37:04.767662 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f568d082-7794-4f60-b78e-bff0b6b6356f-combined-ca-bundle\") pod \"barbican-worker-6db8644655-m8sn6\" (UID: \"f568d082-7794-4f60-b78e-bff0b6b6356f\") " pod="openstack/barbican-worker-6db8644655-m8sn6"
Jan 26 11:37:04 crc kubenswrapper[4867]: I0126 11:37:04.783357 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f568d082-7794-4f60-b78e-bff0b6b6356f-config-data-custom\") pod \"barbican-worker-6db8644655-m8sn6\" (UID: \"f568d082-7794-4f60-b78e-bff0b6b6356f\") " pod="openstack/barbican-worker-6db8644655-m8sn6"
Jan 26 11:37:04 crc kubenswrapper[4867]: I0126 11:37:04.787965 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f568d082-7794-4f60-b78e-bff0b6b6356f-config-data\") pod \"barbican-worker-6db8644655-m8sn6\" (UID: \"f568d082-7794-4f60-b78e-bff0b6b6356f\") " pod="openstack/barbican-worker-6db8644655-m8sn6"
Jan 26 11:37:04 crc kubenswrapper[4867]: I0126 11:37:04.800623 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7c67bffd47-7snlw"]
Jan 26 11:37:04 crc kubenswrapper[4867]: I0126 11:37:04.802933 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7c67bffd47-7snlw"
Jan 26 11:37:04 crc kubenswrapper[4867]: I0126 11:37:04.806363 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7869m\" (UniqueName: \"kubernetes.io/projected/f568d082-7794-4f60-b78e-bff0b6b6356f-kube-api-access-7869m\") pod \"barbican-worker-6db8644655-m8sn6\" (UID: \"f568d082-7794-4f60-b78e-bff0b6b6356f\") " pod="openstack/barbican-worker-6db8644655-m8sn6"
Jan 26 11:37:04 crc kubenswrapper[4867]: I0126 11:37:04.830095 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7c67bffd47-7snlw"]
Jan 26 11:37:04 crc kubenswrapper[4867]: I0126 11:37:04.857652 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/9a534f97-8d45-4418-af77-5e19e2013a0b-config-data-custom\") pod \"barbican-keystone-listener-5fc6c76976-2w9dm\" (UID: \"9a534f97-8d45-4418-af77-5e19e2013a0b\") " pod="openstack/barbican-keystone-listener-5fc6c76976-2w9dm"
Jan 26 11:37:04 crc kubenswrapper[4867]: I0126 11:37:04.857810 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9a534f97-8d45-4418-af77-5e19e2013a0b-combined-ca-bundle\") pod \"barbican-keystone-listener-5fc6c76976-2w9dm\" (UID: \"9a534f97-8d45-4418-af77-5e19e2013a0b\") " pod="openstack/barbican-keystone-listener-5fc6c76976-2w9dm"
Jan 26 11:37:04 crc kubenswrapper[4867]: I0126 11:37:04.857846 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9a534f97-8d45-4418-af77-5e19e2013a0b-config-data\") pod \"barbican-keystone-listener-5fc6c76976-2w9dm\" (UID: \"9a534f97-8d45-4418-af77-5e19e2013a0b\") " pod="openstack/barbican-keystone-listener-5fc6c76976-2w9dm"
Jan 26 11:37:04 crc kubenswrapper[4867]: I0126 11:37:04.857892 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hkbxd\" (UniqueName: \"kubernetes.io/projected/9a534f97-8d45-4418-af77-5e19e2013a0b-kube-api-access-hkbxd\") pod \"barbican-keystone-listener-5fc6c76976-2w9dm\" (UID: \"9a534f97-8d45-4418-af77-5e19e2013a0b\") " pod="openstack/barbican-keystone-listener-5fc6c76976-2w9dm"
Jan 26 11:37:04 crc kubenswrapper[4867]: I0126 11:37:04.859182 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9a534f97-8d45-4418-af77-5e19e2013a0b-logs\") pod \"barbican-keystone-listener-5fc6c76976-2w9dm\" (UID: \"9a534f97-8d45-4418-af77-5e19e2013a0b\") " pod="openstack/barbican-keystone-listener-5fc6c76976-2w9dm"
Jan 26 11:37:04 crc kubenswrapper[4867]: I0126 11:37:04.905500 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-6db8644655-m8sn6"
Jan 26 11:37:04 crc kubenswrapper[4867]: I0126 11:37:04.965553 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/709b20e3-bfba-4bc0-b5e5-d5a99075091d-config\") pod \"dnsmasq-dns-7c67bffd47-7snlw\" (UID: \"709b20e3-bfba-4bc0-b5e5-d5a99075091d\") " pod="openstack/dnsmasq-dns-7c67bffd47-7snlw"
Jan 26 11:37:04 crc kubenswrapper[4867]: I0126 11:37:04.965602 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/9a534f97-8d45-4418-af77-5e19e2013a0b-config-data-custom\") pod \"barbican-keystone-listener-5fc6c76976-2w9dm\" (UID: \"9a534f97-8d45-4418-af77-5e19e2013a0b\") " pod="openstack/barbican-keystone-listener-5fc6c76976-2w9dm"
Jan 26 11:37:04 crc kubenswrapper[4867]: I0126 11:37:04.965649 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rcqvf\" (UniqueName: \"kubernetes.io/projected/709b20e3-bfba-4bc0-b5e5-d5a99075091d-kube-api-access-rcqvf\") pod \"dnsmasq-dns-7c67bffd47-7snlw\" (UID: \"709b20e3-bfba-4bc0-b5e5-d5a99075091d\") " pod="openstack/dnsmasq-dns-7c67bffd47-7snlw"
Jan 26 11:37:04 crc kubenswrapper[4867]: I0126 11:37:04.965677 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/709b20e3-bfba-4bc0-b5e5-d5a99075091d-ovsdbserver-sb\") pod \"dnsmasq-dns-7c67bffd47-7snlw\" (UID: \"709b20e3-bfba-4bc0-b5e5-d5a99075091d\") " pod="openstack/dnsmasq-dns-7c67bffd47-7snlw"
Jan 26 11:37:04 crc kubenswrapper[4867]: I0126 11:37:04.965724 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9a534f97-8d45-4418-af77-5e19e2013a0b-combined-ca-bundle\") pod \"barbican-keystone-listener-5fc6c76976-2w9dm\" (UID: \"9a534f97-8d45-4418-af77-5e19e2013a0b\") " pod="openstack/barbican-keystone-listener-5fc6c76976-2w9dm"
Jan 26 11:37:04 crc kubenswrapper[4867]: I0126 11:37:04.965744 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9a534f97-8d45-4418-af77-5e19e2013a0b-config-data\") pod \"barbican-keystone-listener-5fc6c76976-2w9dm\" (UID: \"9a534f97-8d45-4418-af77-5e19e2013a0b\") " pod="openstack/barbican-keystone-listener-5fc6c76976-2w9dm"
Jan 26 11:37:04 crc kubenswrapper[4867]: I0126 11:37:04.965771 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hkbxd\" (UniqueName: \"kubernetes.io/projected/9a534f97-8d45-4418-af77-5e19e2013a0b-kube-api-access-hkbxd\") pod \"barbican-keystone-listener-5fc6c76976-2w9dm\" (UID: \"9a534f97-8d45-4418-af77-5e19e2013a0b\") " pod="openstack/barbican-keystone-listener-5fc6c76976-2w9dm"
Jan 26 11:37:04 crc kubenswrapper[4867]: I0126 11:37:04.965806 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9a534f97-8d45-4418-af77-5e19e2013a0b-logs\") pod \"barbican-keystone-listener-5fc6c76976-2w9dm\" (UID: \"9a534f97-8d45-4418-af77-5e19e2013a0b\") " pod="openstack/barbican-keystone-listener-5fc6c76976-2w9dm"
Jan 26 11:37:04 crc kubenswrapper[4867]: I0126 11:37:04.965824 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/709b20e3-bfba-4bc0-b5e5-d5a99075091d-ovsdbserver-nb\") pod \"dnsmasq-dns-7c67bffd47-7snlw\" (UID: \"709b20e3-bfba-4bc0-b5e5-d5a99075091d\") " pod="openstack/dnsmasq-dns-7c67bffd47-7snlw"
Jan 26 11:37:04 crc kubenswrapper[4867]: I0126 11:37:04.965845 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/709b20e3-bfba-4bc0-b5e5-d5a99075091d-dns-swift-storage-0\") pod \"dnsmasq-dns-7c67bffd47-7snlw\" (UID: \"709b20e3-bfba-4bc0-b5e5-d5a99075091d\") " pod="openstack/dnsmasq-dns-7c67bffd47-7snlw"
Jan 26 11:37:04 crc kubenswrapper[4867]: I0126 11:37:04.965876 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/709b20e3-bfba-4bc0-b5e5-d5a99075091d-dns-svc\") pod \"dnsmasq-dns-7c67bffd47-7snlw\" (UID: \"709b20e3-bfba-4bc0-b5e5-d5a99075091d\") " pod="openstack/dnsmasq-dns-7c67bffd47-7snlw"
Jan 26 11:37:04 crc kubenswrapper[4867]: I0126 11:37:04.975210 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9a534f97-8d45-4418-af77-5e19e2013a0b-logs\") pod \"barbican-keystone-listener-5fc6c76976-2w9dm\" (UID: \"9a534f97-8d45-4418-af77-5e19e2013a0b\") " pod="openstack/barbican-keystone-listener-5fc6c76976-2w9dm"
Jan 26 11:37:04 crc kubenswrapper[4867]: I0126 11:37:04.976081 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/9a534f97-8d45-4418-af77-5e19e2013a0b-config-data-custom\") pod \"barbican-keystone-listener-5fc6c76976-2w9dm\" (UID: \"9a534f97-8d45-4418-af77-5e19e2013a0b\") " pod="openstack/barbican-keystone-listener-5fc6c76976-2w9dm"
Jan 26 11:37:04 crc kubenswrapper[4867]: I0126 11:37:04.984800 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9a534f97-8d45-4418-af77-5e19e2013a0b-config-data\") pod \"barbican-keystone-listener-5fc6c76976-2w9dm\" (UID: \"9a534f97-8d45-4418-af77-5e19e2013a0b\") " pod="openstack/barbican-keystone-listener-5fc6c76976-2w9dm"
Jan 26 11:37:05 crc kubenswrapper[4867]: I0126 11:37:05.000415 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-fd45cdb8b-tgbqw"]
Jan 26 11:37:05 crc kubenswrapper[4867]: I0126 11:37:05.001864 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-fd45cdb8b-tgbqw"
Jan 26 11:37:05 crc kubenswrapper[4867]: I0126 11:37:05.002529 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9a534f97-8d45-4418-af77-5e19e2013a0b-combined-ca-bundle\") pod \"barbican-keystone-listener-5fc6c76976-2w9dm\" (UID: \"9a534f97-8d45-4418-af77-5e19e2013a0b\") " pod="openstack/barbican-keystone-listener-5fc6c76976-2w9dm"
Jan 26 11:37:05 crc kubenswrapper[4867]: I0126 11:37:05.009722 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-api-config-data"
Jan 26 11:37:05 crc kubenswrapper[4867]: I0126 11:37:05.014927 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hkbxd\" (UniqueName: \"kubernetes.io/projected/9a534f97-8d45-4418-af77-5e19e2013a0b-kube-api-access-hkbxd\") pod \"barbican-keystone-listener-5fc6c76976-2w9dm\" (UID: \"9a534f97-8d45-4418-af77-5e19e2013a0b\") " pod="openstack/barbican-keystone-listener-5fc6c76976-2w9dm"
Jan 26 11:37:05 crc kubenswrapper[4867]: I0126 11:37:05.035812 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-fd45cdb8b-tgbqw"]
Jan 26 11:37:05 crc kubenswrapper[4867]: I0126 11:37:05.042086 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-5fc6c76976-2w9dm"
Jan 26 11:37:05 crc kubenswrapper[4867]: I0126 11:37:05.075915 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rcqvf\" (UniqueName: \"kubernetes.io/projected/709b20e3-bfba-4bc0-b5e5-d5a99075091d-kube-api-access-rcqvf\") pod \"dnsmasq-dns-7c67bffd47-7snlw\" (UID: \"709b20e3-bfba-4bc0-b5e5-d5a99075091d\") " pod="openstack/dnsmasq-dns-7c67bffd47-7snlw"
Jan 26 11:37:05 crc kubenswrapper[4867]: I0126 11:37:05.076076 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/709b20e3-bfba-4bc0-b5e5-d5a99075091d-ovsdbserver-sb\") pod \"dnsmasq-dns-7c67bffd47-7snlw\" (UID: \"709b20e3-bfba-4bc0-b5e5-d5a99075091d\") " pod="openstack/dnsmasq-dns-7c67bffd47-7snlw"
Jan 26 11:37:05 crc kubenswrapper[4867]: I0126 11:37:05.076272 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/709b20e3-bfba-4bc0-b5e5-d5a99075091d-ovsdbserver-nb\") pod \"dnsmasq-dns-7c67bffd47-7snlw\" (UID: \"709b20e3-bfba-4bc0-b5e5-d5a99075091d\") " pod="openstack/dnsmasq-dns-7c67bffd47-7snlw"
Jan 26 11:37:05 crc kubenswrapper[4867]: I0126 11:37:05.076301 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/709b20e3-bfba-4bc0-b5e5-d5a99075091d-dns-swift-storage-0\") pod \"dnsmasq-dns-7c67bffd47-7snlw\" (UID: \"709b20e3-bfba-4bc0-b5e5-d5a99075091d\") " pod="openstack/dnsmasq-dns-7c67bffd47-7snlw"
Jan 26 11:37:05 crc kubenswrapper[4867]: I0126 11:37:05.076345 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/709b20e3-bfba-4bc0-b5e5-d5a99075091d-dns-svc\") pod \"dnsmasq-dns-7c67bffd47-7snlw\" (UID: \"709b20e3-bfba-4bc0-b5e5-d5a99075091d\") "
pod="openstack/dnsmasq-dns-7c67bffd47-7snlw" Jan 26 11:37:05 crc kubenswrapper[4867]: I0126 11:37:05.076396 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/709b20e3-bfba-4bc0-b5e5-d5a99075091d-config\") pod \"dnsmasq-dns-7c67bffd47-7snlw\" (UID: \"709b20e3-bfba-4bc0-b5e5-d5a99075091d\") " pod="openstack/dnsmasq-dns-7c67bffd47-7snlw" Jan 26 11:37:05 crc kubenswrapper[4867]: I0126 11:37:05.077589 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/709b20e3-bfba-4bc0-b5e5-d5a99075091d-ovsdbserver-nb\") pod \"dnsmasq-dns-7c67bffd47-7snlw\" (UID: \"709b20e3-bfba-4bc0-b5e5-d5a99075091d\") " pod="openstack/dnsmasq-dns-7c67bffd47-7snlw" Jan 26 11:37:05 crc kubenswrapper[4867]: I0126 11:37:05.077773 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/709b20e3-bfba-4bc0-b5e5-d5a99075091d-config\") pod \"dnsmasq-dns-7c67bffd47-7snlw\" (UID: \"709b20e3-bfba-4bc0-b5e5-d5a99075091d\") " pod="openstack/dnsmasq-dns-7c67bffd47-7snlw" Jan 26 11:37:05 crc kubenswrapper[4867]: I0126 11:37:05.077854 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/709b20e3-bfba-4bc0-b5e5-d5a99075091d-dns-swift-storage-0\") pod \"dnsmasq-dns-7c67bffd47-7snlw\" (UID: \"709b20e3-bfba-4bc0-b5e5-d5a99075091d\") " pod="openstack/dnsmasq-dns-7c67bffd47-7snlw" Jan 26 11:37:05 crc kubenswrapper[4867]: I0126 11:37:05.078336 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/709b20e3-bfba-4bc0-b5e5-d5a99075091d-ovsdbserver-sb\") pod \"dnsmasq-dns-7c67bffd47-7snlw\" (UID: \"709b20e3-bfba-4bc0-b5e5-d5a99075091d\") " pod="openstack/dnsmasq-dns-7c67bffd47-7snlw" Jan 26 11:37:05 crc kubenswrapper[4867]: I0126 11:37:05.079190 
4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/709b20e3-bfba-4bc0-b5e5-d5a99075091d-dns-svc\") pod \"dnsmasq-dns-7c67bffd47-7snlw\" (UID: \"709b20e3-bfba-4bc0-b5e5-d5a99075091d\") " pod="openstack/dnsmasq-dns-7c67bffd47-7snlw" Jan 26 11:37:05 crc kubenswrapper[4867]: I0126 11:37:05.102590 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rcqvf\" (UniqueName: \"kubernetes.io/projected/709b20e3-bfba-4bc0-b5e5-d5a99075091d-kube-api-access-rcqvf\") pod \"dnsmasq-dns-7c67bffd47-7snlw\" (UID: \"709b20e3-bfba-4bc0-b5e5-d5a99075091d\") " pod="openstack/dnsmasq-dns-7c67bffd47-7snlw" Jan 26 11:37:05 crc kubenswrapper[4867]: I0126 11:37:05.178206 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mgp8m\" (UniqueName: \"kubernetes.io/projected/8434703b-0a5f-49f0-8877-2048d276f8ff-kube-api-access-mgp8m\") pod \"barbican-api-fd45cdb8b-tgbqw\" (UID: \"8434703b-0a5f-49f0-8877-2048d276f8ff\") " pod="openstack/barbican-api-fd45cdb8b-tgbqw" Jan 26 11:37:05 crc kubenswrapper[4867]: I0126 11:37:05.178708 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8434703b-0a5f-49f0-8877-2048d276f8ff-config-data-custom\") pod \"barbican-api-fd45cdb8b-tgbqw\" (UID: \"8434703b-0a5f-49f0-8877-2048d276f8ff\") " pod="openstack/barbican-api-fd45cdb8b-tgbqw" Jan 26 11:37:05 crc kubenswrapper[4867]: I0126 11:37:05.178764 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8434703b-0a5f-49f0-8877-2048d276f8ff-config-data\") pod \"barbican-api-fd45cdb8b-tgbqw\" (UID: \"8434703b-0a5f-49f0-8877-2048d276f8ff\") " pod="openstack/barbican-api-fd45cdb8b-tgbqw" Jan 26 11:37:05 crc kubenswrapper[4867]: I0126 11:37:05.178791 
4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8434703b-0a5f-49f0-8877-2048d276f8ff-combined-ca-bundle\") pod \"barbican-api-fd45cdb8b-tgbqw\" (UID: \"8434703b-0a5f-49f0-8877-2048d276f8ff\") " pod="openstack/barbican-api-fd45cdb8b-tgbqw" Jan 26 11:37:05 crc kubenswrapper[4867]: I0126 11:37:05.178995 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8434703b-0a5f-49f0-8877-2048d276f8ff-logs\") pod \"barbican-api-fd45cdb8b-tgbqw\" (UID: \"8434703b-0a5f-49f0-8877-2048d276f8ff\") " pod="openstack/barbican-api-fd45cdb8b-tgbqw" Jan 26 11:37:05 crc kubenswrapper[4867]: I0126 11:37:05.199005 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7c67bffd47-7snlw" Jan 26 11:37:05 crc kubenswrapper[4867]: I0126 11:37:05.285632 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8434703b-0a5f-49f0-8877-2048d276f8ff-logs\") pod \"barbican-api-fd45cdb8b-tgbqw\" (UID: \"8434703b-0a5f-49f0-8877-2048d276f8ff\") " pod="openstack/barbican-api-fd45cdb8b-tgbqw" Jan 26 11:37:05 crc kubenswrapper[4867]: I0126 11:37:05.285805 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mgp8m\" (UniqueName: \"kubernetes.io/projected/8434703b-0a5f-49f0-8877-2048d276f8ff-kube-api-access-mgp8m\") pod \"barbican-api-fd45cdb8b-tgbqw\" (UID: \"8434703b-0a5f-49f0-8877-2048d276f8ff\") " pod="openstack/barbican-api-fd45cdb8b-tgbqw" Jan 26 11:37:05 crc kubenswrapper[4867]: I0126 11:37:05.285868 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8434703b-0a5f-49f0-8877-2048d276f8ff-config-data-custom\") pod \"barbican-api-fd45cdb8b-tgbqw\" (UID: 
\"8434703b-0a5f-49f0-8877-2048d276f8ff\") " pod="openstack/barbican-api-fd45cdb8b-tgbqw" Jan 26 11:37:05 crc kubenswrapper[4867]: I0126 11:37:05.285926 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8434703b-0a5f-49f0-8877-2048d276f8ff-config-data\") pod \"barbican-api-fd45cdb8b-tgbqw\" (UID: \"8434703b-0a5f-49f0-8877-2048d276f8ff\") " pod="openstack/barbican-api-fd45cdb8b-tgbqw" Jan 26 11:37:05 crc kubenswrapper[4867]: I0126 11:37:05.285946 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8434703b-0a5f-49f0-8877-2048d276f8ff-combined-ca-bundle\") pod \"barbican-api-fd45cdb8b-tgbqw\" (UID: \"8434703b-0a5f-49f0-8877-2048d276f8ff\") " pod="openstack/barbican-api-fd45cdb8b-tgbqw" Jan 26 11:37:05 crc kubenswrapper[4867]: I0126 11:37:05.287370 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8434703b-0a5f-49f0-8877-2048d276f8ff-logs\") pod \"barbican-api-fd45cdb8b-tgbqw\" (UID: \"8434703b-0a5f-49f0-8877-2048d276f8ff\") " pod="openstack/barbican-api-fd45cdb8b-tgbqw" Jan 26 11:37:05 crc kubenswrapper[4867]: I0126 11:37:05.293601 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8434703b-0a5f-49f0-8877-2048d276f8ff-config-data-custom\") pod \"barbican-api-fd45cdb8b-tgbqw\" (UID: \"8434703b-0a5f-49f0-8877-2048d276f8ff\") " pod="openstack/barbican-api-fd45cdb8b-tgbqw" Jan 26 11:37:05 crc kubenswrapper[4867]: I0126 11:37:05.302956 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8434703b-0a5f-49f0-8877-2048d276f8ff-combined-ca-bundle\") pod \"barbican-api-fd45cdb8b-tgbqw\" (UID: \"8434703b-0a5f-49f0-8877-2048d276f8ff\") " pod="openstack/barbican-api-fd45cdb8b-tgbqw" Jan 
26 11:37:05 crc kubenswrapper[4867]: I0126 11:37:05.304162 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8434703b-0a5f-49f0-8877-2048d276f8ff-config-data\") pod \"barbican-api-fd45cdb8b-tgbqw\" (UID: \"8434703b-0a5f-49f0-8877-2048d276f8ff\") " pod="openstack/barbican-api-fd45cdb8b-tgbqw" Jan 26 11:37:05 crc kubenswrapper[4867]: I0126 11:37:05.311812 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mgp8m\" (UniqueName: \"kubernetes.io/projected/8434703b-0a5f-49f0-8877-2048d276f8ff-kube-api-access-mgp8m\") pod \"barbican-api-fd45cdb8b-tgbqw\" (UID: \"8434703b-0a5f-49f0-8877-2048d276f8ff\") " pod="openstack/barbican-api-fd45cdb8b-tgbqw" Jan 26 11:37:05 crc kubenswrapper[4867]: I0126 11:37:05.317655 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b2643e95-59cb-42a2-982e-96a7d732e5e4","Type":"ContainerStarted","Data":"f0c7e3e53676b600a0f8387db3e43d43aaf869729a99557deb943b08f2fdfd33"} Jan 26 11:37:05 crc kubenswrapper[4867]: I0126 11:37:05.426670 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-fd45cdb8b-tgbqw" Jan 26 11:37:05 crc kubenswrapper[4867]: I0126 11:37:05.461028 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-6db8644655-m8sn6"] Jan 26 11:37:05 crc kubenswrapper[4867]: I0126 11:37:05.641525 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-5fc6c76976-2w9dm"] Jan 26 11:37:05 crc kubenswrapper[4867]: W0126 11:37:05.650825 4867 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9a534f97_8d45_4418_af77_5e19e2013a0b.slice/crio-123d41195a1bca815e7951a845122b5aee31b6141fe86b0430740500b4b7edea WatchSource:0}: Error finding container 123d41195a1bca815e7951a845122b5aee31b6141fe86b0430740500b4b7edea: Status 404 returned error can't find the container with id 123d41195a1bca815e7951a845122b5aee31b6141fe86b0430740500b4b7edea Jan 26 11:37:05 crc kubenswrapper[4867]: I0126 11:37:05.793378 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7c67bffd47-7snlw"] Jan 26 11:37:05 crc kubenswrapper[4867]: I0126 11:37:05.882444 4867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-sync-m72k2" Jan 26 11:37:06 crc kubenswrapper[4867]: I0126 11:37:06.013297 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/75e847de-1c0c-4ac3-b7ff-c41bfa7a6534-combined-ca-bundle\") pod \"75e847de-1c0c-4ac3-b7ff-c41bfa7a6534\" (UID: \"75e847de-1c0c-4ac3-b7ff-c41bfa7a6534\") " Jan 26 11:37:06 crc kubenswrapper[4867]: I0126 11:37:06.014063 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/75e847de-1c0c-4ac3-b7ff-c41bfa7a6534-config\") pod \"75e847de-1c0c-4ac3-b7ff-c41bfa7a6534\" (UID: \"75e847de-1c0c-4ac3-b7ff-c41bfa7a6534\") " Jan 26 11:37:06 crc kubenswrapper[4867]: I0126 11:37:06.014265 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x8v9n\" (UniqueName: \"kubernetes.io/projected/75e847de-1c0c-4ac3-b7ff-c41bfa7a6534-kube-api-access-x8v9n\") pod \"75e847de-1c0c-4ac3-b7ff-c41bfa7a6534\" (UID: \"75e847de-1c0c-4ac3-b7ff-c41bfa7a6534\") " Jan 26 11:37:06 crc kubenswrapper[4867]: I0126 11:37:06.026308 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/75e847de-1c0c-4ac3-b7ff-c41bfa7a6534-kube-api-access-x8v9n" (OuterVolumeSpecName: "kube-api-access-x8v9n") pod "75e847de-1c0c-4ac3-b7ff-c41bfa7a6534" (UID: "75e847de-1c0c-4ac3-b7ff-c41bfa7a6534"). InnerVolumeSpecName "kube-api-access-x8v9n". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 11:37:06 crc kubenswrapper[4867]: I0126 11:37:06.047501 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/75e847de-1c0c-4ac3-b7ff-c41bfa7a6534-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "75e847de-1c0c-4ac3-b7ff-c41bfa7a6534" (UID: "75e847de-1c0c-4ac3-b7ff-c41bfa7a6534"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 11:37:06 crc kubenswrapper[4867]: I0126 11:37:06.057348 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/75e847de-1c0c-4ac3-b7ff-c41bfa7a6534-config" (OuterVolumeSpecName: "config") pod "75e847de-1c0c-4ac3-b7ff-c41bfa7a6534" (UID: "75e847de-1c0c-4ac3-b7ff-c41bfa7a6534"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 11:37:06 crc kubenswrapper[4867]: I0126 11:37:06.118483 4867 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/75e847de-1c0c-4ac3-b7ff-c41bfa7a6534-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 11:37:06 crc kubenswrapper[4867]: I0126 11:37:06.118521 4867 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/75e847de-1c0c-4ac3-b7ff-c41bfa7a6534-config\") on node \"crc\" DevicePath \"\"" Jan 26 11:37:06 crc kubenswrapper[4867]: I0126 11:37:06.118532 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x8v9n\" (UniqueName: \"kubernetes.io/projected/75e847de-1c0c-4ac3-b7ff-c41bfa7a6534-kube-api-access-x8v9n\") on node \"crc\" DevicePath \"\"" Jan 26 11:37:06 crc kubenswrapper[4867]: I0126 11:37:06.118565 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-fd45cdb8b-tgbqw"] Jan 26 11:37:06 crc kubenswrapper[4867]: W0126 11:37:06.123287 4867 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8434703b_0a5f_49f0_8877_2048d276f8ff.slice/crio-8215f1850fd8ddecdea01cb82cafbf02d74125cdac8c207406e77d22e4e62156 WatchSource:0}: Error finding container 8215f1850fd8ddecdea01cb82cafbf02d74125cdac8c207406e77d22e4e62156: Status 404 returned error can't find the container with id 8215f1850fd8ddecdea01cb82cafbf02d74125cdac8c207406e77d22e4e62156 Jan 26 11:37:06 crc 
kubenswrapper[4867]: I0126 11:37:06.294542 4867 patch_prober.go:28] interesting pod/machine-config-daemon-g6cth container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 11:37:06 crc kubenswrapper[4867]: I0126 11:37:06.294592 4867 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-g6cth" podUID="115cad9f-057f-4e63-b408-8fa7a358a191" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 11:37:06 crc kubenswrapper[4867]: I0126 11:37:06.294625 4867 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-g6cth" Jan 26 11:37:06 crc kubenswrapper[4867]: I0126 11:37:06.295312 4867 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"510e7b8815f2e10ccb07bd14d3cace2ddac464c7ed9719497ae9e906b65ef061"} pod="openshift-machine-config-operator/machine-config-daemon-g6cth" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 26 11:37:06 crc kubenswrapper[4867]: I0126 11:37:06.295364 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-g6cth" podUID="115cad9f-057f-4e63-b408-8fa7a358a191" containerName="machine-config-daemon" containerID="cri-o://510e7b8815f2e10ccb07bd14d3cace2ddac464c7ed9719497ae9e906b65ef061" gracePeriod=600 Jan 26 11:37:06 crc kubenswrapper[4867]: I0126 11:37:06.332339 4867 generic.go:334] "Generic (PLEG): container finished" podID="709b20e3-bfba-4bc0-b5e5-d5a99075091d" 
containerID="76755670085bb34b857a5fb8d3996fb4dc3213326c01652f4fbe1a3f441d6066" exitCode=0 Jan 26 11:37:06 crc kubenswrapper[4867]: I0126 11:37:06.333080 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7c67bffd47-7snlw" event={"ID":"709b20e3-bfba-4bc0-b5e5-d5a99075091d","Type":"ContainerDied","Data":"76755670085bb34b857a5fb8d3996fb4dc3213326c01652f4fbe1a3f441d6066"} Jan 26 11:37:06 crc kubenswrapper[4867]: I0126 11:37:06.333111 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7c67bffd47-7snlw" event={"ID":"709b20e3-bfba-4bc0-b5e5-d5a99075091d","Type":"ContainerStarted","Data":"486756c3374961b133782501d1a70d2d168abdde0094df28e2fc300dbc8a10f9"} Jan 26 11:37:06 crc kubenswrapper[4867]: I0126 11:37:06.337839 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-m72k2" event={"ID":"75e847de-1c0c-4ac3-b7ff-c41bfa7a6534","Type":"ContainerDied","Data":"7c047cbe90862887dd0cf6ad99db2b617c5afb37325fd02e965918a054e48ea3"} Jan 26 11:37:06 crc kubenswrapper[4867]: I0126 11:37:06.338012 4867 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7c047cbe90862887dd0cf6ad99db2b617c5afb37325fd02e965918a054e48ea3" Jan 26 11:37:06 crc kubenswrapper[4867]: I0126 11:37:06.338121 4867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-sync-m72k2" Jan 26 11:37:06 crc kubenswrapper[4867]: I0126 11:37:06.359209 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-fd45cdb8b-tgbqw" event={"ID":"8434703b-0a5f-49f0-8877-2048d276f8ff","Type":"ContainerStarted","Data":"8215f1850fd8ddecdea01cb82cafbf02d74125cdac8c207406e77d22e4e62156"} Jan 26 11:37:06 crc kubenswrapper[4867]: I0126 11:37:06.389678 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-6db8644655-m8sn6" event={"ID":"f568d082-7794-4f60-b78e-bff0b6b6356f","Type":"ContainerStarted","Data":"0dd16c1e33d0880fd58acaef3158e352666eb69f03830642419d4f38ca15562e"} Jan 26 11:37:06 crc kubenswrapper[4867]: I0126 11:37:06.407047 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-5fc6c76976-2w9dm" event={"ID":"9a534f97-8d45-4418-af77-5e19e2013a0b","Type":"ContainerStarted","Data":"123d41195a1bca815e7951a845122b5aee31b6141fe86b0430740500b4b7edea"} Jan 26 11:37:06 crc kubenswrapper[4867]: I0126 11:37:06.653074 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7c67bffd47-7snlw"] Jan 26 11:37:06 crc kubenswrapper[4867]: I0126 11:37:06.679210 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-848cf88cfc-z76z5"] Jan 26 11:37:06 crc kubenswrapper[4867]: E0126 11:37:06.683392 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="75e847de-1c0c-4ac3-b7ff-c41bfa7a6534" containerName="neutron-db-sync" Jan 26 11:37:06 crc kubenswrapper[4867]: I0126 11:37:06.683438 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="75e847de-1c0c-4ac3-b7ff-c41bfa7a6534" containerName="neutron-db-sync" Jan 26 11:37:06 crc kubenswrapper[4867]: I0126 11:37:06.683780 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="75e847de-1c0c-4ac3-b7ff-c41bfa7a6534" containerName="neutron-db-sync" Jan 26 11:37:06 crc kubenswrapper[4867]: I0126 
11:37:06.684796 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-848cf88cfc-z76z5" Jan 26 11:37:06 crc kubenswrapper[4867]: I0126 11:37:06.709803 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-848cf88cfc-z76z5"] Jan 26 11:37:06 crc kubenswrapper[4867]: I0126 11:37:06.811852 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-7bc467f664-6zfb4"] Jan 26 11:37:06 crc kubenswrapper[4867]: I0126 11:37:06.822007 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-7bc467f664-6zfb4" Jan 26 11:37:06 crc kubenswrapper[4867]: I0126 11:37:06.827151 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Jan 26 11:37:06 crc kubenswrapper[4867]: I0126 11:37:06.827502 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-ndxbr" Jan 26 11:37:06 crc kubenswrapper[4867]: E0126 11:37:06.827661 4867 log.go:32] "CreateContainer in sandbox from runtime service failed" err=< Jan 26 11:37:06 crc kubenswrapper[4867]: rpc error: code = Unknown desc = container create failed: mount `/var/lib/kubelet/pods/709b20e3-bfba-4bc0-b5e5-d5a99075091d/volume-subpaths/dns-svc/dnsmasq-dns/1` to `etc/dnsmasq.d/hosts/dns-svc`: No such file or directory Jan 26 11:37:06 crc kubenswrapper[4867]: > podSandboxID="486756c3374961b133782501d1a70d2d168abdde0094df28e2fc300dbc8a10f9" Jan 26 11:37:06 crc kubenswrapper[4867]: I0126 11:37:06.827754 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Jan 26 11:37:06 crc kubenswrapper[4867]: E0126 11:37:06.827786 4867 kuberuntime_manager.go:1274] "Unhandled Error" err=< Jan 26 11:37:06 crc kubenswrapper[4867]: container &Container{Name:dnsmasq-dns,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* 
--conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n574hbch97h666hbbh5fch555h5ddh649h699hf4h9ch6h699h55h5b7h5b9h5d5hf6h686h5cfh599h594h559h645h699h55h5f8h54ch555h55bh655q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-swift-storage-0,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-swift-storage-0,SubPath:dns-swift-storage-0,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovsdbserver-nb,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/ovsdbserver-nb,SubPath:ovsdbserver-nb,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovsdbserver-sb,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/ovsdbserver-sb,SubPath:ovsdbserver-sb,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rcqvf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:nil,TCPSocket:&TCPSocketAction{Port:{0 5353 
},Host:,},GRPC:nil,},InitialDelaySeconds:3,TimeoutSeconds:5,PeriodSeconds:3,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:nil,TCPSocket:&TCPSocketAction{Port:{0 5353 },Host:,},GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-7c67bffd47-7snlw_openstack(709b20e3-bfba-4bc0-b5e5-d5a99075091d): CreateContainerError: container create failed: mount `/var/lib/kubelet/pods/709b20e3-bfba-4bc0-b5e5-d5a99075091d/volume-subpaths/dns-svc/dnsmasq-dns/1` to `etc/dnsmasq.d/hosts/dns-svc`: No such file or directory Jan 26 11:37:06 crc kubenswrapper[4867]: > logger="UnhandledError" Jan 26 11:37:06 crc kubenswrapper[4867]: I0126 11:37:06.827910 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-ovndbs" Jan 26 11:37:06 crc kubenswrapper[4867]: E0126 11:37:06.837051 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dnsmasq-dns\" with CreateContainerError: \"container create failed: mount `/var/lib/kubelet/pods/709b20e3-bfba-4bc0-b5e5-d5a99075091d/volume-subpaths/dns-svc/dnsmasq-dns/1` to `etc/dnsmasq.d/hosts/dns-svc`: No such file or directory\\n\"" 
pod="openstack/dnsmasq-dns-7c67bffd47-7snlw" podUID="709b20e3-bfba-4bc0-b5e5-d5a99075091d" Jan 26 11:37:06 crc kubenswrapper[4867]: I0126 11:37:06.847136 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/595e878d-5361-4fef-81d8-ca3d00e79685-ovsdbserver-sb\") pod \"dnsmasq-dns-848cf88cfc-z76z5\" (UID: \"595e878d-5361-4fef-81d8-ca3d00e79685\") " pod="openstack/dnsmasq-dns-848cf88cfc-z76z5" Jan 26 11:37:06 crc kubenswrapper[4867]: I0126 11:37:06.847188 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/595e878d-5361-4fef-81d8-ca3d00e79685-config\") pod \"dnsmasq-dns-848cf88cfc-z76z5\" (UID: \"595e878d-5361-4fef-81d8-ca3d00e79685\") " pod="openstack/dnsmasq-dns-848cf88cfc-z76z5" Jan 26 11:37:06 crc kubenswrapper[4867]: I0126 11:37:06.847276 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/595e878d-5361-4fef-81d8-ca3d00e79685-dns-swift-storage-0\") pod \"dnsmasq-dns-848cf88cfc-z76z5\" (UID: \"595e878d-5361-4fef-81d8-ca3d00e79685\") " pod="openstack/dnsmasq-dns-848cf88cfc-z76z5" Jan 26 11:37:06 crc kubenswrapper[4867]: I0126 11:37:06.847297 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/595e878d-5361-4fef-81d8-ca3d00e79685-dns-svc\") pod \"dnsmasq-dns-848cf88cfc-z76z5\" (UID: \"595e878d-5361-4fef-81d8-ca3d00e79685\") " pod="openstack/dnsmasq-dns-848cf88cfc-z76z5" Jan 26 11:37:06 crc kubenswrapper[4867]: I0126 11:37:06.847500 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/595e878d-5361-4fef-81d8-ca3d00e79685-ovsdbserver-nb\") pod 
\"dnsmasq-dns-848cf88cfc-z76z5\" (UID: \"595e878d-5361-4fef-81d8-ca3d00e79685\") " pod="openstack/dnsmasq-dns-848cf88cfc-z76z5" Jan 26 11:37:06 crc kubenswrapper[4867]: I0126 11:37:06.847545 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dhf7w\" (UniqueName: \"kubernetes.io/projected/595e878d-5361-4fef-81d8-ca3d00e79685-kube-api-access-dhf7w\") pod \"dnsmasq-dns-848cf88cfc-z76z5\" (UID: \"595e878d-5361-4fef-81d8-ca3d00e79685\") " pod="openstack/dnsmasq-dns-848cf88cfc-z76z5" Jan 26 11:37:06 crc kubenswrapper[4867]: I0126 11:37:06.854017 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-7bc467f664-6zfb4"] Jan 26 11:37:06 crc kubenswrapper[4867]: I0126 11:37:06.949193 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/595e878d-5361-4fef-81d8-ca3d00e79685-dns-swift-storage-0\") pod \"dnsmasq-dns-848cf88cfc-z76z5\" (UID: \"595e878d-5361-4fef-81d8-ca3d00e79685\") " pod="openstack/dnsmasq-dns-848cf88cfc-z76z5" Jan 26 11:37:06 crc kubenswrapper[4867]: I0126 11:37:06.950277 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/595e878d-5361-4fef-81d8-ca3d00e79685-dns-svc\") pod \"dnsmasq-dns-848cf88cfc-z76z5\" (UID: \"595e878d-5361-4fef-81d8-ca3d00e79685\") " pod="openstack/dnsmasq-dns-848cf88cfc-z76z5" Jan 26 11:37:06 crc kubenswrapper[4867]: I0126 11:37:06.950320 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m6zd8\" (UniqueName: \"kubernetes.io/projected/42c3a4bd-76c9-4a0b-b252-775ce7ebc2aa-kube-api-access-m6zd8\") pod \"neutron-7bc467f664-6zfb4\" (UID: \"42c3a4bd-76c9-4a0b-b252-775ce7ebc2aa\") " pod="openstack/neutron-7bc467f664-6zfb4" Jan 26 11:37:06 crc kubenswrapper[4867]: I0126 11:37:06.950368 4867 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/42c3a4bd-76c9-4a0b-b252-775ce7ebc2aa-config\") pod \"neutron-7bc467f664-6zfb4\" (UID: \"42c3a4bd-76c9-4a0b-b252-775ce7ebc2aa\") " pod="openstack/neutron-7bc467f664-6zfb4" Jan 26 11:37:06 crc kubenswrapper[4867]: I0126 11:37:06.950389 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/42c3a4bd-76c9-4a0b-b252-775ce7ebc2aa-ovndb-tls-certs\") pod \"neutron-7bc467f664-6zfb4\" (UID: \"42c3a4bd-76c9-4a0b-b252-775ce7ebc2aa\") " pod="openstack/neutron-7bc467f664-6zfb4" Jan 26 11:37:06 crc kubenswrapper[4867]: I0126 11:37:06.950415 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/595e878d-5361-4fef-81d8-ca3d00e79685-ovsdbserver-nb\") pod \"dnsmasq-dns-848cf88cfc-z76z5\" (UID: \"595e878d-5361-4fef-81d8-ca3d00e79685\") " pod="openstack/dnsmasq-dns-848cf88cfc-z76z5" Jan 26 11:37:06 crc kubenswrapper[4867]: I0126 11:37:06.950432 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dhf7w\" (UniqueName: \"kubernetes.io/projected/595e878d-5361-4fef-81d8-ca3d00e79685-kube-api-access-dhf7w\") pod \"dnsmasq-dns-848cf88cfc-z76z5\" (UID: \"595e878d-5361-4fef-81d8-ca3d00e79685\") " pod="openstack/dnsmasq-dns-848cf88cfc-z76z5" Jan 26 11:37:06 crc kubenswrapper[4867]: I0126 11:37:06.950465 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/595e878d-5361-4fef-81d8-ca3d00e79685-ovsdbserver-sb\") pod \"dnsmasq-dns-848cf88cfc-z76z5\" (UID: \"595e878d-5361-4fef-81d8-ca3d00e79685\") " pod="openstack/dnsmasq-dns-848cf88cfc-z76z5" Jan 26 11:37:06 crc kubenswrapper[4867]: I0126 11:37:06.950494 4867 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"config\" (UniqueName: \"kubernetes.io/configmap/595e878d-5361-4fef-81d8-ca3d00e79685-config\") pod \"dnsmasq-dns-848cf88cfc-z76z5\" (UID: \"595e878d-5361-4fef-81d8-ca3d00e79685\") " pod="openstack/dnsmasq-dns-848cf88cfc-z76z5" Jan 26 11:37:06 crc kubenswrapper[4867]: I0126 11:37:06.950213 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/595e878d-5361-4fef-81d8-ca3d00e79685-dns-swift-storage-0\") pod \"dnsmasq-dns-848cf88cfc-z76z5\" (UID: \"595e878d-5361-4fef-81d8-ca3d00e79685\") " pod="openstack/dnsmasq-dns-848cf88cfc-z76z5" Jan 26 11:37:06 crc kubenswrapper[4867]: I0126 11:37:06.951194 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/595e878d-5361-4fef-81d8-ca3d00e79685-dns-svc\") pod \"dnsmasq-dns-848cf88cfc-z76z5\" (UID: \"595e878d-5361-4fef-81d8-ca3d00e79685\") " pod="openstack/dnsmasq-dns-848cf88cfc-z76z5" Jan 26 11:37:06 crc kubenswrapper[4867]: I0126 11:37:06.951255 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/42c3a4bd-76c9-4a0b-b252-775ce7ebc2aa-httpd-config\") pod \"neutron-7bc467f664-6zfb4\" (UID: \"42c3a4bd-76c9-4a0b-b252-775ce7ebc2aa\") " pod="openstack/neutron-7bc467f664-6zfb4" Jan 26 11:37:06 crc kubenswrapper[4867]: I0126 11:37:06.951322 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/42c3a4bd-76c9-4a0b-b252-775ce7ebc2aa-combined-ca-bundle\") pod \"neutron-7bc467f664-6zfb4\" (UID: \"42c3a4bd-76c9-4a0b-b252-775ce7ebc2aa\") " pod="openstack/neutron-7bc467f664-6zfb4" Jan 26 11:37:06 crc kubenswrapper[4867]: I0126 11:37:06.951445 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: 
\"kubernetes.io/configmap/595e878d-5361-4fef-81d8-ca3d00e79685-ovsdbserver-nb\") pod \"dnsmasq-dns-848cf88cfc-z76z5\" (UID: \"595e878d-5361-4fef-81d8-ca3d00e79685\") " pod="openstack/dnsmasq-dns-848cf88cfc-z76z5" Jan 26 11:37:06 crc kubenswrapper[4867]: I0126 11:37:06.951516 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/595e878d-5361-4fef-81d8-ca3d00e79685-ovsdbserver-sb\") pod \"dnsmasq-dns-848cf88cfc-z76z5\" (UID: \"595e878d-5361-4fef-81d8-ca3d00e79685\") " pod="openstack/dnsmasq-dns-848cf88cfc-z76z5" Jan 26 11:37:06 crc kubenswrapper[4867]: I0126 11:37:06.951857 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/595e878d-5361-4fef-81d8-ca3d00e79685-config\") pod \"dnsmasq-dns-848cf88cfc-z76z5\" (UID: \"595e878d-5361-4fef-81d8-ca3d00e79685\") " pod="openstack/dnsmasq-dns-848cf88cfc-z76z5" Jan 26 11:37:06 crc kubenswrapper[4867]: I0126 11:37:06.974309 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dhf7w\" (UniqueName: \"kubernetes.io/projected/595e878d-5361-4fef-81d8-ca3d00e79685-kube-api-access-dhf7w\") pod \"dnsmasq-dns-848cf88cfc-z76z5\" (UID: \"595e878d-5361-4fef-81d8-ca3d00e79685\") " pod="openstack/dnsmasq-dns-848cf88cfc-z76z5" Jan 26 11:37:07 crc kubenswrapper[4867]: I0126 11:37:07.021769 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-848cf88cfc-z76z5" Jan 26 11:37:07 crc kubenswrapper[4867]: I0126 11:37:07.053302 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/42c3a4bd-76c9-4a0b-b252-775ce7ebc2aa-httpd-config\") pod \"neutron-7bc467f664-6zfb4\" (UID: \"42c3a4bd-76c9-4a0b-b252-775ce7ebc2aa\") " pod="openstack/neutron-7bc467f664-6zfb4" Jan 26 11:37:07 crc kubenswrapper[4867]: I0126 11:37:07.053365 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/42c3a4bd-76c9-4a0b-b252-775ce7ebc2aa-combined-ca-bundle\") pod \"neutron-7bc467f664-6zfb4\" (UID: \"42c3a4bd-76c9-4a0b-b252-775ce7ebc2aa\") " pod="openstack/neutron-7bc467f664-6zfb4" Jan 26 11:37:07 crc kubenswrapper[4867]: I0126 11:37:07.053413 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m6zd8\" (UniqueName: \"kubernetes.io/projected/42c3a4bd-76c9-4a0b-b252-775ce7ebc2aa-kube-api-access-m6zd8\") pod \"neutron-7bc467f664-6zfb4\" (UID: \"42c3a4bd-76c9-4a0b-b252-775ce7ebc2aa\") " pod="openstack/neutron-7bc467f664-6zfb4" Jan 26 11:37:07 crc kubenswrapper[4867]: I0126 11:37:07.053454 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/42c3a4bd-76c9-4a0b-b252-775ce7ebc2aa-config\") pod \"neutron-7bc467f664-6zfb4\" (UID: \"42c3a4bd-76c9-4a0b-b252-775ce7ebc2aa\") " pod="openstack/neutron-7bc467f664-6zfb4" Jan 26 11:37:07 crc kubenswrapper[4867]: I0126 11:37:07.053476 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/42c3a4bd-76c9-4a0b-b252-775ce7ebc2aa-ovndb-tls-certs\") pod \"neutron-7bc467f664-6zfb4\" (UID: \"42c3a4bd-76c9-4a0b-b252-775ce7ebc2aa\") " pod="openstack/neutron-7bc467f664-6zfb4" Jan 26 11:37:07 crc 
kubenswrapper[4867]: I0126 11:37:07.059436 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/42c3a4bd-76c9-4a0b-b252-775ce7ebc2aa-httpd-config\") pod \"neutron-7bc467f664-6zfb4\" (UID: \"42c3a4bd-76c9-4a0b-b252-775ce7ebc2aa\") " pod="openstack/neutron-7bc467f664-6zfb4" Jan 26 11:37:07 crc kubenswrapper[4867]: I0126 11:37:07.061505 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/42c3a4bd-76c9-4a0b-b252-775ce7ebc2aa-combined-ca-bundle\") pod \"neutron-7bc467f664-6zfb4\" (UID: \"42c3a4bd-76c9-4a0b-b252-775ce7ebc2aa\") " pod="openstack/neutron-7bc467f664-6zfb4" Jan 26 11:37:07 crc kubenswrapper[4867]: I0126 11:37:07.064927 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/42c3a4bd-76c9-4a0b-b252-775ce7ebc2aa-config\") pod \"neutron-7bc467f664-6zfb4\" (UID: \"42c3a4bd-76c9-4a0b-b252-775ce7ebc2aa\") " pod="openstack/neutron-7bc467f664-6zfb4" Jan 26 11:37:07 crc kubenswrapper[4867]: I0126 11:37:07.066652 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/42c3a4bd-76c9-4a0b-b252-775ce7ebc2aa-ovndb-tls-certs\") pod \"neutron-7bc467f664-6zfb4\" (UID: \"42c3a4bd-76c9-4a0b-b252-775ce7ebc2aa\") " pod="openstack/neutron-7bc467f664-6zfb4" Jan 26 11:37:07 crc kubenswrapper[4867]: I0126 11:37:07.073647 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m6zd8\" (UniqueName: \"kubernetes.io/projected/42c3a4bd-76c9-4a0b-b252-775ce7ebc2aa-kube-api-access-m6zd8\") pod \"neutron-7bc467f664-6zfb4\" (UID: \"42c3a4bd-76c9-4a0b-b252-775ce7ebc2aa\") " pod="openstack/neutron-7bc467f664-6zfb4" Jan 26 11:37:07 crc kubenswrapper[4867]: I0126 11:37:07.188786 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-7bc467f664-6zfb4" Jan 26 11:37:07 crc kubenswrapper[4867]: I0126 11:37:07.317719 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-848cf88cfc-z76z5"] Jan 26 11:37:07 crc kubenswrapper[4867]: I0126 11:37:07.417412 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-848cf88cfc-z76z5" event={"ID":"595e878d-5361-4fef-81d8-ca3d00e79685","Type":"ContainerStarted","Data":"efc9a943fd8049363f1868997c15edf0c4677d163fd6146b7ab782986babab89"} Jan 26 11:37:07 crc kubenswrapper[4867]: I0126 11:37:07.419303 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b2643e95-59cb-42a2-982e-96a7d732e5e4","Type":"ContainerStarted","Data":"dd823c865671eed3b9056413ff43d7b65162230282ac78a467be7e9cfae7dccb"} Jan 26 11:37:07 crc kubenswrapper[4867]: I0126 11:37:07.804437 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7c67bffd47-7snlw" Jan 26 11:37:07 crc kubenswrapper[4867]: I0126 11:37:07.969702 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/709b20e3-bfba-4bc0-b5e5-d5a99075091d-config\") pod \"709b20e3-bfba-4bc0-b5e5-d5a99075091d\" (UID: \"709b20e3-bfba-4bc0-b5e5-d5a99075091d\") " Jan 26 11:37:07 crc kubenswrapper[4867]: I0126 11:37:07.969866 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/709b20e3-bfba-4bc0-b5e5-d5a99075091d-ovsdbserver-nb\") pod \"709b20e3-bfba-4bc0-b5e5-d5a99075091d\" (UID: \"709b20e3-bfba-4bc0-b5e5-d5a99075091d\") " Jan 26 11:37:07 crc kubenswrapper[4867]: I0126 11:37:07.969898 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/709b20e3-bfba-4bc0-b5e5-d5a99075091d-ovsdbserver-sb\") pod 
\"709b20e3-bfba-4bc0-b5e5-d5a99075091d\" (UID: \"709b20e3-bfba-4bc0-b5e5-d5a99075091d\") " Jan 26 11:37:07 crc kubenswrapper[4867]: I0126 11:37:07.969929 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rcqvf\" (UniqueName: \"kubernetes.io/projected/709b20e3-bfba-4bc0-b5e5-d5a99075091d-kube-api-access-rcqvf\") pod \"709b20e3-bfba-4bc0-b5e5-d5a99075091d\" (UID: \"709b20e3-bfba-4bc0-b5e5-d5a99075091d\") " Jan 26 11:37:07 crc kubenswrapper[4867]: I0126 11:37:07.969961 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/709b20e3-bfba-4bc0-b5e5-d5a99075091d-dns-svc\") pod \"709b20e3-bfba-4bc0-b5e5-d5a99075091d\" (UID: \"709b20e3-bfba-4bc0-b5e5-d5a99075091d\") " Jan 26 11:37:07 crc kubenswrapper[4867]: I0126 11:37:07.969993 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/709b20e3-bfba-4bc0-b5e5-d5a99075091d-dns-swift-storage-0\") pod \"709b20e3-bfba-4bc0-b5e5-d5a99075091d\" (UID: \"709b20e3-bfba-4bc0-b5e5-d5a99075091d\") " Jan 26 11:37:07 crc kubenswrapper[4867]: I0126 11:37:07.976027 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/709b20e3-bfba-4bc0-b5e5-d5a99075091d-kube-api-access-rcqvf" (OuterVolumeSpecName: "kube-api-access-rcqvf") pod "709b20e3-bfba-4bc0-b5e5-d5a99075091d" (UID: "709b20e3-bfba-4bc0-b5e5-d5a99075091d"). InnerVolumeSpecName "kube-api-access-rcqvf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 11:37:08 crc kubenswrapper[4867]: I0126 11:37:08.019853 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/709b20e3-bfba-4bc0-b5e5-d5a99075091d-config" (OuterVolumeSpecName: "config") pod "709b20e3-bfba-4bc0-b5e5-d5a99075091d" (UID: "709b20e3-bfba-4bc0-b5e5-d5a99075091d"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 11:37:08 crc kubenswrapper[4867]: I0126 11:37:08.035019 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/709b20e3-bfba-4bc0-b5e5-d5a99075091d-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "709b20e3-bfba-4bc0-b5e5-d5a99075091d" (UID: "709b20e3-bfba-4bc0-b5e5-d5a99075091d"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 11:37:08 crc kubenswrapper[4867]: I0126 11:37:08.035920 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/709b20e3-bfba-4bc0-b5e5-d5a99075091d-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "709b20e3-bfba-4bc0-b5e5-d5a99075091d" (UID: "709b20e3-bfba-4bc0-b5e5-d5a99075091d"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 11:37:08 crc kubenswrapper[4867]: I0126 11:37:08.036347 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/709b20e3-bfba-4bc0-b5e5-d5a99075091d-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "709b20e3-bfba-4bc0-b5e5-d5a99075091d" (UID: "709b20e3-bfba-4bc0-b5e5-d5a99075091d"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 11:37:08 crc kubenswrapper[4867]: I0126 11:37:08.041812 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/709b20e3-bfba-4bc0-b5e5-d5a99075091d-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "709b20e3-bfba-4bc0-b5e5-d5a99075091d" (UID: "709b20e3-bfba-4bc0-b5e5-d5a99075091d"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 11:37:08 crc kubenswrapper[4867]: I0126 11:37:08.073805 4867 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/709b20e3-bfba-4bc0-b5e5-d5a99075091d-config\") on node \"crc\" DevicePath \"\"" Jan 26 11:37:08 crc kubenswrapper[4867]: I0126 11:37:08.073857 4867 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/709b20e3-bfba-4bc0-b5e5-d5a99075091d-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 26 11:37:08 crc kubenswrapper[4867]: I0126 11:37:08.073870 4867 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/709b20e3-bfba-4bc0-b5e5-d5a99075091d-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 26 11:37:08 crc kubenswrapper[4867]: I0126 11:37:08.073881 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rcqvf\" (UniqueName: \"kubernetes.io/projected/709b20e3-bfba-4bc0-b5e5-d5a99075091d-kube-api-access-rcqvf\") on node \"crc\" DevicePath \"\"" Jan 26 11:37:08 crc kubenswrapper[4867]: I0126 11:37:08.073892 4867 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/709b20e3-bfba-4bc0-b5e5-d5a99075091d-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 26 11:37:08 crc kubenswrapper[4867]: I0126 11:37:08.073906 4867 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/709b20e3-bfba-4bc0-b5e5-d5a99075091d-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 26 11:37:08 crc kubenswrapper[4867]: I0126 11:37:08.223266 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-7bc467f664-6zfb4"] Jan 26 11:37:08 crc kubenswrapper[4867]: I0126 11:37:08.432693 4867 generic.go:334] "Generic (PLEG): container finished" podID="595e878d-5361-4fef-81d8-ca3d00e79685" 
containerID="2097fd13ffe9f9966d57c9a6338eaa404c2e44ff41a6f52ca34b2c4f71099a74" exitCode=0 Jan 26 11:37:08 crc kubenswrapper[4867]: I0126 11:37:08.433037 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-848cf88cfc-z76z5" event={"ID":"595e878d-5361-4fef-81d8-ca3d00e79685","Type":"ContainerDied","Data":"2097fd13ffe9f9966d57c9a6338eaa404c2e44ff41a6f52ca34b2c4f71099a74"} Jan 26 11:37:08 crc kubenswrapper[4867]: I0126 11:37:08.437338 4867 generic.go:334] "Generic (PLEG): container finished" podID="115cad9f-057f-4e63-b408-8fa7a358a191" containerID="510e7b8815f2e10ccb07bd14d3cace2ddac464c7ed9719497ae9e906b65ef061" exitCode=0 Jan 26 11:37:08 crc kubenswrapper[4867]: I0126 11:37:08.437448 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-g6cth" event={"ID":"115cad9f-057f-4e63-b408-8fa7a358a191","Type":"ContainerDied","Data":"510e7b8815f2e10ccb07bd14d3cace2ddac464c7ed9719497ae9e906b65ef061"} Jan 26 11:37:08 crc kubenswrapper[4867]: I0126 11:37:08.437520 4867 scope.go:117] "RemoveContainer" containerID="f4568ef927141a7a2944fe130fff11fd99ada292de5ff857f1ccce612a5d941d" Jan 26 11:37:08 crc kubenswrapper[4867]: I0126 11:37:08.440547 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7c67bffd47-7snlw" event={"ID":"709b20e3-bfba-4bc0-b5e5-d5a99075091d","Type":"ContainerDied","Data":"486756c3374961b133782501d1a70d2d168abdde0094df28e2fc300dbc8a10f9"} Jan 26 11:37:08 crc kubenswrapper[4867]: I0126 11:37:08.440649 4867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7c67bffd47-7snlw" Jan 26 11:37:08 crc kubenswrapper[4867]: I0126 11:37:08.476538 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-fd45cdb8b-tgbqw" event={"ID":"8434703b-0a5f-49f0-8877-2048d276f8ff","Type":"ContainerStarted","Data":"27d71e31cf8c65c6dbc4a31b200b8d165237f01a1f12d8b6878de8c1143c58a3"} Jan 26 11:37:08 crc kubenswrapper[4867]: I0126 11:37:08.481049 4867 generic.go:334] "Generic (PLEG): container finished" podID="d28fe2ce-f40e-4f37-9d27-57d14376fc5d" containerID="56eb17817f1a9ae4d04be82a79ad6e5d9ee52e4a1df45b9925caa83b5e230843" exitCode=0 Jan 26 11:37:08 crc kubenswrapper[4867]: I0126 11:37:08.481107 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-2kgmw" event={"ID":"d28fe2ce-f40e-4f37-9d27-57d14376fc5d","Type":"ContainerDied","Data":"56eb17817f1a9ae4d04be82a79ad6e5d9ee52e4a1df45b9925caa83b5e230843"} Jan 26 11:37:08 crc kubenswrapper[4867]: I0126 11:37:08.534379 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7c67bffd47-7snlw"] Jan 26 11:37:08 crc kubenswrapper[4867]: I0126 11:37:08.541960 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7c67bffd47-7snlw"] Jan 26 11:37:08 crc kubenswrapper[4867]: I0126 11:37:08.585165 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="709b20e3-bfba-4bc0-b5e5-d5a99075091d" path="/var/lib/kubelet/pods/709b20e3-bfba-4bc0-b5e5-d5a99075091d/volumes" Jan 26 11:37:09 crc kubenswrapper[4867]: I0126 11:37:09.491978 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7bc467f664-6zfb4" event={"ID":"42c3a4bd-76c9-4a0b-b252-775ce7ebc2aa","Type":"ContainerStarted","Data":"6d090696888377934a5a07ff59f0258e1e6a8dbff8e2207b576311041a6ad02c"} Jan 26 11:37:09 crc kubenswrapper[4867]: I0126 11:37:09.907190 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-647b685f9-49zj6"] Jan 26 
11:37:09 crc kubenswrapper[4867]: E0126 11:37:09.908105 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="709b20e3-bfba-4bc0-b5e5-d5a99075091d" containerName="init" Jan 26 11:37:09 crc kubenswrapper[4867]: I0126 11:37:09.908129 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="709b20e3-bfba-4bc0-b5e5-d5a99075091d" containerName="init" Jan 26 11:37:09 crc kubenswrapper[4867]: I0126 11:37:09.908386 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="709b20e3-bfba-4bc0-b5e5-d5a99075091d" containerName="init" Jan 26 11:37:09 crc kubenswrapper[4867]: I0126 11:37:09.909569 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-647b685f9-49zj6" Jan 26 11:37:09 crc kubenswrapper[4867]: I0126 11:37:09.913317 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-internal-svc" Jan 26 11:37:09 crc kubenswrapper[4867]: I0126 11:37:09.913450 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-public-svc" Jan 26 11:37:09 crc kubenswrapper[4867]: I0126 11:37:09.926291 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-647b685f9-49zj6"] Jan 26 11:37:10 crc kubenswrapper[4867]: I0126 11:37:10.022524 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ebfa4a8c-62e9-489e-a7f6-d3f2ce2d05d9-internal-tls-certs\") pod \"neutron-647b685f9-49zj6\" (UID: \"ebfa4a8c-62e9-489e-a7f6-d3f2ce2d05d9\") " pod="openstack/neutron-647b685f9-49zj6" Jan 26 11:37:10 crc kubenswrapper[4867]: I0126 11:37:10.022616 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/ebfa4a8c-62e9-489e-a7f6-d3f2ce2d05d9-httpd-config\") pod \"neutron-647b685f9-49zj6\" (UID: \"ebfa4a8c-62e9-489e-a7f6-d3f2ce2d05d9\") " 
pod="openstack/neutron-647b685f9-49zj6" Jan 26 11:37:10 crc kubenswrapper[4867]: I0126 11:37:10.022657 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/ebfa4a8c-62e9-489e-a7f6-d3f2ce2d05d9-ovndb-tls-certs\") pod \"neutron-647b685f9-49zj6\" (UID: \"ebfa4a8c-62e9-489e-a7f6-d3f2ce2d05d9\") " pod="openstack/neutron-647b685f9-49zj6" Jan 26 11:37:10 crc kubenswrapper[4867]: I0126 11:37:10.022687 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ebfa4a8c-62e9-489e-a7f6-d3f2ce2d05d9-public-tls-certs\") pod \"neutron-647b685f9-49zj6\" (UID: \"ebfa4a8c-62e9-489e-a7f6-d3f2ce2d05d9\") " pod="openstack/neutron-647b685f9-49zj6" Jan 26 11:37:10 crc kubenswrapper[4867]: I0126 11:37:10.022725 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/ebfa4a8c-62e9-489e-a7f6-d3f2ce2d05d9-config\") pod \"neutron-647b685f9-49zj6\" (UID: \"ebfa4a8c-62e9-489e-a7f6-d3f2ce2d05d9\") " pod="openstack/neutron-647b685f9-49zj6" Jan 26 11:37:10 crc kubenswrapper[4867]: I0126 11:37:10.022746 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ebfa4a8c-62e9-489e-a7f6-d3f2ce2d05d9-combined-ca-bundle\") pod \"neutron-647b685f9-49zj6\" (UID: \"ebfa4a8c-62e9-489e-a7f6-d3f2ce2d05d9\") " pod="openstack/neutron-647b685f9-49zj6" Jan 26 11:37:10 crc kubenswrapper[4867]: I0126 11:37:10.022782 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x6rwp\" (UniqueName: \"kubernetes.io/projected/ebfa4a8c-62e9-489e-a7f6-d3f2ce2d05d9-kube-api-access-x6rwp\") pod \"neutron-647b685f9-49zj6\" (UID: \"ebfa4a8c-62e9-489e-a7f6-d3f2ce2d05d9\") " 
pod="openstack/neutron-647b685f9-49zj6" Jan 26 11:37:10 crc kubenswrapper[4867]: I0126 11:37:10.124550 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/ebfa4a8c-62e9-489e-a7f6-d3f2ce2d05d9-ovndb-tls-certs\") pod \"neutron-647b685f9-49zj6\" (UID: \"ebfa4a8c-62e9-489e-a7f6-d3f2ce2d05d9\") " pod="openstack/neutron-647b685f9-49zj6" Jan 26 11:37:10 crc kubenswrapper[4867]: I0126 11:37:10.124597 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ebfa4a8c-62e9-489e-a7f6-d3f2ce2d05d9-public-tls-certs\") pod \"neutron-647b685f9-49zj6\" (UID: \"ebfa4a8c-62e9-489e-a7f6-d3f2ce2d05d9\") " pod="openstack/neutron-647b685f9-49zj6" Jan 26 11:37:10 crc kubenswrapper[4867]: I0126 11:37:10.124620 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/ebfa4a8c-62e9-489e-a7f6-d3f2ce2d05d9-config\") pod \"neutron-647b685f9-49zj6\" (UID: \"ebfa4a8c-62e9-489e-a7f6-d3f2ce2d05d9\") " pod="openstack/neutron-647b685f9-49zj6" Jan 26 11:37:10 crc kubenswrapper[4867]: I0126 11:37:10.124642 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ebfa4a8c-62e9-489e-a7f6-d3f2ce2d05d9-combined-ca-bundle\") pod \"neutron-647b685f9-49zj6\" (UID: \"ebfa4a8c-62e9-489e-a7f6-d3f2ce2d05d9\") " pod="openstack/neutron-647b685f9-49zj6" Jan 26 11:37:10 crc kubenswrapper[4867]: I0126 11:37:10.124671 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x6rwp\" (UniqueName: \"kubernetes.io/projected/ebfa4a8c-62e9-489e-a7f6-d3f2ce2d05d9-kube-api-access-x6rwp\") pod \"neutron-647b685f9-49zj6\" (UID: \"ebfa4a8c-62e9-489e-a7f6-d3f2ce2d05d9\") " pod="openstack/neutron-647b685f9-49zj6" Jan 26 11:37:10 crc kubenswrapper[4867]: I0126 11:37:10.124761 
4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ebfa4a8c-62e9-489e-a7f6-d3f2ce2d05d9-internal-tls-certs\") pod \"neutron-647b685f9-49zj6\" (UID: \"ebfa4a8c-62e9-489e-a7f6-d3f2ce2d05d9\") " pod="openstack/neutron-647b685f9-49zj6" Jan 26 11:37:10 crc kubenswrapper[4867]: I0126 11:37:10.124809 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/ebfa4a8c-62e9-489e-a7f6-d3f2ce2d05d9-httpd-config\") pod \"neutron-647b685f9-49zj6\" (UID: \"ebfa4a8c-62e9-489e-a7f6-d3f2ce2d05d9\") " pod="openstack/neutron-647b685f9-49zj6" Jan 26 11:37:10 crc kubenswrapper[4867]: I0126 11:37:10.130955 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/ebfa4a8c-62e9-489e-a7f6-d3f2ce2d05d9-config\") pod \"neutron-647b685f9-49zj6\" (UID: \"ebfa4a8c-62e9-489e-a7f6-d3f2ce2d05d9\") " pod="openstack/neutron-647b685f9-49zj6" Jan 26 11:37:10 crc kubenswrapper[4867]: I0126 11:37:10.131311 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ebfa4a8c-62e9-489e-a7f6-d3f2ce2d05d9-public-tls-certs\") pod \"neutron-647b685f9-49zj6\" (UID: \"ebfa4a8c-62e9-489e-a7f6-d3f2ce2d05d9\") " pod="openstack/neutron-647b685f9-49zj6" Jan 26 11:37:10 crc kubenswrapper[4867]: I0126 11:37:10.131692 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ebfa4a8c-62e9-489e-a7f6-d3f2ce2d05d9-internal-tls-certs\") pod \"neutron-647b685f9-49zj6\" (UID: \"ebfa4a8c-62e9-489e-a7f6-d3f2ce2d05d9\") " pod="openstack/neutron-647b685f9-49zj6" Jan 26 11:37:10 crc kubenswrapper[4867]: I0126 11:37:10.144795 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/ebfa4a8c-62e9-489e-a7f6-d3f2ce2d05d9-combined-ca-bundle\") pod \"neutron-647b685f9-49zj6\" (UID: \"ebfa4a8c-62e9-489e-a7f6-d3f2ce2d05d9\") " pod="openstack/neutron-647b685f9-49zj6" Jan 26 11:37:10 crc kubenswrapper[4867]: I0126 11:37:10.146014 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x6rwp\" (UniqueName: \"kubernetes.io/projected/ebfa4a8c-62e9-489e-a7f6-d3f2ce2d05d9-kube-api-access-x6rwp\") pod \"neutron-647b685f9-49zj6\" (UID: \"ebfa4a8c-62e9-489e-a7f6-d3f2ce2d05d9\") " pod="openstack/neutron-647b685f9-49zj6" Jan 26 11:37:10 crc kubenswrapper[4867]: I0126 11:37:10.147236 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/ebfa4a8c-62e9-489e-a7f6-d3f2ce2d05d9-httpd-config\") pod \"neutron-647b685f9-49zj6\" (UID: \"ebfa4a8c-62e9-489e-a7f6-d3f2ce2d05d9\") " pod="openstack/neutron-647b685f9-49zj6" Jan 26 11:37:10 crc kubenswrapper[4867]: I0126 11:37:10.148500 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/ebfa4a8c-62e9-489e-a7f6-d3f2ce2d05d9-ovndb-tls-certs\") pod \"neutron-647b685f9-49zj6\" (UID: \"ebfa4a8c-62e9-489e-a7f6-d3f2ce2d05d9\") " pod="openstack/neutron-647b685f9-49zj6" Jan 26 11:37:10 crc kubenswrapper[4867]: I0126 11:37:10.228474 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-647b685f9-49zj6" Jan 26 11:37:10 crc kubenswrapper[4867]: I0126 11:37:10.344617 4867 scope.go:117] "RemoveContainer" containerID="76755670085bb34b857a5fb8d3996fb4dc3213326c01652f4fbe1a3f441d6066" Jan 26 11:37:10 crc kubenswrapper[4867]: I0126 11:37:10.441552 4867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-sync-2kgmw" Jan 26 11:37:10 crc kubenswrapper[4867]: I0126 11:37:10.519683 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-2kgmw" event={"ID":"d28fe2ce-f40e-4f37-9d27-57d14376fc5d","Type":"ContainerDied","Data":"4662d7de3602cec63da0cc992541f679d5537213f3fd5d5269e05c6727c601c2"} Jan 26 11:37:10 crc kubenswrapper[4867]: I0126 11:37:10.519726 4867 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4662d7de3602cec63da0cc992541f679d5537213f3fd5d5269e05c6727c601c2" Jan 26 11:37:10 crc kubenswrapper[4867]: I0126 11:37:10.519781 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-2kgmw" Jan 26 11:37:10 crc kubenswrapper[4867]: I0126 11:37:10.531199 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d28fe2ce-f40e-4f37-9d27-57d14376fc5d-scripts\") pod \"d28fe2ce-f40e-4f37-9d27-57d14376fc5d\" (UID: \"d28fe2ce-f40e-4f37-9d27-57d14376fc5d\") " Jan 26 11:37:10 crc kubenswrapper[4867]: I0126 11:37:10.531309 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5jqsc\" (UniqueName: \"kubernetes.io/projected/d28fe2ce-f40e-4f37-9d27-57d14376fc5d-kube-api-access-5jqsc\") pod \"d28fe2ce-f40e-4f37-9d27-57d14376fc5d\" (UID: \"d28fe2ce-f40e-4f37-9d27-57d14376fc5d\") " Jan 26 11:37:10 crc kubenswrapper[4867]: I0126 11:37:10.531360 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d28fe2ce-f40e-4f37-9d27-57d14376fc5d-config-data\") pod \"d28fe2ce-f40e-4f37-9d27-57d14376fc5d\" (UID: \"d28fe2ce-f40e-4f37-9d27-57d14376fc5d\") " Jan 26 11:37:10 crc kubenswrapper[4867]: I0126 11:37:10.531378 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: 
\"kubernetes.io/host-path/d28fe2ce-f40e-4f37-9d27-57d14376fc5d-etc-machine-id\") pod \"d28fe2ce-f40e-4f37-9d27-57d14376fc5d\" (UID: \"d28fe2ce-f40e-4f37-9d27-57d14376fc5d\") " Jan 26 11:37:10 crc kubenswrapper[4867]: I0126 11:37:10.531484 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d28fe2ce-f40e-4f37-9d27-57d14376fc5d-combined-ca-bundle\") pod \"d28fe2ce-f40e-4f37-9d27-57d14376fc5d\" (UID: \"d28fe2ce-f40e-4f37-9d27-57d14376fc5d\") " Jan 26 11:37:10 crc kubenswrapper[4867]: I0126 11:37:10.532284 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d28fe2ce-f40e-4f37-9d27-57d14376fc5d-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "d28fe2ce-f40e-4f37-9d27-57d14376fc5d" (UID: "d28fe2ce-f40e-4f37-9d27-57d14376fc5d"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 11:37:10 crc kubenswrapper[4867]: I0126 11:37:10.531522 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/d28fe2ce-f40e-4f37-9d27-57d14376fc5d-db-sync-config-data\") pod \"d28fe2ce-f40e-4f37-9d27-57d14376fc5d\" (UID: \"d28fe2ce-f40e-4f37-9d27-57d14376fc5d\") " Jan 26 11:37:10 crc kubenswrapper[4867]: I0126 11:37:10.533019 4867 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/d28fe2ce-f40e-4f37-9d27-57d14376fc5d-etc-machine-id\") on node \"crc\" DevicePath \"\"" Jan 26 11:37:10 crc kubenswrapper[4867]: I0126 11:37:10.537407 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d28fe2ce-f40e-4f37-9d27-57d14376fc5d-scripts" (OuterVolumeSpecName: "scripts") pod "d28fe2ce-f40e-4f37-9d27-57d14376fc5d" (UID: "d28fe2ce-f40e-4f37-9d27-57d14376fc5d"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 11:37:10 crc kubenswrapper[4867]: I0126 11:37:10.540524 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d28fe2ce-f40e-4f37-9d27-57d14376fc5d-kube-api-access-5jqsc" (OuterVolumeSpecName: "kube-api-access-5jqsc") pod "d28fe2ce-f40e-4f37-9d27-57d14376fc5d" (UID: "d28fe2ce-f40e-4f37-9d27-57d14376fc5d"). InnerVolumeSpecName "kube-api-access-5jqsc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 11:37:10 crc kubenswrapper[4867]: I0126 11:37:10.544254 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d28fe2ce-f40e-4f37-9d27-57d14376fc5d-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "d28fe2ce-f40e-4f37-9d27-57d14376fc5d" (UID: "d28fe2ce-f40e-4f37-9d27-57d14376fc5d"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 11:37:10 crc kubenswrapper[4867]: I0126 11:37:10.581960 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d28fe2ce-f40e-4f37-9d27-57d14376fc5d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d28fe2ce-f40e-4f37-9d27-57d14376fc5d" (UID: "d28fe2ce-f40e-4f37-9d27-57d14376fc5d"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 11:37:10 crc kubenswrapper[4867]: I0126 11:37:10.634520 4867 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d28fe2ce-f40e-4f37-9d27-57d14376fc5d-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 11:37:10 crc kubenswrapper[4867]: I0126 11:37:10.634553 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5jqsc\" (UniqueName: \"kubernetes.io/projected/d28fe2ce-f40e-4f37-9d27-57d14376fc5d-kube-api-access-5jqsc\") on node \"crc\" DevicePath \"\"" Jan 26 11:37:10 crc kubenswrapper[4867]: I0126 11:37:10.634569 4867 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d28fe2ce-f40e-4f37-9d27-57d14376fc5d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 11:37:10 crc kubenswrapper[4867]: I0126 11:37:10.634581 4867 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/d28fe2ce-f40e-4f37-9d27-57d14376fc5d-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 11:37:10 crc kubenswrapper[4867]: I0126 11:37:10.650024 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d28fe2ce-f40e-4f37-9d27-57d14376fc5d-config-data" (OuterVolumeSpecName: "config-data") pod "d28fe2ce-f40e-4f37-9d27-57d14376fc5d" (UID: "d28fe2ce-f40e-4f37-9d27-57d14376fc5d"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 11:37:10 crc kubenswrapper[4867]: I0126 11:37:10.742457 4867 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d28fe2ce-f40e-4f37-9d27-57d14376fc5d-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 11:37:10 crc kubenswrapper[4867]: I0126 11:37:10.921341 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Jan 26 11:37:10 crc kubenswrapper[4867]: E0126 11:37:10.922816 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d28fe2ce-f40e-4f37-9d27-57d14376fc5d" containerName="cinder-db-sync" Jan 26 11:37:10 crc kubenswrapper[4867]: I0126 11:37:10.922843 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="d28fe2ce-f40e-4f37-9d27-57d14376fc5d" containerName="cinder-db-sync" Jan 26 11:37:10 crc kubenswrapper[4867]: I0126 11:37:10.923098 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="d28fe2ce-f40e-4f37-9d27-57d14376fc5d" containerName="cinder-db-sync" Jan 26 11:37:10 crc kubenswrapper[4867]: I0126 11:37:10.924079 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 26 11:37:10 crc kubenswrapper[4867]: I0126 11:37:10.933718 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Jan 26 11:37:10 crc kubenswrapper[4867]: I0126 11:37:10.933955 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-6csr9" Jan 26 11:37:10 crc kubenswrapper[4867]: I0126 11:37:10.934060 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Jan 26 11:37:10 crc kubenswrapper[4867]: I0126 11:37:10.934628 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Jan 26 11:37:10 crc kubenswrapper[4867]: I0126 11:37:10.962145 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/edc2a642-41e4-4162-aa08-1cecd958b32c-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"edc2a642-41e4-4162-aa08-1cecd958b32c\") " pod="openstack/cinder-scheduler-0" Jan 26 11:37:10 crc kubenswrapper[4867]: I0126 11:37:10.962194 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/edc2a642-41e4-4162-aa08-1cecd958b32c-scripts\") pod \"cinder-scheduler-0\" (UID: \"edc2a642-41e4-4162-aa08-1cecd958b32c\") " pod="openstack/cinder-scheduler-0" Jan 26 11:37:10 crc kubenswrapper[4867]: I0126 11:37:10.962236 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/edc2a642-41e4-4162-aa08-1cecd958b32c-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"edc2a642-41e4-4162-aa08-1cecd958b32c\") " pod="openstack/cinder-scheduler-0" Jan 26 11:37:10 crc kubenswrapper[4867]: I0126 11:37:10.962473 4867 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/edc2a642-41e4-4162-aa08-1cecd958b32c-config-data\") pod \"cinder-scheduler-0\" (UID: \"edc2a642-41e4-4162-aa08-1cecd958b32c\") " pod="openstack/cinder-scheduler-0" Jan 26 11:37:10 crc kubenswrapper[4867]: I0126 11:37:10.962530 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/edc2a642-41e4-4162-aa08-1cecd958b32c-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"edc2a642-41e4-4162-aa08-1cecd958b32c\") " pod="openstack/cinder-scheduler-0" Jan 26 11:37:10 crc kubenswrapper[4867]: I0126 11:37:10.962600 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z5kh4\" (UniqueName: \"kubernetes.io/projected/edc2a642-41e4-4162-aa08-1cecd958b32c-kube-api-access-z5kh4\") pod \"cinder-scheduler-0\" (UID: \"edc2a642-41e4-4162-aa08-1cecd958b32c\") " pod="openstack/cinder-scheduler-0" Jan 26 11:37:10 crc kubenswrapper[4867]: I0126 11:37:10.962711 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 26 11:37:10 crc kubenswrapper[4867]: I0126 11:37:10.991148 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-848cf88cfc-z76z5"] Jan 26 11:37:11 crc kubenswrapper[4867]: I0126 11:37:11.019674 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6578955fd5-klbvt"] Jan 26 11:37:11 crc kubenswrapper[4867]: I0126 11:37:11.021565 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6578955fd5-klbvt" Jan 26 11:37:11 crc kubenswrapper[4867]: I0126 11:37:11.036831 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6578955fd5-klbvt"] Jan 26 11:37:11 crc kubenswrapper[4867]: I0126 11:37:11.064113 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/edc2a642-41e4-4162-aa08-1cecd958b32c-config-data\") pod \"cinder-scheduler-0\" (UID: \"edc2a642-41e4-4162-aa08-1cecd958b32c\") " pod="openstack/cinder-scheduler-0" Jan 26 11:37:11 crc kubenswrapper[4867]: I0126 11:37:11.064159 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/edc2a642-41e4-4162-aa08-1cecd958b32c-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"edc2a642-41e4-4162-aa08-1cecd958b32c\") " pod="openstack/cinder-scheduler-0" Jan 26 11:37:11 crc kubenswrapper[4867]: I0126 11:37:11.064207 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z5kh4\" (UniqueName: \"kubernetes.io/projected/edc2a642-41e4-4162-aa08-1cecd958b32c-kube-api-access-z5kh4\") pod \"cinder-scheduler-0\" (UID: \"edc2a642-41e4-4162-aa08-1cecd958b32c\") " pod="openstack/cinder-scheduler-0" Jan 26 11:37:11 crc kubenswrapper[4867]: I0126 11:37:11.064266 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/edc2a642-41e4-4162-aa08-1cecd958b32c-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"edc2a642-41e4-4162-aa08-1cecd958b32c\") " pod="openstack/cinder-scheduler-0" Jan 26 11:37:11 crc kubenswrapper[4867]: I0126 11:37:11.064357 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/edc2a642-41e4-4162-aa08-1cecd958b32c-scripts\") pod \"cinder-scheduler-0\" (UID: 
\"edc2a642-41e4-4162-aa08-1cecd958b32c\") " pod="openstack/cinder-scheduler-0" Jan 26 11:37:11 crc kubenswrapper[4867]: I0126 11:37:11.064394 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/edc2a642-41e4-4162-aa08-1cecd958b32c-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"edc2a642-41e4-4162-aa08-1cecd958b32c\") " pod="openstack/cinder-scheduler-0" Jan 26 11:37:11 crc kubenswrapper[4867]: I0126 11:37:11.064560 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/edc2a642-41e4-4162-aa08-1cecd958b32c-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"edc2a642-41e4-4162-aa08-1cecd958b32c\") " pod="openstack/cinder-scheduler-0" Jan 26 11:37:11 crc kubenswrapper[4867]: I0126 11:37:11.068781 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/edc2a642-41e4-4162-aa08-1cecd958b32c-config-data\") pod \"cinder-scheduler-0\" (UID: \"edc2a642-41e4-4162-aa08-1cecd958b32c\") " pod="openstack/cinder-scheduler-0" Jan 26 11:37:11 crc kubenswrapper[4867]: I0126 11:37:11.113363 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/edc2a642-41e4-4162-aa08-1cecd958b32c-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"edc2a642-41e4-4162-aa08-1cecd958b32c\") " pod="openstack/cinder-scheduler-0" Jan 26 11:37:11 crc kubenswrapper[4867]: I0126 11:37:11.116823 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z5kh4\" (UniqueName: \"kubernetes.io/projected/edc2a642-41e4-4162-aa08-1cecd958b32c-kube-api-access-z5kh4\") pod \"cinder-scheduler-0\" (UID: \"edc2a642-41e4-4162-aa08-1cecd958b32c\") " pod="openstack/cinder-scheduler-0" Jan 26 11:37:11 crc kubenswrapper[4867]: I0126 11:37:11.118783 4867 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/edc2a642-41e4-4162-aa08-1cecd958b32c-scripts\") pod \"cinder-scheduler-0\" (UID: \"edc2a642-41e4-4162-aa08-1cecd958b32c\") " pod="openstack/cinder-scheduler-0" Jan 26 11:37:11 crc kubenswrapper[4867]: I0126 11:37:11.125155 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/edc2a642-41e4-4162-aa08-1cecd958b32c-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"edc2a642-41e4-4162-aa08-1cecd958b32c\") " pod="openstack/cinder-scheduler-0" Jan 26 11:37:11 crc kubenswrapper[4867]: I0126 11:37:11.170008 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j7cjg\" (UniqueName: \"kubernetes.io/projected/59401c77-eb6e-46f4-8b16-c57ac6f97f24-kube-api-access-j7cjg\") pod \"dnsmasq-dns-6578955fd5-klbvt\" (UID: \"59401c77-eb6e-46f4-8b16-c57ac6f97f24\") " pod="openstack/dnsmasq-dns-6578955fd5-klbvt" Jan 26 11:37:11 crc kubenswrapper[4867]: I0126 11:37:11.170083 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/59401c77-eb6e-46f4-8b16-c57ac6f97f24-ovsdbserver-nb\") pod \"dnsmasq-dns-6578955fd5-klbvt\" (UID: \"59401c77-eb6e-46f4-8b16-c57ac6f97f24\") " pod="openstack/dnsmasq-dns-6578955fd5-klbvt" Jan 26 11:37:11 crc kubenswrapper[4867]: I0126 11:37:11.170117 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/59401c77-eb6e-46f4-8b16-c57ac6f97f24-config\") pod \"dnsmasq-dns-6578955fd5-klbvt\" (UID: \"59401c77-eb6e-46f4-8b16-c57ac6f97f24\") " pod="openstack/dnsmasq-dns-6578955fd5-klbvt" Jan 26 11:37:11 crc kubenswrapper[4867]: I0126 11:37:11.170142 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/59401c77-eb6e-46f4-8b16-c57ac6f97f24-dns-swift-storage-0\") pod \"dnsmasq-dns-6578955fd5-klbvt\" (UID: \"59401c77-eb6e-46f4-8b16-c57ac6f97f24\") " pod="openstack/dnsmasq-dns-6578955fd5-klbvt" Jan 26 11:37:11 crc kubenswrapper[4867]: I0126 11:37:11.170174 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/59401c77-eb6e-46f4-8b16-c57ac6f97f24-ovsdbserver-sb\") pod \"dnsmasq-dns-6578955fd5-klbvt\" (UID: \"59401c77-eb6e-46f4-8b16-c57ac6f97f24\") " pod="openstack/dnsmasq-dns-6578955fd5-klbvt" Jan 26 11:37:11 crc kubenswrapper[4867]: I0126 11:37:11.170196 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/59401c77-eb6e-46f4-8b16-c57ac6f97f24-dns-svc\") pod \"dnsmasq-dns-6578955fd5-klbvt\" (UID: \"59401c77-eb6e-46f4-8b16-c57ac6f97f24\") " pod="openstack/dnsmasq-dns-6578955fd5-klbvt" Jan 26 11:37:11 crc kubenswrapper[4867]: I0126 11:37:11.210962 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Jan 26 11:37:11 crc kubenswrapper[4867]: I0126 11:37:11.212798 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Jan 26 11:37:11 crc kubenswrapper[4867]: I0126 11:37:11.217504 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Jan 26 11:37:11 crc kubenswrapper[4867]: I0126 11:37:11.257354 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Jan 26 11:37:11 crc kubenswrapper[4867]: I0126 11:37:11.272039 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j7cjg\" (UniqueName: \"kubernetes.io/projected/59401c77-eb6e-46f4-8b16-c57ac6f97f24-kube-api-access-j7cjg\") pod \"dnsmasq-dns-6578955fd5-klbvt\" (UID: \"59401c77-eb6e-46f4-8b16-c57ac6f97f24\") " pod="openstack/dnsmasq-dns-6578955fd5-klbvt" Jan 26 11:37:11 crc kubenswrapper[4867]: I0126 11:37:11.272128 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/59401c77-eb6e-46f4-8b16-c57ac6f97f24-ovsdbserver-nb\") pod \"dnsmasq-dns-6578955fd5-klbvt\" (UID: \"59401c77-eb6e-46f4-8b16-c57ac6f97f24\") " pod="openstack/dnsmasq-dns-6578955fd5-klbvt" Jan 26 11:37:11 crc kubenswrapper[4867]: I0126 11:37:11.272165 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/59401c77-eb6e-46f4-8b16-c57ac6f97f24-config\") pod \"dnsmasq-dns-6578955fd5-klbvt\" (UID: \"59401c77-eb6e-46f4-8b16-c57ac6f97f24\") " pod="openstack/dnsmasq-dns-6578955fd5-klbvt" Jan 26 11:37:11 crc kubenswrapper[4867]: I0126 11:37:11.272209 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/59401c77-eb6e-46f4-8b16-c57ac6f97f24-dns-swift-storage-0\") pod \"dnsmasq-dns-6578955fd5-klbvt\" (UID: \"59401c77-eb6e-46f4-8b16-c57ac6f97f24\") " pod="openstack/dnsmasq-dns-6578955fd5-klbvt" Jan 26 11:37:11 crc kubenswrapper[4867]: I0126 11:37:11.275657 
4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/59401c77-eb6e-46f4-8b16-c57ac6f97f24-ovsdbserver-sb\") pod \"dnsmasq-dns-6578955fd5-klbvt\" (UID: \"59401c77-eb6e-46f4-8b16-c57ac6f97f24\") " pod="openstack/dnsmasq-dns-6578955fd5-klbvt" Jan 26 11:37:11 crc kubenswrapper[4867]: I0126 11:37:11.275694 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/59401c77-eb6e-46f4-8b16-c57ac6f97f24-dns-svc\") pod \"dnsmasq-dns-6578955fd5-klbvt\" (UID: \"59401c77-eb6e-46f4-8b16-c57ac6f97f24\") " pod="openstack/dnsmasq-dns-6578955fd5-klbvt" Jan 26 11:37:11 crc kubenswrapper[4867]: I0126 11:37:11.277151 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/59401c77-eb6e-46f4-8b16-c57ac6f97f24-dns-svc\") pod \"dnsmasq-dns-6578955fd5-klbvt\" (UID: \"59401c77-eb6e-46f4-8b16-c57ac6f97f24\") " pod="openstack/dnsmasq-dns-6578955fd5-klbvt" Jan 26 11:37:11 crc kubenswrapper[4867]: I0126 11:37:11.278008 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/59401c77-eb6e-46f4-8b16-c57ac6f97f24-ovsdbserver-nb\") pod \"dnsmasq-dns-6578955fd5-klbvt\" (UID: \"59401c77-eb6e-46f4-8b16-c57ac6f97f24\") " pod="openstack/dnsmasq-dns-6578955fd5-klbvt" Jan 26 11:37:11 crc kubenswrapper[4867]: I0126 11:37:11.280392 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/59401c77-eb6e-46f4-8b16-c57ac6f97f24-dns-swift-storage-0\") pod \"dnsmasq-dns-6578955fd5-klbvt\" (UID: \"59401c77-eb6e-46f4-8b16-c57ac6f97f24\") " pod="openstack/dnsmasq-dns-6578955fd5-klbvt" Jan 26 11:37:11 crc kubenswrapper[4867]: I0126 11:37:11.280821 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" 
(UniqueName: \"kubernetes.io/configmap/59401c77-eb6e-46f4-8b16-c57ac6f97f24-ovsdbserver-sb\") pod \"dnsmasq-dns-6578955fd5-klbvt\" (UID: \"59401c77-eb6e-46f4-8b16-c57ac6f97f24\") " pod="openstack/dnsmasq-dns-6578955fd5-klbvt" Jan 26 11:37:11 crc kubenswrapper[4867]: I0126 11:37:11.280611 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/59401c77-eb6e-46f4-8b16-c57ac6f97f24-config\") pod \"dnsmasq-dns-6578955fd5-klbvt\" (UID: \"59401c77-eb6e-46f4-8b16-c57ac6f97f24\") " pod="openstack/dnsmasq-dns-6578955fd5-klbvt" Jan 26 11:37:11 crc kubenswrapper[4867]: I0126 11:37:11.285079 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 26 11:37:11 crc kubenswrapper[4867]: I0126 11:37:11.331499 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j7cjg\" (UniqueName: \"kubernetes.io/projected/59401c77-eb6e-46f4-8b16-c57ac6f97f24-kube-api-access-j7cjg\") pod \"dnsmasq-dns-6578955fd5-klbvt\" (UID: \"59401c77-eb6e-46f4-8b16-c57ac6f97f24\") " pod="openstack/dnsmasq-dns-6578955fd5-klbvt" Jan 26 11:37:11 crc kubenswrapper[4867]: I0126 11:37:11.341354 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-647b685f9-49zj6"] Jan 26 11:37:11 crc kubenswrapper[4867]: I0126 11:37:11.388025 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/90d02b67-bed1-4363-b9a0-e89a8733149b-config-data\") pod \"cinder-api-0\" (UID: \"90d02b67-bed1-4363-b9a0-e89a8733149b\") " pod="openstack/cinder-api-0" Jan 26 11:37:11 crc kubenswrapper[4867]: I0126 11:37:11.388139 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-825g9\" (UniqueName: \"kubernetes.io/projected/90d02b67-bed1-4363-b9a0-e89a8733149b-kube-api-access-825g9\") pod \"cinder-api-0\" 
(UID: \"90d02b67-bed1-4363-b9a0-e89a8733149b\") " pod="openstack/cinder-api-0" Jan 26 11:37:11 crc kubenswrapper[4867]: I0126 11:37:11.388162 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/90d02b67-bed1-4363-b9a0-e89a8733149b-config-data-custom\") pod \"cinder-api-0\" (UID: \"90d02b67-bed1-4363-b9a0-e89a8733149b\") " pod="openstack/cinder-api-0" Jan 26 11:37:11 crc kubenswrapper[4867]: I0126 11:37:11.388244 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/90d02b67-bed1-4363-b9a0-e89a8733149b-etc-machine-id\") pod \"cinder-api-0\" (UID: \"90d02b67-bed1-4363-b9a0-e89a8733149b\") " pod="openstack/cinder-api-0" Jan 26 11:37:11 crc kubenswrapper[4867]: I0126 11:37:11.388270 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/90d02b67-bed1-4363-b9a0-e89a8733149b-scripts\") pod \"cinder-api-0\" (UID: \"90d02b67-bed1-4363-b9a0-e89a8733149b\") " pod="openstack/cinder-api-0" Jan 26 11:37:11 crc kubenswrapper[4867]: I0126 11:37:11.388399 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/90d02b67-bed1-4363-b9a0-e89a8733149b-logs\") pod \"cinder-api-0\" (UID: \"90d02b67-bed1-4363-b9a0-e89a8733149b\") " pod="openstack/cinder-api-0" Jan 26 11:37:11 crc kubenswrapper[4867]: I0126 11:37:11.388422 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/90d02b67-bed1-4363-b9a0-e89a8733149b-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"90d02b67-bed1-4363-b9a0-e89a8733149b\") " pod="openstack/cinder-api-0" Jan 26 11:37:11 crc kubenswrapper[4867]: I0126 11:37:11.409813 4867 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6578955fd5-klbvt" Jan 26 11:37:11 crc kubenswrapper[4867]: I0126 11:37:11.489845 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-825g9\" (UniqueName: \"kubernetes.io/projected/90d02b67-bed1-4363-b9a0-e89a8733149b-kube-api-access-825g9\") pod \"cinder-api-0\" (UID: \"90d02b67-bed1-4363-b9a0-e89a8733149b\") " pod="openstack/cinder-api-0" Jan 26 11:37:11 crc kubenswrapper[4867]: I0126 11:37:11.489900 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/90d02b67-bed1-4363-b9a0-e89a8733149b-config-data-custom\") pod \"cinder-api-0\" (UID: \"90d02b67-bed1-4363-b9a0-e89a8733149b\") " pod="openstack/cinder-api-0" Jan 26 11:37:11 crc kubenswrapper[4867]: I0126 11:37:11.489934 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/90d02b67-bed1-4363-b9a0-e89a8733149b-etc-machine-id\") pod \"cinder-api-0\" (UID: \"90d02b67-bed1-4363-b9a0-e89a8733149b\") " pod="openstack/cinder-api-0" Jan 26 11:37:11 crc kubenswrapper[4867]: I0126 11:37:11.489971 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/90d02b67-bed1-4363-b9a0-e89a8733149b-scripts\") pod \"cinder-api-0\" (UID: \"90d02b67-bed1-4363-b9a0-e89a8733149b\") " pod="openstack/cinder-api-0" Jan 26 11:37:11 crc kubenswrapper[4867]: I0126 11:37:11.490087 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/90d02b67-bed1-4363-b9a0-e89a8733149b-logs\") pod \"cinder-api-0\" (UID: \"90d02b67-bed1-4363-b9a0-e89a8733149b\") " pod="openstack/cinder-api-0" Jan 26 11:37:11 crc kubenswrapper[4867]: I0126 11:37:11.490122 4867 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/90d02b67-bed1-4363-b9a0-e89a8733149b-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"90d02b67-bed1-4363-b9a0-e89a8733149b\") " pod="openstack/cinder-api-0" Jan 26 11:37:11 crc kubenswrapper[4867]: I0126 11:37:11.490165 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/90d02b67-bed1-4363-b9a0-e89a8733149b-config-data\") pod \"cinder-api-0\" (UID: \"90d02b67-bed1-4363-b9a0-e89a8733149b\") " pod="openstack/cinder-api-0" Jan 26 11:37:11 crc kubenswrapper[4867]: I0126 11:37:11.490624 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/90d02b67-bed1-4363-b9a0-e89a8733149b-etc-machine-id\") pod \"cinder-api-0\" (UID: \"90d02b67-bed1-4363-b9a0-e89a8733149b\") " pod="openstack/cinder-api-0" Jan 26 11:37:11 crc kubenswrapper[4867]: I0126 11:37:11.495782 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/90d02b67-bed1-4363-b9a0-e89a8733149b-config-data-custom\") pod \"cinder-api-0\" (UID: \"90d02b67-bed1-4363-b9a0-e89a8733149b\") " pod="openstack/cinder-api-0" Jan 26 11:37:11 crc kubenswrapper[4867]: I0126 11:37:11.496970 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/90d02b67-bed1-4363-b9a0-e89a8733149b-scripts\") pod \"cinder-api-0\" (UID: \"90d02b67-bed1-4363-b9a0-e89a8733149b\") " pod="openstack/cinder-api-0" Jan 26 11:37:11 crc kubenswrapper[4867]: I0126 11:37:11.497597 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/90d02b67-bed1-4363-b9a0-e89a8733149b-logs\") pod \"cinder-api-0\" (UID: \"90d02b67-bed1-4363-b9a0-e89a8733149b\") " pod="openstack/cinder-api-0" Jan 26 11:37:11 crc 
kubenswrapper[4867]: I0126 11:37:11.512245 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/90d02b67-bed1-4363-b9a0-e89a8733149b-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"90d02b67-bed1-4363-b9a0-e89a8733149b\") " pod="openstack/cinder-api-0" Jan 26 11:37:11 crc kubenswrapper[4867]: I0126 11:37:11.513351 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/90d02b67-bed1-4363-b9a0-e89a8733149b-config-data\") pod \"cinder-api-0\" (UID: \"90d02b67-bed1-4363-b9a0-e89a8733149b\") " pod="openstack/cinder-api-0" Jan 26 11:37:11 crc kubenswrapper[4867]: I0126 11:37:11.528675 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-825g9\" (UniqueName: \"kubernetes.io/projected/90d02b67-bed1-4363-b9a0-e89a8733149b-kube-api-access-825g9\") pod \"cinder-api-0\" (UID: \"90d02b67-bed1-4363-b9a0-e89a8733149b\") " pod="openstack/cinder-api-0" Jan 26 11:37:11 crc kubenswrapper[4867]: I0126 11:37:11.544684 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Jan 26 11:37:11 crc kubenswrapper[4867]: I0126 11:37:11.565035 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7bc467f664-6zfb4" event={"ID":"42c3a4bd-76c9-4a0b-b252-775ce7ebc2aa","Type":"ContainerStarted","Data":"4f6c92f19483e300995347185d9e8324fad1c1ce13368cce52ea9815196b3c31"} Jan 26 11:37:11 crc kubenswrapper[4867]: I0126 11:37:11.568664 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-647b685f9-49zj6" event={"ID":"ebfa4a8c-62e9-489e-a7f6-d3f2ce2d05d9","Type":"ContainerStarted","Data":"d9cbe4652fa9fce6e87a1ac81e2f451cd8ee015c14f8ed077d68f29d15aaba63"} Jan 26 11:37:11 crc kubenswrapper[4867]: I0126 11:37:11.596194 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-g6cth" event={"ID":"115cad9f-057f-4e63-b408-8fa7a358a191","Type":"ContainerStarted","Data":"6bb9fd5acba776380a6fa3e3d00855cfc048bc467ccbd9a88cd7ca74eccbe67f"} Jan 26 11:37:11 crc kubenswrapper[4867]: I0126 11:37:11.624565 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-6db8644655-m8sn6" event={"ID":"f568d082-7794-4f60-b78e-bff0b6b6356f","Type":"ContainerStarted","Data":"6aeba76985893656bdc03a439ecf082c0bfd2262d05533914563a56e058e564e"} Jan 26 11:37:11 crc kubenswrapper[4867]: I0126 11:37:11.797399 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-7f76cb8bb6-g4zck"] Jan 26 11:37:11 crc kubenswrapper[4867]: I0126 11:37:11.803445 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-7f76cb8bb6-g4zck" Jan 26 11:37:11 crc kubenswrapper[4867]: I0126 11:37:11.811638 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-internal-svc" Jan 26 11:37:11 crc kubenswrapper[4867]: I0126 11:37:11.811826 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-public-svc" Jan 26 11:37:11 crc kubenswrapper[4867]: I0126 11:37:11.820320 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-7f76cb8bb6-g4zck"] Jan 26 11:37:11 crc kubenswrapper[4867]: I0126 11:37:11.884291 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 26 11:37:11 crc kubenswrapper[4867]: W0126 11:37:11.985958 4867 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podedc2a642_41e4_4162_aa08_1cecd958b32c.slice/crio-c05928559a0b52dea96c1efd15bc47781c1ac77e81636bc416d302ecf14afc35 WatchSource:0}: Error finding container c05928559a0b52dea96c1efd15bc47781c1ac77e81636bc416d302ecf14afc35: Status 404 returned error can't find the container with id c05928559a0b52dea96c1efd15bc47781c1ac77e81636bc416d302ecf14afc35 Jan 26 11:37:12 crc kubenswrapper[4867]: I0126 11:37:12.001232 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/340554a1-e56a-4b1b-aff3-d0c0e1ac210d-config-data-custom\") pod \"barbican-api-7f76cb8bb6-g4zck\" (UID: \"340554a1-e56a-4b1b-aff3-d0c0e1ac210d\") " pod="openstack/barbican-api-7f76cb8bb6-g4zck" Jan 26 11:37:12 crc kubenswrapper[4867]: I0126 11:37:12.001274 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/340554a1-e56a-4b1b-aff3-d0c0e1ac210d-public-tls-certs\") pod 
\"barbican-api-7f76cb8bb6-g4zck\" (UID: \"340554a1-e56a-4b1b-aff3-d0c0e1ac210d\") " pod="openstack/barbican-api-7f76cb8bb6-g4zck" Jan 26 11:37:12 crc kubenswrapper[4867]: I0126 11:37:12.001323 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/340554a1-e56a-4b1b-aff3-d0c0e1ac210d-config-data\") pod \"barbican-api-7f76cb8bb6-g4zck\" (UID: \"340554a1-e56a-4b1b-aff3-d0c0e1ac210d\") " pod="openstack/barbican-api-7f76cb8bb6-g4zck" Jan 26 11:37:12 crc kubenswrapper[4867]: I0126 11:37:12.001348 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/340554a1-e56a-4b1b-aff3-d0c0e1ac210d-combined-ca-bundle\") pod \"barbican-api-7f76cb8bb6-g4zck\" (UID: \"340554a1-e56a-4b1b-aff3-d0c0e1ac210d\") " pod="openstack/barbican-api-7f76cb8bb6-g4zck" Jan 26 11:37:12 crc kubenswrapper[4867]: I0126 11:37:12.001376 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/340554a1-e56a-4b1b-aff3-d0c0e1ac210d-logs\") pod \"barbican-api-7f76cb8bb6-g4zck\" (UID: \"340554a1-e56a-4b1b-aff3-d0c0e1ac210d\") " pod="openstack/barbican-api-7f76cb8bb6-g4zck" Jan 26 11:37:12 crc kubenswrapper[4867]: I0126 11:37:12.001418 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/340554a1-e56a-4b1b-aff3-d0c0e1ac210d-internal-tls-certs\") pod \"barbican-api-7f76cb8bb6-g4zck\" (UID: \"340554a1-e56a-4b1b-aff3-d0c0e1ac210d\") " pod="openstack/barbican-api-7f76cb8bb6-g4zck" Jan 26 11:37:12 crc kubenswrapper[4867]: I0126 11:37:12.003869 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gqmlp\" (UniqueName: 
\"kubernetes.io/projected/340554a1-e56a-4b1b-aff3-d0c0e1ac210d-kube-api-access-gqmlp\") pod \"barbican-api-7f76cb8bb6-g4zck\" (UID: \"340554a1-e56a-4b1b-aff3-d0c0e1ac210d\") " pod="openstack/barbican-api-7f76cb8bb6-g4zck" Jan 26 11:37:12 crc kubenswrapper[4867]: I0126 11:37:12.038837 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6578955fd5-klbvt"] Jan 26 11:37:12 crc kubenswrapper[4867]: I0126 11:37:12.105172 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/340554a1-e56a-4b1b-aff3-d0c0e1ac210d-config-data-custom\") pod \"barbican-api-7f76cb8bb6-g4zck\" (UID: \"340554a1-e56a-4b1b-aff3-d0c0e1ac210d\") " pod="openstack/barbican-api-7f76cb8bb6-g4zck" Jan 26 11:37:12 crc kubenswrapper[4867]: I0126 11:37:12.105229 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/340554a1-e56a-4b1b-aff3-d0c0e1ac210d-public-tls-certs\") pod \"barbican-api-7f76cb8bb6-g4zck\" (UID: \"340554a1-e56a-4b1b-aff3-d0c0e1ac210d\") " pod="openstack/barbican-api-7f76cb8bb6-g4zck" Jan 26 11:37:12 crc kubenswrapper[4867]: I0126 11:37:12.105276 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/340554a1-e56a-4b1b-aff3-d0c0e1ac210d-config-data\") pod \"barbican-api-7f76cb8bb6-g4zck\" (UID: \"340554a1-e56a-4b1b-aff3-d0c0e1ac210d\") " pod="openstack/barbican-api-7f76cb8bb6-g4zck" Jan 26 11:37:12 crc kubenswrapper[4867]: I0126 11:37:12.105300 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/340554a1-e56a-4b1b-aff3-d0c0e1ac210d-combined-ca-bundle\") pod \"barbican-api-7f76cb8bb6-g4zck\" (UID: \"340554a1-e56a-4b1b-aff3-d0c0e1ac210d\") " pod="openstack/barbican-api-7f76cb8bb6-g4zck" Jan 26 11:37:12 crc kubenswrapper[4867]: I0126 
11:37:12.105329 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/340554a1-e56a-4b1b-aff3-d0c0e1ac210d-logs\") pod \"barbican-api-7f76cb8bb6-g4zck\" (UID: \"340554a1-e56a-4b1b-aff3-d0c0e1ac210d\") " pod="openstack/barbican-api-7f76cb8bb6-g4zck" Jan 26 11:37:12 crc kubenswrapper[4867]: I0126 11:37:12.105375 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/340554a1-e56a-4b1b-aff3-d0c0e1ac210d-internal-tls-certs\") pod \"barbican-api-7f76cb8bb6-g4zck\" (UID: \"340554a1-e56a-4b1b-aff3-d0c0e1ac210d\") " pod="openstack/barbican-api-7f76cb8bb6-g4zck" Jan 26 11:37:12 crc kubenswrapper[4867]: I0126 11:37:12.105408 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gqmlp\" (UniqueName: \"kubernetes.io/projected/340554a1-e56a-4b1b-aff3-d0c0e1ac210d-kube-api-access-gqmlp\") pod \"barbican-api-7f76cb8bb6-g4zck\" (UID: \"340554a1-e56a-4b1b-aff3-d0c0e1ac210d\") " pod="openstack/barbican-api-7f76cb8bb6-g4zck" Jan 26 11:37:12 crc kubenswrapper[4867]: I0126 11:37:12.113419 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/340554a1-e56a-4b1b-aff3-d0c0e1ac210d-logs\") pod \"barbican-api-7f76cb8bb6-g4zck\" (UID: \"340554a1-e56a-4b1b-aff3-d0c0e1ac210d\") " pod="openstack/barbican-api-7f76cb8bb6-g4zck" Jan 26 11:37:12 crc kubenswrapper[4867]: I0126 11:37:12.118627 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/340554a1-e56a-4b1b-aff3-d0c0e1ac210d-config-data-custom\") pod \"barbican-api-7f76cb8bb6-g4zck\" (UID: \"340554a1-e56a-4b1b-aff3-d0c0e1ac210d\") " pod="openstack/barbican-api-7f76cb8bb6-g4zck" Jan 26 11:37:12 crc kubenswrapper[4867]: I0126 11:37:12.119520 4867 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/340554a1-e56a-4b1b-aff3-d0c0e1ac210d-combined-ca-bundle\") pod \"barbican-api-7f76cb8bb6-g4zck\" (UID: \"340554a1-e56a-4b1b-aff3-d0c0e1ac210d\") " pod="openstack/barbican-api-7f76cb8bb6-g4zck" Jan 26 11:37:12 crc kubenswrapper[4867]: I0126 11:37:12.122548 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/340554a1-e56a-4b1b-aff3-d0c0e1ac210d-internal-tls-certs\") pod \"barbican-api-7f76cb8bb6-g4zck\" (UID: \"340554a1-e56a-4b1b-aff3-d0c0e1ac210d\") " pod="openstack/barbican-api-7f76cb8bb6-g4zck" Jan 26 11:37:12 crc kubenswrapper[4867]: I0126 11:37:12.123058 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/340554a1-e56a-4b1b-aff3-d0c0e1ac210d-public-tls-certs\") pod \"barbican-api-7f76cb8bb6-g4zck\" (UID: \"340554a1-e56a-4b1b-aff3-d0c0e1ac210d\") " pod="openstack/barbican-api-7f76cb8bb6-g4zck" Jan 26 11:37:12 crc kubenswrapper[4867]: I0126 11:37:12.128056 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/340554a1-e56a-4b1b-aff3-d0c0e1ac210d-config-data\") pod \"barbican-api-7f76cb8bb6-g4zck\" (UID: \"340554a1-e56a-4b1b-aff3-d0c0e1ac210d\") " pod="openstack/barbican-api-7f76cb8bb6-g4zck" Jan 26 11:37:12 crc kubenswrapper[4867]: I0126 11:37:12.158169 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gqmlp\" (UniqueName: \"kubernetes.io/projected/340554a1-e56a-4b1b-aff3-d0c0e1ac210d-kube-api-access-gqmlp\") pod \"barbican-api-7f76cb8bb6-g4zck\" (UID: \"340554a1-e56a-4b1b-aff3-d0c0e1ac210d\") " pod="openstack/barbican-api-7f76cb8bb6-g4zck" Jan 26 11:37:12 crc kubenswrapper[4867]: I0126 11:37:12.207475 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-7f76cb8bb6-g4zck" Jan 26 11:37:12 crc kubenswrapper[4867]: I0126 11:37:12.284068 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Jan 26 11:37:12 crc kubenswrapper[4867]: W0126 11:37:12.330358 4867 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod90d02b67_bed1_4363_b9a0_e89a8733149b.slice/crio-ab462e1f969463bdefcbfc9781df2637c7bb65117875fa84254f3362fdd22c0c WatchSource:0}: Error finding container ab462e1f969463bdefcbfc9781df2637c7bb65117875fa84254f3362fdd22c0c: Status 404 returned error can't find the container with id ab462e1f969463bdefcbfc9781df2637c7bb65117875fa84254f3362fdd22c0c Jan 26 11:37:12 crc kubenswrapper[4867]: I0126 11:37:12.692622 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"edc2a642-41e4-4162-aa08-1cecd958b32c","Type":"ContainerStarted","Data":"c05928559a0b52dea96c1efd15bc47781c1ac77e81636bc416d302ecf14afc35"} Jan 26 11:37:12 crc kubenswrapper[4867]: I0126 11:37:12.729518 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b2643e95-59cb-42a2-982e-96a7d732e5e4","Type":"ContainerStarted","Data":"4e6b9eacbbc4c8c4c89d557b0b32bc6cd5b66fe33a8f7cb7cc1d83ff1d513941"} Jan 26 11:37:12 crc kubenswrapper[4867]: I0126 11:37:12.730436 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 26 11:37:12 crc kubenswrapper[4867]: I0126 11:37:12.732485 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-fd45cdb8b-tgbqw" event={"ID":"8434703b-0a5f-49f0-8877-2048d276f8ff","Type":"ContainerStarted","Data":"ce96f9c15a75e5b8d42b5d0560717b6c9a212fd14e65bc84d4d8478bdfaca849"} Jan 26 11:37:12 crc kubenswrapper[4867]: I0126 11:37:12.733213 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack/barbican-api-fd45cdb8b-tgbqw" Jan 26 11:37:12 crc kubenswrapper[4867]: I0126 11:37:12.733251 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-fd45cdb8b-tgbqw" Jan 26 11:37:12 crc kubenswrapper[4867]: I0126 11:37:12.734046 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6578955fd5-klbvt" event={"ID":"59401c77-eb6e-46f4-8b16-c57ac6f97f24","Type":"ContainerStarted","Data":"3dace83017e3661f381209d711b2739327ee0fb1738f262b973e9584f5cbe81b"} Jan 26 11:37:12 crc kubenswrapper[4867]: I0126 11:37:12.740450 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"90d02b67-bed1-4363-b9a0-e89a8733149b","Type":"ContainerStarted","Data":"ab462e1f969463bdefcbfc9781df2637c7bb65117875fa84254f3362fdd22c0c"} Jan 26 11:37:12 crc kubenswrapper[4867]: I0126 11:37:12.771551 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-848cf88cfc-z76z5" event={"ID":"595e878d-5361-4fef-81d8-ca3d00e79685","Type":"ContainerStarted","Data":"d2748f01d0a0ec9f57d511d4d81934e4ccde552b89edd778d0b9e536457f96e8"} Jan 26 11:37:12 crc kubenswrapper[4867]: I0126 11:37:12.771719 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-848cf88cfc-z76z5" podUID="595e878d-5361-4fef-81d8-ca3d00e79685" containerName="dnsmasq-dns" containerID="cri-o://d2748f01d0a0ec9f57d511d4d81934e4ccde552b89edd778d0b9e536457f96e8" gracePeriod=10 Jan 26 11:37:12 crc kubenswrapper[4867]: I0126 11:37:12.772001 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-848cf88cfc-z76z5" Jan 26 11:37:12 crc kubenswrapper[4867]: I0126 11:37:12.788095 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=3.222376014 podStartE2EDuration="10.788077843s" podCreationTimestamp="2026-01-26 11:37:02 +0000 UTC" 
firstStartedPulling="2026-01-26 11:37:03.233851468 +0000 UTC m=+1172.932426378" lastFinishedPulling="2026-01-26 11:37:10.799553297 +0000 UTC m=+1180.498128207" observedRunningTime="2026-01-26 11:37:12.78273096 +0000 UTC m=+1182.481305860" watchObservedRunningTime="2026-01-26 11:37:12.788077843 +0000 UTC m=+1182.486652753" Jan 26 11:37:12 crc kubenswrapper[4867]: I0126 11:37:12.824621 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-6db8644655-m8sn6" event={"ID":"f568d082-7794-4f60-b78e-bff0b6b6356f","Type":"ContainerStarted","Data":"5b1dd3f4712623882262461ccea1dca4892925acb02750a16bfd6b947193f265"} Jan 26 11:37:12 crc kubenswrapper[4867]: I0126 11:37:12.855085 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7bc467f664-6zfb4" event={"ID":"42c3a4bd-76c9-4a0b-b252-775ce7ebc2aa","Type":"ContainerStarted","Data":"e845a45e251d9a54a725547beeb75eb2ecc8c510a9f1dd145012f4ff427bd21b"} Jan 26 11:37:12 crc kubenswrapper[4867]: I0126 11:37:12.856263 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-7bc467f664-6zfb4" Jan 26 11:37:12 crc kubenswrapper[4867]: I0126 11:37:12.873591 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-848cf88cfc-z76z5" podStartSLOduration=6.873553056 podStartE2EDuration="6.873553056s" podCreationTimestamp="2026-01-26 11:37:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 11:37:12.82614209 +0000 UTC m=+1182.524717000" watchObservedRunningTime="2026-01-26 11:37:12.873553056 +0000 UTC m=+1182.572127956" Jan 26 11:37:12 crc kubenswrapper[4867]: I0126 11:37:12.886712 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-647b685f9-49zj6" 
event={"ID":"ebfa4a8c-62e9-489e-a7f6-d3f2ce2d05d9","Type":"ContainerStarted","Data":"a5c9999f2f0bd62c5c4deea891d6e027e4a35f80a15295574ab71c8fd947405d"} Jan 26 11:37:12 crc kubenswrapper[4867]: I0126 11:37:12.905897 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-fd45cdb8b-tgbqw" podStartSLOduration=8.905852278 podStartE2EDuration="8.905852278s" podCreationTimestamp="2026-01-26 11:37:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 11:37:12.860837866 +0000 UTC m=+1182.559412776" watchObservedRunningTime="2026-01-26 11:37:12.905852278 +0000 UTC m=+1182.604427178" Jan 26 11:37:12 crc kubenswrapper[4867]: I0126 11:37:12.922318 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-worker-6db8644655-m8sn6" podStartSLOduration=3.962272116 podStartE2EDuration="8.922295618s" podCreationTimestamp="2026-01-26 11:37:04 +0000 UTC" firstStartedPulling="2026-01-26 11:37:05.530642616 +0000 UTC m=+1175.229217526" lastFinishedPulling="2026-01-26 11:37:10.490666118 +0000 UTC m=+1180.189241028" observedRunningTime="2026-01-26 11:37:12.897653079 +0000 UTC m=+1182.596227999" watchObservedRunningTime="2026-01-26 11:37:12.922295618 +0000 UTC m=+1182.620870528" Jan 26 11:37:12 crc kubenswrapper[4867]: I0126 11:37:12.930366 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-5fc6c76976-2w9dm" event={"ID":"9a534f97-8d45-4418-af77-5e19e2013a0b","Type":"ContainerStarted","Data":"8153a1cf206a6dc03743845af2afd5e480514b449f0b1265fe9e250d96b91167"} Jan 26 11:37:12 crc kubenswrapper[4867]: I0126 11:37:12.956269 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-7bc467f664-6zfb4" podStartSLOduration=6.949855894 podStartE2EDuration="6.949855894s" podCreationTimestamp="2026-01-26 11:37:06 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 11:37:12.930773074 +0000 UTC m=+1182.629347984" watchObservedRunningTime="2026-01-26 11:37:12.949855894 +0000 UTC m=+1182.648430814" Jan 26 11:37:13 crc kubenswrapper[4867]: I0126 11:37:13.003204 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-7f76cb8bb6-g4zck"] Jan 26 11:37:13 crc kubenswrapper[4867]: I0126 11:37:13.940100 4867 generic.go:334] "Generic (PLEG): container finished" podID="595e878d-5361-4fef-81d8-ca3d00e79685" containerID="d2748f01d0a0ec9f57d511d4d81934e4ccde552b89edd778d0b9e536457f96e8" exitCode=0 Jan 26 11:37:13 crc kubenswrapper[4867]: I0126 11:37:13.940177 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-848cf88cfc-z76z5" event={"ID":"595e878d-5361-4fef-81d8-ca3d00e79685","Type":"ContainerDied","Data":"d2748f01d0a0ec9f57d511d4d81934e4ccde552b89edd778d0b9e536457f96e8"} Jan 26 11:37:13 crc kubenswrapper[4867]: I0126 11:37:13.943990 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-7f76cb8bb6-g4zck" event={"ID":"340554a1-e56a-4b1b-aff3-d0c0e1ac210d","Type":"ContainerStarted","Data":"6ff31002fd28beeb18baea61509fade78d4ac723140e8b5efcc9ce0de9d63151"} Jan 26 11:37:14 crc kubenswrapper[4867]: I0126 11:37:14.973726 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6578955fd5-klbvt" event={"ID":"59401c77-eb6e-46f4-8b16-c57ac6f97f24","Type":"ContainerStarted","Data":"1d92768073f906f3cda4e76a3dc4ac64871e865f018109a5e600810d68bb51a5"} Jan 26 11:37:14 crc kubenswrapper[4867]: I0126 11:37:14.996200 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"90d02b67-bed1-4363-b9a0-e89a8733149b","Type":"ContainerStarted","Data":"5de9491d2ebac6bd06a9f15cc99fcccf0dcad5710151d090cead5941c02c7a9b"} Jan 26 11:37:15 crc kubenswrapper[4867]: I0126 11:37:15.039313 4867 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-5fc6c76976-2w9dm" event={"ID":"9a534f97-8d45-4418-af77-5e19e2013a0b","Type":"ContainerStarted","Data":"23ae25b54b58b9aff9f55c7675b32ff81aa47481be3a671c16c4d3db28278945"} Jan 26 11:37:15 crc kubenswrapper[4867]: I0126 11:37:15.096498 4867 generic.go:334] "Generic (PLEG): container finished" podID="3de6837e-5965-48ce-9967-2d259829ad4a" containerID="2d5200b98116c0502f815fd7e1409fcdd257bdc9acd7c8160ea6187f2e4fe98d" exitCode=0 Jan 26 11:37:15 crc kubenswrapper[4867]: I0126 11:37:15.097406 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-db-sync-h7r88" event={"ID":"3de6837e-5965-48ce-9967-2d259829ad4a","Type":"ContainerDied","Data":"2d5200b98116c0502f815fd7e1409fcdd257bdc9acd7c8160ea6187f2e4fe98d"} Jan 26 11:37:15 crc kubenswrapper[4867]: I0126 11:37:15.108735 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Jan 26 11:37:15 crc kubenswrapper[4867]: I0126 11:37:15.137696 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/keystone-6f94776d6f-8b6q4" Jan 26 11:37:15 crc kubenswrapper[4867]: I0126 11:37:15.142436 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-547bc4f4d-xs5kd" Jan 26 11:37:15 crc kubenswrapper[4867]: I0126 11:37:15.146613 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-547bc4f4d-xs5kd" Jan 26 11:37:15 crc kubenswrapper[4867]: I0126 11:37:15.251150 4867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-848cf88cfc-z76z5" Jan 26 11:37:15 crc kubenswrapper[4867]: I0126 11:37:15.404935 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/595e878d-5361-4fef-81d8-ca3d00e79685-config\") pod \"595e878d-5361-4fef-81d8-ca3d00e79685\" (UID: \"595e878d-5361-4fef-81d8-ca3d00e79685\") " Jan 26 11:37:15 crc kubenswrapper[4867]: I0126 11:37:15.405266 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/595e878d-5361-4fef-81d8-ca3d00e79685-dns-svc\") pod \"595e878d-5361-4fef-81d8-ca3d00e79685\" (UID: \"595e878d-5361-4fef-81d8-ca3d00e79685\") " Jan 26 11:37:15 crc kubenswrapper[4867]: I0126 11:37:15.405304 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/595e878d-5361-4fef-81d8-ca3d00e79685-ovsdbserver-nb\") pod \"595e878d-5361-4fef-81d8-ca3d00e79685\" (UID: \"595e878d-5361-4fef-81d8-ca3d00e79685\") " Jan 26 11:37:15 crc kubenswrapper[4867]: I0126 11:37:15.405335 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/595e878d-5361-4fef-81d8-ca3d00e79685-ovsdbserver-sb\") pod \"595e878d-5361-4fef-81d8-ca3d00e79685\" (UID: \"595e878d-5361-4fef-81d8-ca3d00e79685\") " Jan 26 11:37:15 crc kubenswrapper[4867]: I0126 11:37:15.405366 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/595e878d-5361-4fef-81d8-ca3d00e79685-dns-swift-storage-0\") pod \"595e878d-5361-4fef-81d8-ca3d00e79685\" (UID: \"595e878d-5361-4fef-81d8-ca3d00e79685\") " Jan 26 11:37:15 crc kubenswrapper[4867]: I0126 11:37:15.405495 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dhf7w\" 
(UniqueName: \"kubernetes.io/projected/595e878d-5361-4fef-81d8-ca3d00e79685-kube-api-access-dhf7w\") pod \"595e878d-5361-4fef-81d8-ca3d00e79685\" (UID: \"595e878d-5361-4fef-81d8-ca3d00e79685\") " Jan 26 11:37:15 crc kubenswrapper[4867]: I0126 11:37:15.413107 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/595e878d-5361-4fef-81d8-ca3d00e79685-kube-api-access-dhf7w" (OuterVolumeSpecName: "kube-api-access-dhf7w") pod "595e878d-5361-4fef-81d8-ca3d00e79685" (UID: "595e878d-5361-4fef-81d8-ca3d00e79685"). InnerVolumeSpecName "kube-api-access-dhf7w". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 11:37:15 crc kubenswrapper[4867]: I0126 11:37:15.495138 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/595e878d-5361-4fef-81d8-ca3d00e79685-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "595e878d-5361-4fef-81d8-ca3d00e79685" (UID: "595e878d-5361-4fef-81d8-ca3d00e79685"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 11:37:15 crc kubenswrapper[4867]: I0126 11:37:15.507830 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dhf7w\" (UniqueName: \"kubernetes.io/projected/595e878d-5361-4fef-81d8-ca3d00e79685-kube-api-access-dhf7w\") on node \"crc\" DevicePath \"\"" Jan 26 11:37:15 crc kubenswrapper[4867]: I0126 11:37:15.507858 4867 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/595e878d-5361-4fef-81d8-ca3d00e79685-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 26 11:37:15 crc kubenswrapper[4867]: I0126 11:37:15.538289 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/595e878d-5361-4fef-81d8-ca3d00e79685-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "595e878d-5361-4fef-81d8-ca3d00e79685" (UID: "595e878d-5361-4fef-81d8-ca3d00e79685"). 
InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 11:37:15 crc kubenswrapper[4867]: I0126 11:37:15.540191 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/595e878d-5361-4fef-81d8-ca3d00e79685-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "595e878d-5361-4fef-81d8-ca3d00e79685" (UID: "595e878d-5361-4fef-81d8-ca3d00e79685"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 11:37:15 crc kubenswrapper[4867]: I0126 11:37:15.552212 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/595e878d-5361-4fef-81d8-ca3d00e79685-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "595e878d-5361-4fef-81d8-ca3d00e79685" (UID: "595e878d-5361-4fef-81d8-ca3d00e79685"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 11:37:15 crc kubenswrapper[4867]: I0126 11:37:15.553394 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/595e878d-5361-4fef-81d8-ca3d00e79685-config" (OuterVolumeSpecName: "config") pod "595e878d-5361-4fef-81d8-ca3d00e79685" (UID: "595e878d-5361-4fef-81d8-ca3d00e79685"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 11:37:15 crc kubenswrapper[4867]: I0126 11:37:15.610760 4867 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/595e878d-5361-4fef-81d8-ca3d00e79685-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 26 11:37:15 crc kubenswrapper[4867]: I0126 11:37:15.610794 4867 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/595e878d-5361-4fef-81d8-ca3d00e79685-config\") on node \"crc\" DevicePath \"\"" Jan 26 11:37:15 crc kubenswrapper[4867]: I0126 11:37:15.610803 4867 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/595e878d-5361-4fef-81d8-ca3d00e79685-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 26 11:37:15 crc kubenswrapper[4867]: I0126 11:37:15.610812 4867 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/595e878d-5361-4fef-81d8-ca3d00e79685-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 26 11:37:16 crc kubenswrapper[4867]: I0126 11:37:16.106703 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-647b685f9-49zj6" event={"ID":"ebfa4a8c-62e9-489e-a7f6-d3f2ce2d05d9","Type":"ContainerStarted","Data":"190536ae288f05b6d535a98fb3b47e68a0d255d0ee116cd76e40477f479b8db4"} Jan 26 11:37:16 crc kubenswrapper[4867]: I0126 11:37:16.107187 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-647b685f9-49zj6" Jan 26 11:37:16 crc kubenswrapper[4867]: I0126 11:37:16.109447 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"90d02b67-bed1-4363-b9a0-e89a8733149b","Type":"ContainerStarted","Data":"5d4242ed5c0a0c3c38c554ace36cf18dad78771d7faee45cf73da6fe854c94bd"} Jan 26 11:37:16 crc kubenswrapper[4867]: I0126 11:37:16.109573 4867 kuberuntime_container.go:808] "Killing 
container with a grace period" pod="openstack/cinder-api-0" podUID="90d02b67-bed1-4363-b9a0-e89a8733149b" containerName="cinder-api" containerID="cri-o://5d4242ed5c0a0c3c38c554ace36cf18dad78771d7faee45cf73da6fe854c94bd" gracePeriod=30 Jan 26 11:37:16 crc kubenswrapper[4867]: I0126 11:37:16.109575 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="90d02b67-bed1-4363-b9a0-e89a8733149b" containerName="cinder-api-log" containerID="cri-o://5de9491d2ebac6bd06a9f15cc99fcccf0dcad5710151d090cead5941c02c7a9b" gracePeriod=30 Jan 26 11:37:16 crc kubenswrapper[4867]: I0126 11:37:16.109611 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Jan 26 11:37:16 crc kubenswrapper[4867]: I0126 11:37:16.115206 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-7f76cb8bb6-g4zck" event={"ID":"340554a1-e56a-4b1b-aff3-d0c0e1ac210d","Type":"ContainerStarted","Data":"4b9cb7034b1e136c7b8264dd9ae616f036de235ec6ee24bbe2888702011f2e05"} Jan 26 11:37:16 crc kubenswrapper[4867]: I0126 11:37:16.119341 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-848cf88cfc-z76z5" event={"ID":"595e878d-5361-4fef-81d8-ca3d00e79685","Type":"ContainerDied","Data":"efc9a943fd8049363f1868997c15edf0c4677d163fd6146b7ab782986babab89"} Jan 26 11:37:16 crc kubenswrapper[4867]: I0126 11:37:16.119424 4867 scope.go:117] "RemoveContainer" containerID="d2748f01d0a0ec9f57d511d4d81934e4ccde552b89edd778d0b9e536457f96e8" Jan 26 11:37:16 crc kubenswrapper[4867]: I0126 11:37:16.119364 4867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-848cf88cfc-z76z5" Jan 26 11:37:16 crc kubenswrapper[4867]: I0126 11:37:16.124463 4867 generic.go:334] "Generic (PLEG): container finished" podID="59401c77-eb6e-46f4-8b16-c57ac6f97f24" containerID="1d92768073f906f3cda4e76a3dc4ac64871e865f018109a5e600810d68bb51a5" exitCode=0 Jan 26 11:37:16 crc kubenswrapper[4867]: I0126 11:37:16.124506 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6578955fd5-klbvt" event={"ID":"59401c77-eb6e-46f4-8b16-c57ac6f97f24","Type":"ContainerDied","Data":"1d92768073f906f3cda4e76a3dc4ac64871e865f018109a5e600810d68bb51a5"} Jan 26 11:37:16 crc kubenswrapper[4867]: I0126 11:37:16.139506 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-647b685f9-49zj6" podStartSLOduration=7.139483476 podStartE2EDuration="7.139483476s" podCreationTimestamp="2026-01-26 11:37:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 11:37:16.133390953 +0000 UTC m=+1185.831965873" watchObservedRunningTime="2026-01-26 11:37:16.139483476 +0000 UTC m=+1185.838058396" Jan 26 11:37:16 crc kubenswrapper[4867]: I0126 11:37:16.185774 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=5.185755082 podStartE2EDuration="5.185755082s" podCreationTimestamp="2026-01-26 11:37:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 11:37:16.157082156 +0000 UTC m=+1185.855657066" watchObservedRunningTime="2026-01-26 11:37:16.185755082 +0000 UTC m=+1185.884329992" Jan 26 11:37:16 crc kubenswrapper[4867]: I0126 11:37:16.211853 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-keystone-listener-5fc6c76976-2w9dm" podStartSLOduration=7.45045104 
podStartE2EDuration="12.211834278s" podCreationTimestamp="2026-01-26 11:37:04 +0000 UTC" firstStartedPulling="2026-01-26 11:37:05.660262258 +0000 UTC m=+1175.358837168" lastFinishedPulling="2026-01-26 11:37:10.421645486 +0000 UTC m=+1180.120220406" observedRunningTime="2026-01-26 11:37:16.209541517 +0000 UTC m=+1185.908116427" watchObservedRunningTime="2026-01-26 11:37:16.211834278 +0000 UTC m=+1185.910409188" Jan 26 11:37:16 crc kubenswrapper[4867]: I0126 11:37:16.236999 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-848cf88cfc-z76z5"] Jan 26 11:37:16 crc kubenswrapper[4867]: I0126 11:37:16.250944 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstackclient"] Jan 26 11:37:16 crc kubenswrapper[4867]: E0126 11:37:16.251466 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="595e878d-5361-4fef-81d8-ca3d00e79685" containerName="dnsmasq-dns" Jan 26 11:37:16 crc kubenswrapper[4867]: I0126 11:37:16.251488 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="595e878d-5361-4fef-81d8-ca3d00e79685" containerName="dnsmasq-dns" Jan 26 11:37:16 crc kubenswrapper[4867]: E0126 11:37:16.251498 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="595e878d-5361-4fef-81d8-ca3d00e79685" containerName="init" Jan 26 11:37:16 crc kubenswrapper[4867]: I0126 11:37:16.251504 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="595e878d-5361-4fef-81d8-ca3d00e79685" containerName="init" Jan 26 11:37:16 crc kubenswrapper[4867]: I0126 11:37:16.251673 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="595e878d-5361-4fef-81d8-ca3d00e79685" containerName="dnsmasq-dns" Jan 26 11:37:16 crc kubenswrapper[4867]: I0126 11:37:16.252386 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Jan 26 11:37:16 crc kubenswrapper[4867]: I0126 11:37:16.257671 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstackclient-openstackclient-dockercfg-98gp5" Jan 26 11:37:16 crc kubenswrapper[4867]: I0126 11:37:16.257944 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-config-secret" Jan 26 11:37:16 crc kubenswrapper[4867]: I0126 11:37:16.258171 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config" Jan 26 11:37:16 crc kubenswrapper[4867]: I0126 11:37:16.280569 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-848cf88cfc-z76z5"] Jan 26 11:37:16 crc kubenswrapper[4867]: I0126 11:37:16.327598 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gtdw9\" (UniqueName: \"kubernetes.io/projected/0dba3b09-195d-416a-b4af-7f252c8abd0d-kube-api-access-gtdw9\") pod \"openstackclient\" (UID: \"0dba3b09-195d-416a-b4af-7f252c8abd0d\") " pod="openstack/openstackclient" Jan 26 11:37:16 crc kubenswrapper[4867]: I0126 11:37:16.327970 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/0dba3b09-195d-416a-b4af-7f252c8abd0d-openstack-config\") pod \"openstackclient\" (UID: \"0dba3b09-195d-416a-b4af-7f252c8abd0d\") " pod="openstack/openstackclient" Jan 26 11:37:16 crc kubenswrapper[4867]: I0126 11:37:16.328060 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/0dba3b09-195d-416a-b4af-7f252c8abd0d-openstack-config-secret\") pod \"openstackclient\" (UID: \"0dba3b09-195d-416a-b4af-7f252c8abd0d\") " pod="openstack/openstackclient" Jan 26 11:37:16 crc kubenswrapper[4867]: I0126 11:37:16.328095 4867 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0dba3b09-195d-416a-b4af-7f252c8abd0d-combined-ca-bundle\") pod \"openstackclient\" (UID: \"0dba3b09-195d-416a-b4af-7f252c8abd0d\") " pod="openstack/openstackclient" Jan 26 11:37:16 crc kubenswrapper[4867]: I0126 11:37:16.341405 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Jan 26 11:37:16 crc kubenswrapper[4867]: I0126 11:37:16.351836 4867 scope.go:117] "RemoveContainer" containerID="2097fd13ffe9f9966d57c9a6338eaa404c2e44ff41a6f52ca34b2c4f71099a74" Jan 26 11:37:16 crc kubenswrapper[4867]: I0126 11:37:16.432176 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gtdw9\" (UniqueName: \"kubernetes.io/projected/0dba3b09-195d-416a-b4af-7f252c8abd0d-kube-api-access-gtdw9\") pod \"openstackclient\" (UID: \"0dba3b09-195d-416a-b4af-7f252c8abd0d\") " pod="openstack/openstackclient" Jan 26 11:37:16 crc kubenswrapper[4867]: I0126 11:37:16.432537 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/0dba3b09-195d-416a-b4af-7f252c8abd0d-openstack-config\") pod \"openstackclient\" (UID: \"0dba3b09-195d-416a-b4af-7f252c8abd0d\") " pod="openstack/openstackclient" Jan 26 11:37:16 crc kubenswrapper[4867]: I0126 11:37:16.432675 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/0dba3b09-195d-416a-b4af-7f252c8abd0d-openstack-config-secret\") pod \"openstackclient\" (UID: \"0dba3b09-195d-416a-b4af-7f252c8abd0d\") " pod="openstack/openstackclient" Jan 26 11:37:16 crc kubenswrapper[4867]: I0126 11:37:16.432978 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/0dba3b09-195d-416a-b4af-7f252c8abd0d-combined-ca-bundle\") pod \"openstackclient\" (UID: \"0dba3b09-195d-416a-b4af-7f252c8abd0d\") " pod="openstack/openstackclient" Jan 26 11:37:16 crc kubenswrapper[4867]: I0126 11:37:16.436403 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/0dba3b09-195d-416a-b4af-7f252c8abd0d-openstack-config\") pod \"openstackclient\" (UID: \"0dba3b09-195d-416a-b4af-7f252c8abd0d\") " pod="openstack/openstackclient" Jan 26 11:37:16 crc kubenswrapper[4867]: I0126 11:37:16.437090 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0dba3b09-195d-416a-b4af-7f252c8abd0d-combined-ca-bundle\") pod \"openstackclient\" (UID: \"0dba3b09-195d-416a-b4af-7f252c8abd0d\") " pod="openstack/openstackclient" Jan 26 11:37:16 crc kubenswrapper[4867]: I0126 11:37:16.444510 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/0dba3b09-195d-416a-b4af-7f252c8abd0d-openstack-config-secret\") pod \"openstackclient\" (UID: \"0dba3b09-195d-416a-b4af-7f252c8abd0d\") " pod="openstack/openstackclient" Jan 26 11:37:16 crc kubenswrapper[4867]: I0126 11:37:16.463827 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gtdw9\" (UniqueName: \"kubernetes.io/projected/0dba3b09-195d-416a-b4af-7f252c8abd0d-kube-api-access-gtdw9\") pod \"openstackclient\" (UID: \"0dba3b09-195d-416a-b4af-7f252c8abd0d\") " pod="openstack/openstackclient" Jan 26 11:37:16 crc kubenswrapper[4867]: I0126 11:37:16.510384 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-fd45cdb8b-tgbqw" Jan 26 11:37:16 crc kubenswrapper[4867]: I0126 11:37:16.576621 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Jan 26 11:37:16 crc kubenswrapper[4867]: I0126 11:37:16.607152 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="595e878d-5361-4fef-81d8-ca3d00e79685" path="/var/lib/kubelet/pods/595e878d-5361-4fef-81d8-ca3d00e79685/volumes" Jan 26 11:37:16 crc kubenswrapper[4867]: I0126 11:37:16.635892 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ironic-db-sync-h7r88" Jan 26 11:37:16 crc kubenswrapper[4867]: I0126 11:37:16.749305 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-podinfo\" (UniqueName: \"kubernetes.io/downward-api/3de6837e-5965-48ce-9967-2d259829ad4a-etc-podinfo\") pod \"3de6837e-5965-48ce-9967-2d259829ad4a\" (UID: \"3de6837e-5965-48ce-9967-2d259829ad4a\") " Jan 26 11:37:16 crc kubenswrapper[4867]: I0126 11:37:16.749610 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qxjcr\" (UniqueName: \"kubernetes.io/projected/3de6837e-5965-48ce-9967-2d259829ad4a-kube-api-access-qxjcr\") pod \"3de6837e-5965-48ce-9967-2d259829ad4a\" (UID: \"3de6837e-5965-48ce-9967-2d259829ad4a\") " Jan 26 11:37:16 crc kubenswrapper[4867]: I0126 11:37:16.749670 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3de6837e-5965-48ce-9967-2d259829ad4a-combined-ca-bundle\") pod \"3de6837e-5965-48ce-9967-2d259829ad4a\" (UID: \"3de6837e-5965-48ce-9967-2d259829ad4a\") " Jan 26 11:37:16 crc kubenswrapper[4867]: I0126 11:37:16.749729 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3de6837e-5965-48ce-9967-2d259829ad4a-scripts\") pod \"3de6837e-5965-48ce-9967-2d259829ad4a\" (UID: \"3de6837e-5965-48ce-9967-2d259829ad4a\") " Jan 26 11:37:16 crc kubenswrapper[4867]: I0126 11:37:16.749773 4867 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/3de6837e-5965-48ce-9967-2d259829ad4a-config-data-merged\") pod \"3de6837e-5965-48ce-9967-2d259829ad4a\" (UID: \"3de6837e-5965-48ce-9967-2d259829ad4a\") " Jan 26 11:37:16 crc kubenswrapper[4867]: I0126 11:37:16.749790 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3de6837e-5965-48ce-9967-2d259829ad4a-config-data\") pod \"3de6837e-5965-48ce-9967-2d259829ad4a\" (UID: \"3de6837e-5965-48ce-9967-2d259829ad4a\") " Jan 26 11:37:16 crc kubenswrapper[4867]: I0126 11:37:16.751604 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3de6837e-5965-48ce-9967-2d259829ad4a-config-data-merged" (OuterVolumeSpecName: "config-data-merged") pod "3de6837e-5965-48ce-9967-2d259829ad4a" (UID: "3de6837e-5965-48ce-9967-2d259829ad4a"). InnerVolumeSpecName "config-data-merged". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 11:37:16 crc kubenswrapper[4867]: I0126 11:37:16.771094 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3de6837e-5965-48ce-9967-2d259829ad4a-kube-api-access-qxjcr" (OuterVolumeSpecName: "kube-api-access-qxjcr") pod "3de6837e-5965-48ce-9967-2d259829ad4a" (UID: "3de6837e-5965-48ce-9967-2d259829ad4a"). InnerVolumeSpecName "kube-api-access-qxjcr". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 11:37:16 crc kubenswrapper[4867]: I0126 11:37:16.772361 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3de6837e-5965-48ce-9967-2d259829ad4a-scripts" (OuterVolumeSpecName: "scripts") pod "3de6837e-5965-48ce-9967-2d259829ad4a" (UID: "3de6837e-5965-48ce-9967-2d259829ad4a"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 11:37:16 crc kubenswrapper[4867]: I0126 11:37:16.780621 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/3de6837e-5965-48ce-9967-2d259829ad4a-etc-podinfo" (OuterVolumeSpecName: "etc-podinfo") pod "3de6837e-5965-48ce-9967-2d259829ad4a" (UID: "3de6837e-5965-48ce-9967-2d259829ad4a"). InnerVolumeSpecName "etc-podinfo". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Jan 26 11:37:16 crc kubenswrapper[4867]: I0126 11:37:16.839432 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3de6837e-5965-48ce-9967-2d259829ad4a-config-data" (OuterVolumeSpecName: "config-data") pod "3de6837e-5965-48ce-9967-2d259829ad4a" (UID: "3de6837e-5965-48ce-9967-2d259829ad4a"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 11:37:16 crc kubenswrapper[4867]: I0126 11:37:16.853476 4867 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3de6837e-5965-48ce-9967-2d259829ad4a-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 11:37:16 crc kubenswrapper[4867]: I0126 11:37:16.853503 4867 reconciler_common.go:293] "Volume detached for volume \"etc-podinfo\" (UniqueName: \"kubernetes.io/downward-api/3de6837e-5965-48ce-9967-2d259829ad4a-etc-podinfo\") on node \"crc\" DevicePath \"\"" Jan 26 11:37:16 crc kubenswrapper[4867]: I0126 11:37:16.853516 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qxjcr\" (UniqueName: \"kubernetes.io/projected/3de6837e-5965-48ce-9967-2d259829ad4a-kube-api-access-qxjcr\") on node \"crc\" DevicePath \"\"" Jan 26 11:37:16 crc kubenswrapper[4867]: I0126 11:37:16.853525 4867 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3de6837e-5965-48ce-9967-2d259829ad4a-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 11:37:16 crc 
kubenswrapper[4867]: I0126 11:37:16.853536 4867 reconciler_common.go:293] "Volume detached for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/3de6837e-5965-48ce-9967-2d259829ad4a-config-data-merged\") on node \"crc\" DevicePath \"\"" Jan 26 11:37:16 crc kubenswrapper[4867]: I0126 11:37:16.878339 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3de6837e-5965-48ce-9967-2d259829ad4a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3de6837e-5965-48ce-9967-2d259829ad4a" (UID: "3de6837e-5965-48ce-9967-2d259829ad4a"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 11:37:16 crc kubenswrapper[4867]: I0126 11:37:16.955800 4867 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3de6837e-5965-48ce-9967-2d259829ad4a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 11:37:17 crc kubenswrapper[4867]: I0126 11:37:17.140653 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-db-sync-h7r88" event={"ID":"3de6837e-5965-48ce-9967-2d259829ad4a","Type":"ContainerDied","Data":"d1f324fa938909ddba94d836aa478e85d2572d6ec06f08b022affb02748d74a7"} Jan 26 11:37:17 crc kubenswrapper[4867]: I0126 11:37:17.140692 4867 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d1f324fa938909ddba94d836aa478e85d2572d6ec06f08b022affb02748d74a7" Jan 26 11:37:17 crc kubenswrapper[4867]: I0126 11:37:17.140762 4867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ironic-db-sync-h7r88" Jan 26 11:37:17 crc kubenswrapper[4867]: I0126 11:37:17.162403 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6578955fd5-klbvt" event={"ID":"59401c77-eb6e-46f4-8b16-c57ac6f97f24","Type":"ContainerStarted","Data":"8d80ce50dd8f216d11e36ac6ec186a0aca0e9e47efa3036167e74f4c385c6ab5"} Jan 26 11:37:17 crc kubenswrapper[4867]: I0126 11:37:17.163444 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-6578955fd5-klbvt" Jan 26 11:37:17 crc kubenswrapper[4867]: I0126 11:37:17.167026 4867 generic.go:334] "Generic (PLEG): container finished" podID="90d02b67-bed1-4363-b9a0-e89a8733149b" containerID="5de9491d2ebac6bd06a9f15cc99fcccf0dcad5710151d090cead5941c02c7a9b" exitCode=143 Jan 26 11:37:17 crc kubenswrapper[4867]: I0126 11:37:17.167440 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"90d02b67-bed1-4363-b9a0-e89a8733149b","Type":"ContainerDied","Data":"5de9491d2ebac6bd06a9f15cc99fcccf0dcad5710151d090cead5941c02c7a9b"} Jan 26 11:37:17 crc kubenswrapper[4867]: I0126 11:37:17.169283 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-7f76cb8bb6-g4zck" event={"ID":"340554a1-e56a-4b1b-aff3-d0c0e1ac210d","Type":"ContainerStarted","Data":"8cd1badb3c659d50e2cb9bf80af6a37d770df7a1234b28e9c65e38ab0dfc4661"} Jan 26 11:37:17 crc kubenswrapper[4867]: I0126 11:37:17.170254 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-7f76cb8bb6-g4zck" Jan 26 11:37:17 crc kubenswrapper[4867]: I0126 11:37:17.170343 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-7f76cb8bb6-g4zck" Jan 26 11:37:17 crc kubenswrapper[4867]: I0126 11:37:17.205424 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Jan 26 11:37:17 crc kubenswrapper[4867]: I0126 11:37:17.270490 4867 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-7f76cb8bb6-g4zck" podStartSLOduration=6.27047047 podStartE2EDuration="6.27047047s" podCreationTimestamp="2026-01-26 11:37:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 11:37:17.249647624 +0000 UTC m=+1186.948222534" watchObservedRunningTime="2026-01-26 11:37:17.27047047 +0000 UTC m=+1186.969045380" Jan 26 11:37:17 crc kubenswrapper[4867]: I0126 11:37:17.314044 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-6578955fd5-klbvt" podStartSLOduration=7.314024274 podStartE2EDuration="7.314024274s" podCreationTimestamp="2026-01-26 11:37:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 11:37:17.296418874 +0000 UTC m=+1186.994993794" watchObservedRunningTime="2026-01-26 11:37:17.314024274 +0000 UTC m=+1187.012599184" Jan 26 11:37:17 crc kubenswrapper[4867]: I0126 11:37:17.637285 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ironic-inspector-db-create-blwcj"] Jan 26 11:37:17 crc kubenswrapper[4867]: E0126 11:37:17.637968 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3de6837e-5965-48ce-9967-2d259829ad4a" containerName="init" Jan 26 11:37:17 crc kubenswrapper[4867]: I0126 11:37:17.637986 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="3de6837e-5965-48ce-9967-2d259829ad4a" containerName="init" Jan 26 11:37:17 crc kubenswrapper[4867]: E0126 11:37:17.638015 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3de6837e-5965-48ce-9967-2d259829ad4a" containerName="ironic-db-sync" Jan 26 11:37:17 crc kubenswrapper[4867]: I0126 11:37:17.638022 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="3de6837e-5965-48ce-9967-2d259829ad4a" 
containerName="ironic-db-sync" Jan 26 11:37:17 crc kubenswrapper[4867]: I0126 11:37:17.638195 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="3de6837e-5965-48ce-9967-2d259829ad4a" containerName="ironic-db-sync" Jan 26 11:37:17 crc kubenswrapper[4867]: I0126 11:37:17.638829 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ironic-inspector-db-create-blwcj" Jan 26 11:37:17 crc kubenswrapper[4867]: I0126 11:37:17.652037 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ironic-inspector-db-create-blwcj"] Jan 26 11:37:17 crc kubenswrapper[4867]: I0126 11:37:17.701290 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ironic-neutron-agent-795fb7c76b-9ndwh"] Jan 26 11:37:17 crc kubenswrapper[4867]: I0126 11:37:17.702555 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ironic-neutron-agent-795fb7c76b-9ndwh" Jan 26 11:37:17 crc kubenswrapper[4867]: I0126 11:37:17.705394 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ironic-ironic-dockercfg-xdv7v" Jan 26 11:37:17 crc kubenswrapper[4867]: I0126 11:37:17.705651 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ironic-ironic-neutron-agent-config-data" Jan 26 11:37:17 crc kubenswrapper[4867]: I0126 11:37:17.721210 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ironic-neutron-agent-795fb7c76b-9ndwh"] Jan 26 11:37:17 crc kubenswrapper[4867]: I0126 11:37:17.777977 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a2167905-2856-4125-81fd-a2430fe558f9-combined-ca-bundle\") pod \"ironic-neutron-agent-795fb7c76b-9ndwh\" (UID: \"a2167905-2856-4125-81fd-a2430fe558f9\") " pod="openstack/ironic-neutron-agent-795fb7c76b-9ndwh" Jan 26 11:37:17 crc kubenswrapper[4867]: I0126 11:37:17.778029 4867 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hqnb2\" (UniqueName: \"kubernetes.io/projected/a2167905-2856-4125-81fd-a2430fe558f9-kube-api-access-hqnb2\") pod \"ironic-neutron-agent-795fb7c76b-9ndwh\" (UID: \"a2167905-2856-4125-81fd-a2430fe558f9\") " pod="openstack/ironic-neutron-agent-795fb7c76b-9ndwh" Jan 26 11:37:17 crc kubenswrapper[4867]: I0126 11:37:17.778070 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v5m5c\" (UniqueName: \"kubernetes.io/projected/7baafbd6-fc39-426c-8869-460ad4ff235f-kube-api-access-v5m5c\") pod \"ironic-inspector-db-create-blwcj\" (UID: \"7baafbd6-fc39-426c-8869-460ad4ff235f\") " pod="openstack/ironic-inspector-db-create-blwcj" Jan 26 11:37:17 crc kubenswrapper[4867]: I0126 11:37:17.778142 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7baafbd6-fc39-426c-8869-460ad4ff235f-operator-scripts\") pod \"ironic-inspector-db-create-blwcj\" (UID: \"7baafbd6-fc39-426c-8869-460ad4ff235f\") " pod="openstack/ironic-inspector-db-create-blwcj" Jan 26 11:37:17 crc kubenswrapper[4867]: I0126 11:37:17.778183 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/a2167905-2856-4125-81fd-a2430fe558f9-config\") pod \"ironic-neutron-agent-795fb7c76b-9ndwh\" (UID: \"a2167905-2856-4125-81fd-a2430fe558f9\") " pod="openstack/ironic-neutron-agent-795fb7c76b-9ndwh" Jan 26 11:37:17 crc kubenswrapper[4867]: I0126 11:37:17.781344 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ironic-inspector-6c39-account-create-update-thslq"] Jan 26 11:37:17 crc kubenswrapper[4867]: I0126 11:37:17.782569 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ironic-inspector-6c39-account-create-update-thslq" Jan 26 11:37:17 crc kubenswrapper[4867]: I0126 11:37:17.787531 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ironic-inspector-db-secret" Jan 26 11:37:17 crc kubenswrapper[4867]: I0126 11:37:17.804952 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ironic-inspector-6c39-account-create-update-thslq"] Jan 26 11:37:17 crc kubenswrapper[4867]: I0126 11:37:17.821180 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ironic-6dc6f6fb68-dx2nc"] Jan 26 11:37:17 crc kubenswrapper[4867]: I0126 11:37:17.825483 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ironic-6dc6f6fb68-dx2nc" Jan 26 11:37:17 crc kubenswrapper[4867]: I0126 11:37:17.831957 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Jan 26 11:37:17 crc kubenswrapper[4867]: I0126 11:37:17.831979 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ironic-api-config-data" Jan 26 11:37:17 crc kubenswrapper[4867]: I0126 11:37:17.832179 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ironic-config-data" Jan 26 11:37:17 crc kubenswrapper[4867]: I0126 11:37:17.832304 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ironic-api-scripts" Jan 26 11:37:17 crc kubenswrapper[4867]: I0126 11:37:17.841496 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ironic-6dc6f6fb68-dx2nc"] Jan 26 11:37:17 crc kubenswrapper[4867]: I0126 11:37:17.879265 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l9l22\" (UniqueName: \"kubernetes.io/projected/29a2ed9f-c519-444f-922f-4cebf5b3893e-kube-api-access-l9l22\") pod \"ironic-inspector-6c39-account-create-update-thslq\" (UID: \"29a2ed9f-c519-444f-922f-4cebf5b3893e\") " 
pod="openstack/ironic-inspector-6c39-account-create-update-thslq" Jan 26 11:37:17 crc kubenswrapper[4867]: I0126 11:37:17.879312 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a2167905-2856-4125-81fd-a2430fe558f9-combined-ca-bundle\") pod \"ironic-neutron-agent-795fb7c76b-9ndwh\" (UID: \"a2167905-2856-4125-81fd-a2430fe558f9\") " pod="openstack/ironic-neutron-agent-795fb7c76b-9ndwh" Jan 26 11:37:17 crc kubenswrapper[4867]: I0126 11:37:17.879341 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hqnb2\" (UniqueName: \"kubernetes.io/projected/a2167905-2856-4125-81fd-a2430fe558f9-kube-api-access-hqnb2\") pod \"ironic-neutron-agent-795fb7c76b-9ndwh\" (UID: \"a2167905-2856-4125-81fd-a2430fe558f9\") " pod="openstack/ironic-neutron-agent-795fb7c76b-9ndwh" Jan 26 11:37:17 crc kubenswrapper[4867]: I0126 11:37:17.879392 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v5m5c\" (UniqueName: \"kubernetes.io/projected/7baafbd6-fc39-426c-8869-460ad4ff235f-kube-api-access-v5m5c\") pod \"ironic-inspector-db-create-blwcj\" (UID: \"7baafbd6-fc39-426c-8869-460ad4ff235f\") " pod="openstack/ironic-inspector-db-create-blwcj" Jan 26 11:37:17 crc kubenswrapper[4867]: I0126 11:37:17.879571 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7baafbd6-fc39-426c-8869-460ad4ff235f-operator-scripts\") pod \"ironic-inspector-db-create-blwcj\" (UID: \"7baafbd6-fc39-426c-8869-460ad4ff235f\") " pod="openstack/ironic-inspector-db-create-blwcj" Jan 26 11:37:17 crc kubenswrapper[4867]: I0126 11:37:17.879688 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/a2167905-2856-4125-81fd-a2430fe558f9-config\") pod \"ironic-neutron-agent-795fb7c76b-9ndwh\" 
(UID: \"a2167905-2856-4125-81fd-a2430fe558f9\") " pod="openstack/ironic-neutron-agent-795fb7c76b-9ndwh" Jan 26 11:37:17 crc kubenswrapper[4867]: I0126 11:37:17.879722 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/29a2ed9f-c519-444f-922f-4cebf5b3893e-operator-scripts\") pod \"ironic-inspector-6c39-account-create-update-thslq\" (UID: \"29a2ed9f-c519-444f-922f-4cebf5b3893e\") " pod="openstack/ironic-inspector-6c39-account-create-update-thslq" Jan 26 11:37:17 crc kubenswrapper[4867]: I0126 11:37:17.880710 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7baafbd6-fc39-426c-8869-460ad4ff235f-operator-scripts\") pod \"ironic-inspector-db-create-blwcj\" (UID: \"7baafbd6-fc39-426c-8869-460ad4ff235f\") " pod="openstack/ironic-inspector-db-create-blwcj" Jan 26 11:37:17 crc kubenswrapper[4867]: I0126 11:37:17.888206 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/a2167905-2856-4125-81fd-a2430fe558f9-config\") pod \"ironic-neutron-agent-795fb7c76b-9ndwh\" (UID: \"a2167905-2856-4125-81fd-a2430fe558f9\") " pod="openstack/ironic-neutron-agent-795fb7c76b-9ndwh" Jan 26 11:37:17 crc kubenswrapper[4867]: I0126 11:37:17.898406 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a2167905-2856-4125-81fd-a2430fe558f9-combined-ca-bundle\") pod \"ironic-neutron-agent-795fb7c76b-9ndwh\" (UID: \"a2167905-2856-4125-81fd-a2430fe558f9\") " pod="openstack/ironic-neutron-agent-795fb7c76b-9ndwh" Jan 26 11:37:17 crc kubenswrapper[4867]: I0126 11:37:17.900541 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v5m5c\" (UniqueName: \"kubernetes.io/projected/7baafbd6-fc39-426c-8869-460ad4ff235f-kube-api-access-v5m5c\") 
pod \"ironic-inspector-db-create-blwcj\" (UID: \"7baafbd6-fc39-426c-8869-460ad4ff235f\") " pod="openstack/ironic-inspector-db-create-blwcj" Jan 26 11:37:17 crc kubenswrapper[4867]: I0126 11:37:17.901307 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hqnb2\" (UniqueName: \"kubernetes.io/projected/a2167905-2856-4125-81fd-a2430fe558f9-kube-api-access-hqnb2\") pod \"ironic-neutron-agent-795fb7c76b-9ndwh\" (UID: \"a2167905-2856-4125-81fd-a2430fe558f9\") " pod="openstack/ironic-neutron-agent-795fb7c76b-9ndwh" Jan 26 11:37:17 crc kubenswrapper[4867]: I0126 11:37:17.974653 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ironic-inspector-db-create-blwcj" Jan 26 11:37:17 crc kubenswrapper[4867]: I0126 11:37:17.981165 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/01f2f326-18ee-4ee2-823b-09ccf4cfefc1-config-data-custom\") pod \"ironic-6dc6f6fb68-dx2nc\" (UID: \"01f2f326-18ee-4ee2-823b-09ccf4cfefc1\") " pod="openstack/ironic-6dc6f6fb68-dx2nc" Jan 26 11:37:17 crc kubenswrapper[4867]: I0126 11:37:17.981450 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/29a2ed9f-c519-444f-922f-4cebf5b3893e-operator-scripts\") pod \"ironic-inspector-6c39-account-create-update-thslq\" (UID: \"29a2ed9f-c519-444f-922f-4cebf5b3893e\") " pod="openstack/ironic-inspector-6c39-account-create-update-thslq" Jan 26 11:37:17 crc kubenswrapper[4867]: I0126 11:37:17.981582 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l9l22\" (UniqueName: \"kubernetes.io/projected/29a2ed9f-c519-444f-922f-4cebf5b3893e-kube-api-access-l9l22\") pod \"ironic-inspector-6c39-account-create-update-thslq\" (UID: \"29a2ed9f-c519-444f-922f-4cebf5b3893e\") " 
pod="openstack/ironic-inspector-6c39-account-create-update-thslq" Jan 26 11:37:17 crc kubenswrapper[4867]: I0126 11:37:17.981729 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/01f2f326-18ee-4ee2-823b-09ccf4cfefc1-config-data-merged\") pod \"ironic-6dc6f6fb68-dx2nc\" (UID: \"01f2f326-18ee-4ee2-823b-09ccf4cfefc1\") " pod="openstack/ironic-6dc6f6fb68-dx2nc" Jan 26 11:37:17 crc kubenswrapper[4867]: I0126 11:37:17.981836 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/01f2f326-18ee-4ee2-823b-09ccf4cfefc1-combined-ca-bundle\") pod \"ironic-6dc6f6fb68-dx2nc\" (UID: \"01f2f326-18ee-4ee2-823b-09ccf4cfefc1\") " pod="openstack/ironic-6dc6f6fb68-dx2nc" Jan 26 11:37:17 crc kubenswrapper[4867]: I0126 11:37:17.981961 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vfxtl\" (UniqueName: \"kubernetes.io/projected/01f2f326-18ee-4ee2-823b-09ccf4cfefc1-kube-api-access-vfxtl\") pod \"ironic-6dc6f6fb68-dx2nc\" (UID: \"01f2f326-18ee-4ee2-823b-09ccf4cfefc1\") " pod="openstack/ironic-6dc6f6fb68-dx2nc" Jan 26 11:37:17 crc kubenswrapper[4867]: I0126 11:37:17.982095 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-podinfo\" (UniqueName: \"kubernetes.io/downward-api/01f2f326-18ee-4ee2-823b-09ccf4cfefc1-etc-podinfo\") pod \"ironic-6dc6f6fb68-dx2nc\" (UID: \"01f2f326-18ee-4ee2-823b-09ccf4cfefc1\") " pod="openstack/ironic-6dc6f6fb68-dx2nc" Jan 26 11:37:17 crc kubenswrapper[4867]: I0126 11:37:17.982191 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/29a2ed9f-c519-444f-922f-4cebf5b3893e-operator-scripts\") pod \"ironic-inspector-6c39-account-create-update-thslq\" (UID: 
\"29a2ed9f-c519-444f-922f-4cebf5b3893e\") " pod="openstack/ironic-inspector-6c39-account-create-update-thslq" Jan 26 11:37:17 crc kubenswrapper[4867]: I0126 11:37:17.982196 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/01f2f326-18ee-4ee2-823b-09ccf4cfefc1-logs\") pod \"ironic-6dc6f6fb68-dx2nc\" (UID: \"01f2f326-18ee-4ee2-823b-09ccf4cfefc1\") " pod="openstack/ironic-6dc6f6fb68-dx2nc" Jan 26 11:37:17 crc kubenswrapper[4867]: I0126 11:37:17.982457 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/01f2f326-18ee-4ee2-823b-09ccf4cfefc1-scripts\") pod \"ironic-6dc6f6fb68-dx2nc\" (UID: \"01f2f326-18ee-4ee2-823b-09ccf4cfefc1\") " pod="openstack/ironic-6dc6f6fb68-dx2nc" Jan 26 11:37:17 crc kubenswrapper[4867]: I0126 11:37:17.982588 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/01f2f326-18ee-4ee2-823b-09ccf4cfefc1-config-data\") pod \"ironic-6dc6f6fb68-dx2nc\" (UID: \"01f2f326-18ee-4ee2-823b-09ccf4cfefc1\") " pod="openstack/ironic-6dc6f6fb68-dx2nc" Jan 26 11:37:18 crc kubenswrapper[4867]: I0126 11:37:18.002835 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l9l22\" (UniqueName: \"kubernetes.io/projected/29a2ed9f-c519-444f-922f-4cebf5b3893e-kube-api-access-l9l22\") pod \"ironic-inspector-6c39-account-create-update-thslq\" (UID: \"29a2ed9f-c519-444f-922f-4cebf5b3893e\") " pod="openstack/ironic-inspector-6c39-account-create-update-thslq" Jan 26 11:37:18 crc kubenswrapper[4867]: I0126 11:37:18.027287 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ironic-neutron-agent-795fb7c76b-9ndwh" Jan 26 11:37:18 crc kubenswrapper[4867]: I0126 11:37:18.084312 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/01f2f326-18ee-4ee2-823b-09ccf4cfefc1-scripts\") pod \"ironic-6dc6f6fb68-dx2nc\" (UID: \"01f2f326-18ee-4ee2-823b-09ccf4cfefc1\") " pod="openstack/ironic-6dc6f6fb68-dx2nc" Jan 26 11:37:18 crc kubenswrapper[4867]: I0126 11:37:18.084372 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/01f2f326-18ee-4ee2-823b-09ccf4cfefc1-config-data\") pod \"ironic-6dc6f6fb68-dx2nc\" (UID: \"01f2f326-18ee-4ee2-823b-09ccf4cfefc1\") " pod="openstack/ironic-6dc6f6fb68-dx2nc" Jan 26 11:37:18 crc kubenswrapper[4867]: I0126 11:37:18.084439 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/01f2f326-18ee-4ee2-823b-09ccf4cfefc1-config-data-custom\") pod \"ironic-6dc6f6fb68-dx2nc\" (UID: \"01f2f326-18ee-4ee2-823b-09ccf4cfefc1\") " pod="openstack/ironic-6dc6f6fb68-dx2nc" Jan 26 11:37:18 crc kubenswrapper[4867]: I0126 11:37:18.084505 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/01f2f326-18ee-4ee2-823b-09ccf4cfefc1-config-data-merged\") pod \"ironic-6dc6f6fb68-dx2nc\" (UID: \"01f2f326-18ee-4ee2-823b-09ccf4cfefc1\") " pod="openstack/ironic-6dc6f6fb68-dx2nc" Jan 26 11:37:18 crc kubenswrapper[4867]: I0126 11:37:18.084526 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/01f2f326-18ee-4ee2-823b-09ccf4cfefc1-combined-ca-bundle\") pod \"ironic-6dc6f6fb68-dx2nc\" (UID: \"01f2f326-18ee-4ee2-823b-09ccf4cfefc1\") " pod="openstack/ironic-6dc6f6fb68-dx2nc" Jan 26 11:37:18 crc 
kubenswrapper[4867]: I0126 11:37:18.084555 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vfxtl\" (UniqueName: \"kubernetes.io/projected/01f2f326-18ee-4ee2-823b-09ccf4cfefc1-kube-api-access-vfxtl\") pod \"ironic-6dc6f6fb68-dx2nc\" (UID: \"01f2f326-18ee-4ee2-823b-09ccf4cfefc1\") " pod="openstack/ironic-6dc6f6fb68-dx2nc" Jan 26 11:37:18 crc kubenswrapper[4867]: I0126 11:37:18.084580 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-podinfo\" (UniqueName: \"kubernetes.io/downward-api/01f2f326-18ee-4ee2-823b-09ccf4cfefc1-etc-podinfo\") pod \"ironic-6dc6f6fb68-dx2nc\" (UID: \"01f2f326-18ee-4ee2-823b-09ccf4cfefc1\") " pod="openstack/ironic-6dc6f6fb68-dx2nc" Jan 26 11:37:18 crc kubenswrapper[4867]: I0126 11:37:18.084593 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/01f2f326-18ee-4ee2-823b-09ccf4cfefc1-logs\") pod \"ironic-6dc6f6fb68-dx2nc\" (UID: \"01f2f326-18ee-4ee2-823b-09ccf4cfefc1\") " pod="openstack/ironic-6dc6f6fb68-dx2nc" Jan 26 11:37:18 crc kubenswrapper[4867]: I0126 11:37:18.084978 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/01f2f326-18ee-4ee2-823b-09ccf4cfefc1-logs\") pod \"ironic-6dc6f6fb68-dx2nc\" (UID: \"01f2f326-18ee-4ee2-823b-09ccf4cfefc1\") " pod="openstack/ironic-6dc6f6fb68-dx2nc" Jan 26 11:37:18 crc kubenswrapper[4867]: I0126 11:37:18.086046 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/01f2f326-18ee-4ee2-823b-09ccf4cfefc1-config-data-merged\") pod \"ironic-6dc6f6fb68-dx2nc\" (UID: \"01f2f326-18ee-4ee2-823b-09ccf4cfefc1\") " pod="openstack/ironic-6dc6f6fb68-dx2nc" Jan 26 11:37:18 crc kubenswrapper[4867]: I0126 11:37:18.102787 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" 
(UniqueName: \"kubernetes.io/secret/01f2f326-18ee-4ee2-823b-09ccf4cfefc1-combined-ca-bundle\") pod \"ironic-6dc6f6fb68-dx2nc\" (UID: \"01f2f326-18ee-4ee2-823b-09ccf4cfefc1\") " pod="openstack/ironic-6dc6f6fb68-dx2nc" Jan 26 11:37:18 crc kubenswrapper[4867]: I0126 11:37:18.103252 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-podinfo\" (UniqueName: \"kubernetes.io/downward-api/01f2f326-18ee-4ee2-823b-09ccf4cfefc1-etc-podinfo\") pod \"ironic-6dc6f6fb68-dx2nc\" (UID: \"01f2f326-18ee-4ee2-823b-09ccf4cfefc1\") " pod="openstack/ironic-6dc6f6fb68-dx2nc" Jan 26 11:37:18 crc kubenswrapper[4867]: I0126 11:37:18.109619 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ironic-inspector-6c39-account-create-update-thslq" Jan 26 11:37:18 crc kubenswrapper[4867]: I0126 11:37:18.111715 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/01f2f326-18ee-4ee2-823b-09ccf4cfefc1-config-data\") pod \"ironic-6dc6f6fb68-dx2nc\" (UID: \"01f2f326-18ee-4ee2-823b-09ccf4cfefc1\") " pod="openstack/ironic-6dc6f6fb68-dx2nc" Jan 26 11:37:18 crc kubenswrapper[4867]: I0126 11:37:18.112978 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/01f2f326-18ee-4ee2-823b-09ccf4cfefc1-config-data-custom\") pod \"ironic-6dc6f6fb68-dx2nc\" (UID: \"01f2f326-18ee-4ee2-823b-09ccf4cfefc1\") " pod="openstack/ironic-6dc6f6fb68-dx2nc" Jan 26 11:37:18 crc kubenswrapper[4867]: I0126 11:37:18.132987 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/01f2f326-18ee-4ee2-823b-09ccf4cfefc1-scripts\") pod \"ironic-6dc6f6fb68-dx2nc\" (UID: \"01f2f326-18ee-4ee2-823b-09ccf4cfefc1\") " pod="openstack/ironic-6dc6f6fb68-dx2nc" Jan 26 11:37:18 crc kubenswrapper[4867]: I0126 11:37:18.133842 4867 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-vfxtl\" (UniqueName: \"kubernetes.io/projected/01f2f326-18ee-4ee2-823b-09ccf4cfefc1-kube-api-access-vfxtl\") pod \"ironic-6dc6f6fb68-dx2nc\" (UID: \"01f2f326-18ee-4ee2-823b-09ccf4cfefc1\") " pod="openstack/ironic-6dc6f6fb68-dx2nc" Jan 26 11:37:18 crc kubenswrapper[4867]: I0126 11:37:18.152745 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ironic-6dc6f6fb68-dx2nc" Jan 26 11:37:18 crc kubenswrapper[4867]: I0126 11:37:18.211520 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"0dba3b09-195d-416a-b4af-7f252c8abd0d","Type":"ContainerStarted","Data":"7ee710d689f37d7e9a396c5d2a3e3fea8a2738ac2833a2a7260ab1ff491be791"} Jan 26 11:37:18 crc kubenswrapper[4867]: I0126 11:37:18.219823 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"edc2a642-41e4-4162-aa08-1cecd958b32c","Type":"ContainerStarted","Data":"fd746aa8feaf83c5272fe2cb0448ed3ff38f2bef6118c79888bcfa4f05b2f688"} Jan 26 11:37:18 crc kubenswrapper[4867]: W0126 11:37:18.617405 4867 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7baafbd6_fc39_426c_8869_460ad4ff235f.slice/crio-86527327c311b96bc240289908423df6376b3e19480399f2e43eac84e5ed95ee WatchSource:0}: Error finding container 86527327c311b96bc240289908423df6376b3e19480399f2e43eac84e5ed95ee: Status 404 returned error can't find the container with id 86527327c311b96bc240289908423df6376b3e19480399f2e43eac84e5ed95ee Jan 26 11:37:18 crc kubenswrapper[4867]: I0126 11:37:18.622706 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ironic-inspector-db-create-blwcj"] Jan 26 11:37:18 crc kubenswrapper[4867]: I0126 11:37:18.666097 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ironic-conductor-0"] Jan 26 11:37:18 crc kubenswrapper[4867]: I0126 11:37:18.670651 4867 util.go:30] "No 
sandbox for pod can be found. Need to start a new one" pod="openstack/ironic-conductor-0" Jan 26 11:37:18 crc kubenswrapper[4867]: I0126 11:37:18.702777 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ironic-conductor-0"] Jan 26 11:37:18 crc kubenswrapper[4867]: I0126 11:37:18.703109 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ironic-conductor-scripts" Jan 26 11:37:18 crc kubenswrapper[4867]: I0126 11:37:18.708728 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ironic-conductor-config-data" Jan 26 11:37:18 crc kubenswrapper[4867]: I0126 11:37:18.815090 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1a985fff-3d59-40fa-9cae-fd0f2cc9de70-config-data\") pod \"ironic-conductor-0\" (UID: \"1a985fff-3d59-40fa-9cae-fd0f2cc9de70\") " pod="openstack/ironic-conductor-0" Jan 26 11:37:18 crc kubenswrapper[4867]: I0126 11:37:18.815421 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1a985fff-3d59-40fa-9cae-fd0f2cc9de70-combined-ca-bundle\") pod \"ironic-conductor-0\" (UID: \"1a985fff-3d59-40fa-9cae-fd0f2cc9de70\") " pod="openstack/ironic-conductor-0" Jan 26 11:37:18 crc kubenswrapper[4867]: I0126 11:37:18.815465 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/1a985fff-3d59-40fa-9cae-fd0f2cc9de70-config-data-custom\") pod \"ironic-conductor-0\" (UID: \"1a985fff-3d59-40fa-9cae-fd0f2cc9de70\") " pod="openstack/ironic-conductor-0" Jan 26 11:37:18 crc kubenswrapper[4867]: I0126 11:37:18.815486 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-merged\" (UniqueName: 
\"kubernetes.io/empty-dir/1a985fff-3d59-40fa-9cae-fd0f2cc9de70-config-data-merged\") pod \"ironic-conductor-0\" (UID: \"1a985fff-3d59-40fa-9cae-fd0f2cc9de70\") " pod="openstack/ironic-conductor-0" Jan 26 11:37:18 crc kubenswrapper[4867]: I0126 11:37:18.815575 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1a985fff-3d59-40fa-9cae-fd0f2cc9de70-scripts\") pod \"ironic-conductor-0\" (UID: \"1a985fff-3d59-40fa-9cae-fd0f2cc9de70\") " pod="openstack/ironic-conductor-0" Jan 26 11:37:18 crc kubenswrapper[4867]: I0126 11:37:18.815726 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-podinfo\" (UniqueName: \"kubernetes.io/downward-api/1a985fff-3d59-40fa-9cae-fd0f2cc9de70-etc-podinfo\") pod \"ironic-conductor-0\" (UID: \"1a985fff-3d59-40fa-9cae-fd0f2cc9de70\") " pod="openstack/ironic-conductor-0" Jan 26 11:37:18 crc kubenswrapper[4867]: I0126 11:37:18.815764 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cmfjn\" (UniqueName: \"kubernetes.io/projected/1a985fff-3d59-40fa-9cae-fd0f2cc9de70-kube-api-access-cmfjn\") pod \"ironic-conductor-0\" (UID: \"1a985fff-3d59-40fa-9cae-fd0f2cc9de70\") " pod="openstack/ironic-conductor-0" Jan 26 11:37:18 crc kubenswrapper[4867]: I0126 11:37:18.815834 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"ironic-conductor-0\" (UID: \"1a985fff-3d59-40fa-9cae-fd0f2cc9de70\") " pod="openstack/ironic-conductor-0" Jan 26 11:37:18 crc kubenswrapper[4867]: I0126 11:37:18.917353 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"ironic-conductor-0\" (UID: 
\"1a985fff-3d59-40fa-9cae-fd0f2cc9de70\") " pod="openstack/ironic-conductor-0" Jan 26 11:37:18 crc kubenswrapper[4867]: I0126 11:37:18.917446 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1a985fff-3d59-40fa-9cae-fd0f2cc9de70-config-data\") pod \"ironic-conductor-0\" (UID: \"1a985fff-3d59-40fa-9cae-fd0f2cc9de70\") " pod="openstack/ironic-conductor-0" Jan 26 11:37:18 crc kubenswrapper[4867]: I0126 11:37:18.917492 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1a985fff-3d59-40fa-9cae-fd0f2cc9de70-combined-ca-bundle\") pod \"ironic-conductor-0\" (UID: \"1a985fff-3d59-40fa-9cae-fd0f2cc9de70\") " pod="openstack/ironic-conductor-0" Jan 26 11:37:18 crc kubenswrapper[4867]: I0126 11:37:18.917533 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/1a985fff-3d59-40fa-9cae-fd0f2cc9de70-config-data-custom\") pod \"ironic-conductor-0\" (UID: \"1a985fff-3d59-40fa-9cae-fd0f2cc9de70\") " pod="openstack/ironic-conductor-0" Jan 26 11:37:18 crc kubenswrapper[4867]: I0126 11:37:18.917570 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/1a985fff-3d59-40fa-9cae-fd0f2cc9de70-config-data-merged\") pod \"ironic-conductor-0\" (UID: \"1a985fff-3d59-40fa-9cae-fd0f2cc9de70\") " pod="openstack/ironic-conductor-0" Jan 26 11:37:18 crc kubenswrapper[4867]: I0126 11:37:18.917598 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1a985fff-3d59-40fa-9cae-fd0f2cc9de70-scripts\") pod \"ironic-conductor-0\" (UID: \"1a985fff-3d59-40fa-9cae-fd0f2cc9de70\") " pod="openstack/ironic-conductor-0" Jan 26 11:37:18 crc kubenswrapper[4867]: I0126 11:37:18.917657 4867 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-podinfo\" (UniqueName: \"kubernetes.io/downward-api/1a985fff-3d59-40fa-9cae-fd0f2cc9de70-etc-podinfo\") pod \"ironic-conductor-0\" (UID: \"1a985fff-3d59-40fa-9cae-fd0f2cc9de70\") " pod="openstack/ironic-conductor-0" Jan 26 11:37:18 crc kubenswrapper[4867]: I0126 11:37:18.917686 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cmfjn\" (UniqueName: \"kubernetes.io/projected/1a985fff-3d59-40fa-9cae-fd0f2cc9de70-kube-api-access-cmfjn\") pod \"ironic-conductor-0\" (UID: \"1a985fff-3d59-40fa-9cae-fd0f2cc9de70\") " pod="openstack/ironic-conductor-0" Jan 26 11:37:18 crc kubenswrapper[4867]: I0126 11:37:18.918647 4867 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"ironic-conductor-0\" (UID: \"1a985fff-3d59-40fa-9cae-fd0f2cc9de70\") device mount path \"/mnt/openstack/pv09\"" pod="openstack/ironic-conductor-0" Jan 26 11:37:18 crc kubenswrapper[4867]: I0126 11:37:18.918754 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/1a985fff-3d59-40fa-9cae-fd0f2cc9de70-config-data-merged\") pod \"ironic-conductor-0\" (UID: \"1a985fff-3d59-40fa-9cae-fd0f2cc9de70\") " pod="openstack/ironic-conductor-0" Jan 26 11:37:18 crc kubenswrapper[4867]: I0126 11:37:18.942952 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1a985fff-3d59-40fa-9cae-fd0f2cc9de70-combined-ca-bundle\") pod \"ironic-conductor-0\" (UID: \"1a985fff-3d59-40fa-9cae-fd0f2cc9de70\") " pod="openstack/ironic-conductor-0" Jan 26 11:37:18 crc kubenswrapper[4867]: I0126 11:37:18.942989 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: 
\"kubernetes.io/secret/1a985fff-3d59-40fa-9cae-fd0f2cc9de70-config-data-custom\") pod \"ironic-conductor-0\" (UID: \"1a985fff-3d59-40fa-9cae-fd0f2cc9de70\") " pod="openstack/ironic-conductor-0" Jan 26 11:37:18 crc kubenswrapper[4867]: I0126 11:37:18.943300 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-podinfo\" (UniqueName: \"kubernetes.io/downward-api/1a985fff-3d59-40fa-9cae-fd0f2cc9de70-etc-podinfo\") pod \"ironic-conductor-0\" (UID: \"1a985fff-3d59-40fa-9cae-fd0f2cc9de70\") " pod="openstack/ironic-conductor-0" Jan 26 11:37:18 crc kubenswrapper[4867]: I0126 11:37:18.943891 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1a985fff-3d59-40fa-9cae-fd0f2cc9de70-config-data\") pod \"ironic-conductor-0\" (UID: \"1a985fff-3d59-40fa-9cae-fd0f2cc9de70\") " pod="openstack/ironic-conductor-0" Jan 26 11:37:18 crc kubenswrapper[4867]: I0126 11:37:18.958656 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cmfjn\" (UniqueName: \"kubernetes.io/projected/1a985fff-3d59-40fa-9cae-fd0f2cc9de70-kube-api-access-cmfjn\") pod \"ironic-conductor-0\" (UID: \"1a985fff-3d59-40fa-9cae-fd0f2cc9de70\") " pod="openstack/ironic-conductor-0" Jan 26 11:37:18 crc kubenswrapper[4867]: I0126 11:37:18.963928 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1a985fff-3d59-40fa-9cae-fd0f2cc9de70-scripts\") pod \"ironic-conductor-0\" (UID: \"1a985fff-3d59-40fa-9cae-fd0f2cc9de70\") " pod="openstack/ironic-conductor-0" Jan 26 11:37:18 crc kubenswrapper[4867]: I0126 11:37:18.989951 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"ironic-conductor-0\" (UID: \"1a985fff-3d59-40fa-9cae-fd0f2cc9de70\") " pod="openstack/ironic-conductor-0" Jan 26 11:37:19 crc 
kubenswrapper[4867]: I0126 11:37:19.042069 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ironic-conductor-0" Jan 26 11:37:19 crc kubenswrapper[4867]: I0126 11:37:19.053623 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ironic-neutron-agent-795fb7c76b-9ndwh"] Jan 26 11:37:19 crc kubenswrapper[4867]: I0126 11:37:19.064094 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ironic-inspector-6c39-account-create-update-thslq"] Jan 26 11:37:19 crc kubenswrapper[4867]: I0126 11:37:19.133566 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ironic-6dc6f6fb68-dx2nc"] Jan 26 11:37:19 crc kubenswrapper[4867]: I0126 11:37:19.185648 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-fd45cdb8b-tgbqw" Jan 26 11:37:19 crc kubenswrapper[4867]: I0126 11:37:19.290982 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-inspector-db-create-blwcj" event={"ID":"7baafbd6-fc39-426c-8869-460ad4ff235f","Type":"ContainerStarted","Data":"6f0784e823858add447f521977cff127528ceb154c6b2e3482025571f19e0e8c"} Jan 26 11:37:19 crc kubenswrapper[4867]: I0126 11:37:19.291323 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-inspector-db-create-blwcj" event={"ID":"7baafbd6-fc39-426c-8869-460ad4ff235f","Type":"ContainerStarted","Data":"86527327c311b96bc240289908423df6376b3e19480399f2e43eac84e5ed95ee"} Jan 26 11:37:19 crc kubenswrapper[4867]: I0126 11:37:19.297717 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-6dc6f6fb68-dx2nc" event={"ID":"01f2f326-18ee-4ee2-823b-09ccf4cfefc1","Type":"ContainerStarted","Data":"79ec0e03da860e28c17203449a672f3b76242d5a8c9a15cbb4fafd5ce38be6ce"} Jan 26 11:37:19 crc kubenswrapper[4867]: I0126 11:37:19.311400 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-inspector-6c39-account-create-update-thslq" 
event={"ID":"29a2ed9f-c519-444f-922f-4cebf5b3893e","Type":"ContainerStarted","Data":"680d8a43bd7eb5b8f5fbc7d7320d77bd18befcebfb45bb4770dfbbb07a5c79c3"} Jan 26 11:37:19 crc kubenswrapper[4867]: I0126 11:37:19.317593 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ironic-inspector-db-create-blwcj" podStartSLOduration=2.31757391 podStartE2EDuration="2.31757391s" podCreationTimestamp="2026-01-26 11:37:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 11:37:19.312562496 +0000 UTC m=+1189.011137406" watchObservedRunningTime="2026-01-26 11:37:19.31757391 +0000 UTC m=+1189.016148820" Jan 26 11:37:19 crc kubenswrapper[4867]: I0126 11:37:19.324907 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-neutron-agent-795fb7c76b-9ndwh" event={"ID":"a2167905-2856-4125-81fd-a2430fe558f9","Type":"ContainerStarted","Data":"735629ff0404cb3127baf6562ffdc1d34de7363187574c9f2e1c71b3db08824d"} Jan 26 11:37:19 crc kubenswrapper[4867]: I0126 11:37:19.951445 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ironic-conductor-0"] Jan 26 11:37:20 crc kubenswrapper[4867]: I0126 11:37:20.336931 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"edc2a642-41e4-4162-aa08-1cecd958b32c","Type":"ContainerStarted","Data":"8201ba714254e8a26cfc2df5bb7a49512417f99d3cc37e33c15153214f82b6e1"} Jan 26 11:37:20 crc kubenswrapper[4867]: I0126 11:37:20.338992 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-conductor-0" event={"ID":"1a985fff-3d59-40fa-9cae-fd0f2cc9de70","Type":"ContainerStarted","Data":"a27c81a229ef484ca2b3f8dea5ddba9e1dc9b37491e0dde33c2367c1d31437b1"} Jan 26 11:37:20 crc kubenswrapper[4867]: I0126 11:37:20.352308 4867 generic.go:334] "Generic (PLEG): container finished" podID="29a2ed9f-c519-444f-922f-4cebf5b3893e" 
containerID="96193fd60d2ddebc9095a2f0963bc61584546f0b1c70d3dbd3877b31fa00842c" exitCode=0 Jan 26 11:37:20 crc kubenswrapper[4867]: I0126 11:37:20.352380 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-inspector-6c39-account-create-update-thslq" event={"ID":"29a2ed9f-c519-444f-922f-4cebf5b3893e","Type":"ContainerDied","Data":"96193fd60d2ddebc9095a2f0963bc61584546f0b1c70d3dbd3877b31fa00842c"} Jan 26 11:37:20 crc kubenswrapper[4867]: I0126 11:37:20.365609 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=5.7985555810000005 podStartE2EDuration="10.365587978s" podCreationTimestamp="2026-01-26 11:37:10 +0000 UTC" firstStartedPulling="2026-01-26 11:37:12.009037778 +0000 UTC m=+1181.707612688" lastFinishedPulling="2026-01-26 11:37:16.576070175 +0000 UTC m=+1186.274645085" observedRunningTime="2026-01-26 11:37:20.356344242 +0000 UTC m=+1190.054919172" watchObservedRunningTime="2026-01-26 11:37:20.365587978 +0000 UTC m=+1190.064162888" Jan 26 11:37:20 crc kubenswrapper[4867]: I0126 11:37:20.374857 4867 generic.go:334] "Generic (PLEG): container finished" podID="7baafbd6-fc39-426c-8869-460ad4ff235f" containerID="6f0784e823858add447f521977cff127528ceb154c6b2e3482025571f19e0e8c" exitCode=0 Jan 26 11:37:20 crc kubenswrapper[4867]: I0126 11:37:20.374925 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-inspector-db-create-blwcj" event={"ID":"7baafbd6-fc39-426c-8869-460ad4ff235f","Type":"ContainerDied","Data":"6f0784e823858add447f521977cff127528ceb154c6b2e3482025571f19e0e8c"} Jan 26 11:37:20 crc kubenswrapper[4867]: I0126 11:37:20.932514 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ironic-5f459cfdcb-t5qhs"] Jan 26 11:37:20 crc kubenswrapper[4867]: I0126 11:37:20.944256 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ironic-5f459cfdcb-t5qhs" Jan 26 11:37:20 crc kubenswrapper[4867]: I0126 11:37:20.957394 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ironic-internal-svc" Jan 26 11:37:20 crc kubenswrapper[4867]: I0126 11:37:20.957588 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ironic-public-svc" Jan 26 11:37:20 crc kubenswrapper[4867]: I0126 11:37:20.971206 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ironic-5f459cfdcb-t5qhs"] Jan 26 11:37:21 crc kubenswrapper[4867]: I0126 11:37:21.092444 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f114731c-0ed9-4d58-90f0-b670a856adf0-config-data\") pod \"ironic-5f459cfdcb-t5qhs\" (UID: \"f114731c-0ed9-4d58-90f0-b670a856adf0\") " pod="openstack/ironic-5f459cfdcb-t5qhs" Jan 26 11:37:21 crc kubenswrapper[4867]: I0126 11:37:21.092512 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f114731c-0ed9-4d58-90f0-b670a856adf0-combined-ca-bundle\") pod \"ironic-5f459cfdcb-t5qhs\" (UID: \"f114731c-0ed9-4d58-90f0-b670a856adf0\") " pod="openstack/ironic-5f459cfdcb-t5qhs" Jan 26 11:37:21 crc kubenswrapper[4867]: I0126 11:37:21.092540 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/f114731c-0ed9-4d58-90f0-b670a856adf0-config-data-merged\") pod \"ironic-5f459cfdcb-t5qhs\" (UID: \"f114731c-0ed9-4d58-90f0-b670a856adf0\") " pod="openstack/ironic-5f459cfdcb-t5qhs" Jan 26 11:37:21 crc kubenswrapper[4867]: I0126 11:37:21.092592 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/f114731c-0ed9-4d58-90f0-b670a856adf0-scripts\") pod \"ironic-5f459cfdcb-t5qhs\" (UID: \"f114731c-0ed9-4d58-90f0-b670a856adf0\") " pod="openstack/ironic-5f459cfdcb-t5qhs" Jan 26 11:37:21 crc kubenswrapper[4867]: I0126 11:37:21.092656 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-podinfo\" (UniqueName: \"kubernetes.io/downward-api/f114731c-0ed9-4d58-90f0-b670a856adf0-etc-podinfo\") pod \"ironic-5f459cfdcb-t5qhs\" (UID: \"f114731c-0ed9-4d58-90f0-b670a856adf0\") " pod="openstack/ironic-5f459cfdcb-t5qhs" Jan 26 11:37:21 crc kubenswrapper[4867]: I0126 11:37:21.092678 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f114731c-0ed9-4d58-90f0-b670a856adf0-config-data-custom\") pod \"ironic-5f459cfdcb-t5qhs\" (UID: \"f114731c-0ed9-4d58-90f0-b670a856adf0\") " pod="openstack/ironic-5f459cfdcb-t5qhs" Jan 26 11:37:21 crc kubenswrapper[4867]: I0126 11:37:21.092702 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s6mkm\" (UniqueName: \"kubernetes.io/projected/f114731c-0ed9-4d58-90f0-b670a856adf0-kube-api-access-s6mkm\") pod \"ironic-5f459cfdcb-t5qhs\" (UID: \"f114731c-0ed9-4d58-90f0-b670a856adf0\") " pod="openstack/ironic-5f459cfdcb-t5qhs" Jan 26 11:37:21 crc kubenswrapper[4867]: I0126 11:37:21.092730 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f114731c-0ed9-4d58-90f0-b670a856adf0-internal-tls-certs\") pod \"ironic-5f459cfdcb-t5qhs\" (UID: \"f114731c-0ed9-4d58-90f0-b670a856adf0\") " pod="openstack/ironic-5f459cfdcb-t5qhs" Jan 26 11:37:21 crc kubenswrapper[4867]: I0126 11:37:21.092753 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/f114731c-0ed9-4d58-90f0-b670a856adf0-logs\") pod \"ironic-5f459cfdcb-t5qhs\" (UID: \"f114731c-0ed9-4d58-90f0-b670a856adf0\") " pod="openstack/ironic-5f459cfdcb-t5qhs" Jan 26 11:37:21 crc kubenswrapper[4867]: I0126 11:37:21.092774 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f114731c-0ed9-4d58-90f0-b670a856adf0-public-tls-certs\") pod \"ironic-5f459cfdcb-t5qhs\" (UID: \"f114731c-0ed9-4d58-90f0-b670a856adf0\") " pod="openstack/ironic-5f459cfdcb-t5qhs" Jan 26 11:37:21 crc kubenswrapper[4867]: I0126 11:37:21.193905 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f114731c-0ed9-4d58-90f0-b670a856adf0-config-data\") pod \"ironic-5f459cfdcb-t5qhs\" (UID: \"f114731c-0ed9-4d58-90f0-b670a856adf0\") " pod="openstack/ironic-5f459cfdcb-t5qhs" Jan 26 11:37:21 crc kubenswrapper[4867]: I0126 11:37:21.194511 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f114731c-0ed9-4d58-90f0-b670a856adf0-combined-ca-bundle\") pod \"ironic-5f459cfdcb-t5qhs\" (UID: \"f114731c-0ed9-4d58-90f0-b670a856adf0\") " pod="openstack/ironic-5f459cfdcb-t5qhs" Jan 26 11:37:21 crc kubenswrapper[4867]: I0126 11:37:21.194604 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/f114731c-0ed9-4d58-90f0-b670a856adf0-config-data-merged\") pod \"ironic-5f459cfdcb-t5qhs\" (UID: \"f114731c-0ed9-4d58-90f0-b670a856adf0\") " pod="openstack/ironic-5f459cfdcb-t5qhs" Jan 26 11:37:21 crc kubenswrapper[4867]: I0126 11:37:21.194737 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f114731c-0ed9-4d58-90f0-b670a856adf0-scripts\") pod 
\"ironic-5f459cfdcb-t5qhs\" (UID: \"f114731c-0ed9-4d58-90f0-b670a856adf0\") " pod="openstack/ironic-5f459cfdcb-t5qhs" Jan 26 11:37:21 crc kubenswrapper[4867]: I0126 11:37:21.194879 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-podinfo\" (UniqueName: \"kubernetes.io/downward-api/f114731c-0ed9-4d58-90f0-b670a856adf0-etc-podinfo\") pod \"ironic-5f459cfdcb-t5qhs\" (UID: \"f114731c-0ed9-4d58-90f0-b670a856adf0\") " pod="openstack/ironic-5f459cfdcb-t5qhs" Jan 26 11:37:21 crc kubenswrapper[4867]: I0126 11:37:21.194964 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f114731c-0ed9-4d58-90f0-b670a856adf0-config-data-custom\") pod \"ironic-5f459cfdcb-t5qhs\" (UID: \"f114731c-0ed9-4d58-90f0-b670a856adf0\") " pod="openstack/ironic-5f459cfdcb-t5qhs" Jan 26 11:37:21 crc kubenswrapper[4867]: I0126 11:37:21.195050 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s6mkm\" (UniqueName: \"kubernetes.io/projected/f114731c-0ed9-4d58-90f0-b670a856adf0-kube-api-access-s6mkm\") pod \"ironic-5f459cfdcb-t5qhs\" (UID: \"f114731c-0ed9-4d58-90f0-b670a856adf0\") " pod="openstack/ironic-5f459cfdcb-t5qhs" Jan 26 11:37:21 crc kubenswrapper[4867]: I0126 11:37:21.195140 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f114731c-0ed9-4d58-90f0-b670a856adf0-internal-tls-certs\") pod \"ironic-5f459cfdcb-t5qhs\" (UID: \"f114731c-0ed9-4d58-90f0-b670a856adf0\") " pod="openstack/ironic-5f459cfdcb-t5qhs" Jan 26 11:37:21 crc kubenswrapper[4867]: I0126 11:37:21.195250 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f114731c-0ed9-4d58-90f0-b670a856adf0-logs\") pod \"ironic-5f459cfdcb-t5qhs\" (UID: \"f114731c-0ed9-4d58-90f0-b670a856adf0\") " 
pod="openstack/ironic-5f459cfdcb-t5qhs" Jan 26 11:37:21 crc kubenswrapper[4867]: I0126 11:37:21.195334 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f114731c-0ed9-4d58-90f0-b670a856adf0-public-tls-certs\") pod \"ironic-5f459cfdcb-t5qhs\" (UID: \"f114731c-0ed9-4d58-90f0-b670a856adf0\") " pod="openstack/ironic-5f459cfdcb-t5qhs" Jan 26 11:37:21 crc kubenswrapper[4867]: I0126 11:37:21.195051 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/f114731c-0ed9-4d58-90f0-b670a856adf0-config-data-merged\") pod \"ironic-5f459cfdcb-t5qhs\" (UID: \"f114731c-0ed9-4d58-90f0-b670a856adf0\") " pod="openstack/ironic-5f459cfdcb-t5qhs" Jan 26 11:37:21 crc kubenswrapper[4867]: I0126 11:37:21.195753 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f114731c-0ed9-4d58-90f0-b670a856adf0-logs\") pod \"ironic-5f459cfdcb-t5qhs\" (UID: \"f114731c-0ed9-4d58-90f0-b670a856adf0\") " pod="openstack/ironic-5f459cfdcb-t5qhs" Jan 26 11:37:21 crc kubenswrapper[4867]: I0126 11:37:21.200373 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-podinfo\" (UniqueName: \"kubernetes.io/downward-api/f114731c-0ed9-4d58-90f0-b670a856adf0-etc-podinfo\") pod \"ironic-5f459cfdcb-t5qhs\" (UID: \"f114731c-0ed9-4d58-90f0-b670a856adf0\") " pod="openstack/ironic-5f459cfdcb-t5qhs" Jan 26 11:37:21 crc kubenswrapper[4867]: I0126 11:37:21.201123 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f114731c-0ed9-4d58-90f0-b670a856adf0-scripts\") pod \"ironic-5f459cfdcb-t5qhs\" (UID: \"f114731c-0ed9-4d58-90f0-b670a856adf0\") " pod="openstack/ironic-5f459cfdcb-t5qhs" Jan 26 11:37:21 crc kubenswrapper[4867]: I0126 11:37:21.203918 4867 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f114731c-0ed9-4d58-90f0-b670a856adf0-config-data-custom\") pod \"ironic-5f459cfdcb-t5qhs\" (UID: \"f114731c-0ed9-4d58-90f0-b670a856adf0\") " pod="openstack/ironic-5f459cfdcb-t5qhs" Jan 26 11:37:21 crc kubenswrapper[4867]: I0126 11:37:21.203951 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f114731c-0ed9-4d58-90f0-b670a856adf0-public-tls-certs\") pod \"ironic-5f459cfdcb-t5qhs\" (UID: \"f114731c-0ed9-4d58-90f0-b670a856adf0\") " pod="openstack/ironic-5f459cfdcb-t5qhs" Jan 26 11:37:21 crc kubenswrapper[4867]: I0126 11:37:21.204307 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f114731c-0ed9-4d58-90f0-b670a856adf0-combined-ca-bundle\") pod \"ironic-5f459cfdcb-t5qhs\" (UID: \"f114731c-0ed9-4d58-90f0-b670a856adf0\") " pod="openstack/ironic-5f459cfdcb-t5qhs" Jan 26 11:37:21 crc kubenswrapper[4867]: I0126 11:37:21.204432 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f114731c-0ed9-4d58-90f0-b670a856adf0-config-data\") pod \"ironic-5f459cfdcb-t5qhs\" (UID: \"f114731c-0ed9-4d58-90f0-b670a856adf0\") " pod="openstack/ironic-5f459cfdcb-t5qhs" Jan 26 11:37:21 crc kubenswrapper[4867]: I0126 11:37:21.204475 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f114731c-0ed9-4d58-90f0-b670a856adf0-internal-tls-certs\") pod \"ironic-5f459cfdcb-t5qhs\" (UID: \"f114731c-0ed9-4d58-90f0-b670a856adf0\") " pod="openstack/ironic-5f459cfdcb-t5qhs" Jan 26 11:37:21 crc kubenswrapper[4867]: I0126 11:37:21.211469 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s6mkm\" (UniqueName: \"kubernetes.io/projected/f114731c-0ed9-4d58-90f0-b670a856adf0-kube-api-access-s6mkm\") 
pod \"ironic-5f459cfdcb-t5qhs\" (UID: \"f114731c-0ed9-4d58-90f0-b670a856adf0\") " pod="openstack/ironic-5f459cfdcb-t5qhs" Jan 26 11:37:21 crc kubenswrapper[4867]: I0126 11:37:21.286675 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Jan 26 11:37:21 crc kubenswrapper[4867]: I0126 11:37:21.314754 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ironic-5f459cfdcb-t5qhs" Jan 26 11:37:21 crc kubenswrapper[4867]: I0126 11:37:21.415468 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-6578955fd5-klbvt" Jan 26 11:37:21 crc kubenswrapper[4867]: I0126 11:37:21.424934 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-conductor-0" event={"ID":"1a985fff-3d59-40fa-9cae-fd0f2cc9de70","Type":"ContainerStarted","Data":"3f8f4812ac509be4ab8b6891fb5e018ec66ee0101f14b18b13a05044a8831db3"} Jan 26 11:37:21 crc kubenswrapper[4867]: I0126 11:37:21.525639 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-56df8fb6b7-m89kb"] Jan 26 11:37:21 crc kubenswrapper[4867]: I0126 11:37:21.529539 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-56df8fb6b7-m89kb" podUID="227ae5b6-e7d6-45ce-b333-3dd508d56b35" containerName="dnsmasq-dns" containerID="cri-o://29e32d6b11200c281e17114562769683762032efb9c74d667e3d5716b6829560" gracePeriod=10 Jan 26 11:37:22 crc kubenswrapper[4867]: I0126 11:37:22.114353 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ironic-5f459cfdcb-t5qhs"] Jan 26 11:37:22 crc kubenswrapper[4867]: I0126 11:37:22.448413 4867 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-56df8fb6b7-m89kb" podUID="227ae5b6-e7d6-45ce-b333-3dd508d56b35" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.150:5353: connect: connection refused" Jan 26 11:37:23 crc 
kubenswrapper[4867]: W0126 11:37:23.320250 4867 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf114731c_0ed9_4d58_90f0_b670a856adf0.slice/crio-2d276282bc61f3f4eac5b6091b372f86dc18e8d4fb9ed9eaa0de31d486880be5 WatchSource:0}: Error finding container 2d276282bc61f3f4eac5b6091b372f86dc18e8d4fb9ed9eaa0de31d486880be5: Status 404 returned error can't find the container with id 2d276282bc61f3f4eac5b6091b372f86dc18e8d4fb9ed9eaa0de31d486880be5 Jan 26 11:37:23 crc kubenswrapper[4867]: I0126 11:37:23.388825 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ironic-inspector-db-create-blwcj" Jan 26 11:37:23 crc kubenswrapper[4867]: I0126 11:37:23.449304 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-inspector-db-create-blwcj" event={"ID":"7baafbd6-fc39-426c-8869-460ad4ff235f","Type":"ContainerDied","Data":"86527327c311b96bc240289908423df6376b3e19480399f2e43eac84e5ed95ee"} Jan 26 11:37:23 crc kubenswrapper[4867]: I0126 11:37:23.449340 4867 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="86527327c311b96bc240289908423df6376b3e19480399f2e43eac84e5ed95ee" Jan 26 11:37:23 crc kubenswrapper[4867]: I0126 11:37:23.449407 4867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ironic-inspector-db-create-blwcj" Jan 26 11:37:23 crc kubenswrapper[4867]: I0126 11:37:23.467286 4867 generic.go:334] "Generic (PLEG): container finished" podID="227ae5b6-e7d6-45ce-b333-3dd508d56b35" containerID="29e32d6b11200c281e17114562769683762032efb9c74d667e3d5716b6829560" exitCode=0 Jan 26 11:37:23 crc kubenswrapper[4867]: I0126 11:37:23.467370 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-56df8fb6b7-m89kb" event={"ID":"227ae5b6-e7d6-45ce-b333-3dd508d56b35","Type":"ContainerDied","Data":"29e32d6b11200c281e17114562769683762032efb9c74d667e3d5716b6829560"} Jan 26 11:37:23 crc kubenswrapper[4867]: I0126 11:37:23.478951 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-5f459cfdcb-t5qhs" event={"ID":"f114731c-0ed9-4d58-90f0-b670a856adf0","Type":"ContainerStarted","Data":"2d276282bc61f3f4eac5b6091b372f86dc18e8d4fb9ed9eaa0de31d486880be5"} Jan 26 11:37:23 crc kubenswrapper[4867]: I0126 11:37:23.575935 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v5m5c\" (UniqueName: \"kubernetes.io/projected/7baafbd6-fc39-426c-8869-460ad4ff235f-kube-api-access-v5m5c\") pod \"7baafbd6-fc39-426c-8869-460ad4ff235f\" (UID: \"7baafbd6-fc39-426c-8869-460ad4ff235f\") " Jan 26 11:37:23 crc kubenswrapper[4867]: I0126 11:37:23.576045 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7baafbd6-fc39-426c-8869-460ad4ff235f-operator-scripts\") pod \"7baafbd6-fc39-426c-8869-460ad4ff235f\" (UID: \"7baafbd6-fc39-426c-8869-460ad4ff235f\") " Jan 26 11:37:23 crc kubenswrapper[4867]: I0126 11:37:23.579158 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7baafbd6-fc39-426c-8869-460ad4ff235f-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "7baafbd6-fc39-426c-8869-460ad4ff235f" (UID: 
"7baafbd6-fc39-426c-8869-460ad4ff235f"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 11:37:23 crc kubenswrapper[4867]: I0126 11:37:23.585468 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7baafbd6-fc39-426c-8869-460ad4ff235f-kube-api-access-v5m5c" (OuterVolumeSpecName: "kube-api-access-v5m5c") pod "7baafbd6-fc39-426c-8869-460ad4ff235f" (UID: "7baafbd6-fc39-426c-8869-460ad4ff235f"). InnerVolumeSpecName "kube-api-access-v5m5c". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 11:37:23 crc kubenswrapper[4867]: I0126 11:37:23.679335 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v5m5c\" (UniqueName: \"kubernetes.io/projected/7baafbd6-fc39-426c-8869-460ad4ff235f-kube-api-access-v5m5c\") on node \"crc\" DevicePath \"\"" Jan 26 11:37:23 crc kubenswrapper[4867]: I0126 11:37:23.679609 4867 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7baafbd6-fc39-426c-8869-460ad4ff235f-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 11:37:23 crc kubenswrapper[4867]: I0126 11:37:23.702763 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cinder-api-0" Jan 26 11:37:23 crc kubenswrapper[4867]: I0126 11:37:23.766849 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-7f76cb8bb6-g4zck" Jan 26 11:37:23 crc kubenswrapper[4867]: I0126 11:37:23.954535 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-7f76cb8bb6-g4zck" Jan 26 11:37:24 crc kubenswrapper[4867]: I0126 11:37:24.060729 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-fd45cdb8b-tgbqw"] Jan 26 11:37:24 crc kubenswrapper[4867]: I0126 11:37:24.060936 4867 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openstack/barbican-api-fd45cdb8b-tgbqw" podUID="8434703b-0a5f-49f0-8877-2048d276f8ff" containerName="barbican-api-log" containerID="cri-o://27d71e31cf8c65c6dbc4a31b200b8d165237f01a1f12d8b6878de8c1143c58a3" gracePeriod=30 Jan 26 11:37:24 crc kubenswrapper[4867]: I0126 11:37:24.061357 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-fd45cdb8b-tgbqw" podUID="8434703b-0a5f-49f0-8877-2048d276f8ff" containerName="barbican-api" containerID="cri-o://ce96f9c15a75e5b8d42b5d0560717b6c9a212fd14e65bc84d4d8478bdfaca849" gracePeriod=30 Jan 26 11:37:24 crc kubenswrapper[4867]: I0126 11:37:24.493508 4867 generic.go:334] "Generic (PLEG): container finished" podID="8434703b-0a5f-49f0-8877-2048d276f8ff" containerID="27d71e31cf8c65c6dbc4a31b200b8d165237f01a1f12d8b6878de8c1143c58a3" exitCode=143 Jan 26 11:37:24 crc kubenswrapper[4867]: I0126 11:37:24.493636 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-fd45cdb8b-tgbqw" event={"ID":"8434703b-0a5f-49f0-8877-2048d276f8ff","Type":"ContainerDied","Data":"27d71e31cf8c65c6dbc4a31b200b8d165237f01a1f12d8b6878de8c1143c58a3"} Jan 26 11:37:26 crc kubenswrapper[4867]: I0126 11:37:26.028123 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-proxy-5668f68b6c-7674j"] Jan 26 11:37:26 crc kubenswrapper[4867]: E0126 11:37:26.028924 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7baafbd6-fc39-426c-8869-460ad4ff235f" containerName="mariadb-database-create" Jan 26 11:37:26 crc kubenswrapper[4867]: I0126 11:37:26.028940 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="7baafbd6-fc39-426c-8869-460ad4ff235f" containerName="mariadb-database-create" Jan 26 11:37:26 crc kubenswrapper[4867]: I0126 11:37:26.029123 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="7baafbd6-fc39-426c-8869-460ad4ff235f" containerName="mariadb-database-create" Jan 26 11:37:26 crc kubenswrapper[4867]: I0126 
11:37:26.030197 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-proxy-5668f68b6c-7674j" Jan 26 11:37:26 crc kubenswrapper[4867]: I0126 11:37:26.033178 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-internal-svc" Jan 26 11:37:26 crc kubenswrapper[4867]: I0126 11:37:26.033289 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data" Jan 26 11:37:26 crc kubenswrapper[4867]: I0126 11:37:26.033295 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-public-svc" Jan 26 11:37:26 crc kubenswrapper[4867]: I0126 11:37:26.055013 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-5668f68b6c-7674j"] Jan 26 11:37:26 crc kubenswrapper[4867]: I0126 11:37:26.126637 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8249n\" (UniqueName: \"kubernetes.io/projected/39829bfc-df9a-4123-a069-f99e3032615d-kube-api-access-8249n\") pod \"swift-proxy-5668f68b6c-7674j\" (UID: \"39829bfc-df9a-4123-a069-f99e3032615d\") " pod="openstack/swift-proxy-5668f68b6c-7674j" Jan 26 11:37:26 crc kubenswrapper[4867]: I0126 11:37:26.126745 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/39829bfc-df9a-4123-a069-f99e3032615d-run-httpd\") pod \"swift-proxy-5668f68b6c-7674j\" (UID: \"39829bfc-df9a-4123-a069-f99e3032615d\") " pod="openstack/swift-proxy-5668f68b6c-7674j" Jan 26 11:37:26 crc kubenswrapper[4867]: I0126 11:37:26.126833 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/39829bfc-df9a-4123-a069-f99e3032615d-config-data\") pod \"swift-proxy-5668f68b6c-7674j\" (UID: \"39829bfc-df9a-4123-a069-f99e3032615d\") " 
pod="openstack/swift-proxy-5668f68b6c-7674j" Jan 26 11:37:26 crc kubenswrapper[4867]: I0126 11:37:26.126858 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/39829bfc-df9a-4123-a069-f99e3032615d-etc-swift\") pod \"swift-proxy-5668f68b6c-7674j\" (UID: \"39829bfc-df9a-4123-a069-f99e3032615d\") " pod="openstack/swift-proxy-5668f68b6c-7674j" Jan 26 11:37:26 crc kubenswrapper[4867]: I0126 11:37:26.126883 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/39829bfc-df9a-4123-a069-f99e3032615d-log-httpd\") pod \"swift-proxy-5668f68b6c-7674j\" (UID: \"39829bfc-df9a-4123-a069-f99e3032615d\") " pod="openstack/swift-proxy-5668f68b6c-7674j" Jan 26 11:37:26 crc kubenswrapper[4867]: I0126 11:37:26.126904 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/39829bfc-df9a-4123-a069-f99e3032615d-public-tls-certs\") pod \"swift-proxy-5668f68b6c-7674j\" (UID: \"39829bfc-df9a-4123-a069-f99e3032615d\") " pod="openstack/swift-proxy-5668f68b6c-7674j" Jan 26 11:37:26 crc kubenswrapper[4867]: I0126 11:37:26.127028 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/39829bfc-df9a-4123-a069-f99e3032615d-combined-ca-bundle\") pod \"swift-proxy-5668f68b6c-7674j\" (UID: \"39829bfc-df9a-4123-a069-f99e3032615d\") " pod="openstack/swift-proxy-5668f68b6c-7674j" Jan 26 11:37:26 crc kubenswrapper[4867]: I0126 11:37:26.127090 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/39829bfc-df9a-4123-a069-f99e3032615d-internal-tls-certs\") pod \"swift-proxy-5668f68b6c-7674j\" (UID: 
\"39829bfc-df9a-4123-a069-f99e3032615d\") " pod="openstack/swift-proxy-5668f68b6c-7674j" Jan 26 11:37:26 crc kubenswrapper[4867]: I0126 11:37:26.228370 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8249n\" (UniqueName: \"kubernetes.io/projected/39829bfc-df9a-4123-a069-f99e3032615d-kube-api-access-8249n\") pod \"swift-proxy-5668f68b6c-7674j\" (UID: \"39829bfc-df9a-4123-a069-f99e3032615d\") " pod="openstack/swift-proxy-5668f68b6c-7674j" Jan 26 11:37:26 crc kubenswrapper[4867]: I0126 11:37:26.228463 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/39829bfc-df9a-4123-a069-f99e3032615d-run-httpd\") pod \"swift-proxy-5668f68b6c-7674j\" (UID: \"39829bfc-df9a-4123-a069-f99e3032615d\") " pod="openstack/swift-proxy-5668f68b6c-7674j" Jan 26 11:37:26 crc kubenswrapper[4867]: I0126 11:37:26.228504 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/39829bfc-df9a-4123-a069-f99e3032615d-config-data\") pod \"swift-proxy-5668f68b6c-7674j\" (UID: \"39829bfc-df9a-4123-a069-f99e3032615d\") " pod="openstack/swift-proxy-5668f68b6c-7674j" Jan 26 11:37:26 crc kubenswrapper[4867]: I0126 11:37:26.228520 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/39829bfc-df9a-4123-a069-f99e3032615d-etc-swift\") pod \"swift-proxy-5668f68b6c-7674j\" (UID: \"39829bfc-df9a-4123-a069-f99e3032615d\") " pod="openstack/swift-proxy-5668f68b6c-7674j" Jan 26 11:37:26 crc kubenswrapper[4867]: I0126 11:37:26.228538 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/39829bfc-df9a-4123-a069-f99e3032615d-log-httpd\") pod \"swift-proxy-5668f68b6c-7674j\" (UID: \"39829bfc-df9a-4123-a069-f99e3032615d\") " 
pod="openstack/swift-proxy-5668f68b6c-7674j" Jan 26 11:37:26 crc kubenswrapper[4867]: I0126 11:37:26.228559 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/39829bfc-df9a-4123-a069-f99e3032615d-public-tls-certs\") pod \"swift-proxy-5668f68b6c-7674j\" (UID: \"39829bfc-df9a-4123-a069-f99e3032615d\") " pod="openstack/swift-proxy-5668f68b6c-7674j" Jan 26 11:37:26 crc kubenswrapper[4867]: I0126 11:37:26.228594 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/39829bfc-df9a-4123-a069-f99e3032615d-combined-ca-bundle\") pod \"swift-proxy-5668f68b6c-7674j\" (UID: \"39829bfc-df9a-4123-a069-f99e3032615d\") " pod="openstack/swift-proxy-5668f68b6c-7674j" Jan 26 11:37:26 crc kubenswrapper[4867]: I0126 11:37:26.228623 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/39829bfc-df9a-4123-a069-f99e3032615d-internal-tls-certs\") pod \"swift-proxy-5668f68b6c-7674j\" (UID: \"39829bfc-df9a-4123-a069-f99e3032615d\") " pod="openstack/swift-proxy-5668f68b6c-7674j" Jan 26 11:37:26 crc kubenswrapper[4867]: I0126 11:37:26.229303 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/39829bfc-df9a-4123-a069-f99e3032615d-log-httpd\") pod \"swift-proxy-5668f68b6c-7674j\" (UID: \"39829bfc-df9a-4123-a069-f99e3032615d\") " pod="openstack/swift-proxy-5668f68b6c-7674j" Jan 26 11:37:26 crc kubenswrapper[4867]: I0126 11:37:26.235059 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/39829bfc-df9a-4123-a069-f99e3032615d-run-httpd\") pod \"swift-proxy-5668f68b6c-7674j\" (UID: \"39829bfc-df9a-4123-a069-f99e3032615d\") " pod="openstack/swift-proxy-5668f68b6c-7674j" Jan 26 11:37:26 crc 
kubenswrapper[4867]: I0126 11:37:26.237302 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/39829bfc-df9a-4123-a069-f99e3032615d-config-data\") pod \"swift-proxy-5668f68b6c-7674j\" (UID: \"39829bfc-df9a-4123-a069-f99e3032615d\") " pod="openstack/swift-proxy-5668f68b6c-7674j" Jan 26 11:37:26 crc kubenswrapper[4867]: I0126 11:37:26.238004 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/39829bfc-df9a-4123-a069-f99e3032615d-internal-tls-certs\") pod \"swift-proxy-5668f68b6c-7674j\" (UID: \"39829bfc-df9a-4123-a069-f99e3032615d\") " pod="openstack/swift-proxy-5668f68b6c-7674j" Jan 26 11:37:26 crc kubenswrapper[4867]: I0126 11:37:26.240297 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/39829bfc-df9a-4123-a069-f99e3032615d-combined-ca-bundle\") pod \"swift-proxy-5668f68b6c-7674j\" (UID: \"39829bfc-df9a-4123-a069-f99e3032615d\") " pod="openstack/swift-proxy-5668f68b6c-7674j" Jan 26 11:37:26 crc kubenswrapper[4867]: I0126 11:37:26.243304 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/39829bfc-df9a-4123-a069-f99e3032615d-public-tls-certs\") pod \"swift-proxy-5668f68b6c-7674j\" (UID: \"39829bfc-df9a-4123-a069-f99e3032615d\") " pod="openstack/swift-proxy-5668f68b6c-7674j" Jan 26 11:37:26 crc kubenswrapper[4867]: I0126 11:37:26.247558 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8249n\" (UniqueName: \"kubernetes.io/projected/39829bfc-df9a-4123-a069-f99e3032615d-kube-api-access-8249n\") pod \"swift-proxy-5668f68b6c-7674j\" (UID: \"39829bfc-df9a-4123-a069-f99e3032615d\") " pod="openstack/swift-proxy-5668f68b6c-7674j" Jan 26 11:37:26 crc kubenswrapper[4867]: I0126 11:37:26.251285 4867 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/39829bfc-df9a-4123-a069-f99e3032615d-etc-swift\") pod \"swift-proxy-5668f68b6c-7674j\" (UID: \"39829bfc-df9a-4123-a069-f99e3032615d\") " pod="openstack/swift-proxy-5668f68b6c-7674j" Jan 26 11:37:26 crc kubenswrapper[4867]: I0126 11:37:26.351365 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-proxy-5668f68b6c-7674j" Jan 26 11:37:26 crc kubenswrapper[4867]: I0126 11:37:26.514459 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Jan 26 11:37:26 crc kubenswrapper[4867]: I0126 11:37:26.551054 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 26 11:37:27 crc kubenswrapper[4867]: I0126 11:37:27.005156 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 26 11:37:27 crc kubenswrapper[4867]: I0126 11:37:27.005439 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="b2643e95-59cb-42a2-982e-96a7d732e5e4" containerName="ceilometer-central-agent" containerID="cri-o://3176feeb6409c9c175520fd5f96b008015c911ba0fb09f7821c6dc2f0fc7ca48" gracePeriod=30 Jan 26 11:37:27 crc kubenswrapper[4867]: I0126 11:37:27.005546 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="b2643e95-59cb-42a2-982e-96a7d732e5e4" containerName="proxy-httpd" containerID="cri-o://4e6b9eacbbc4c8c4c89d557b0b32bc6cd5b66fe33a8f7cb7cc1d83ff1d513941" gracePeriod=30 Jan 26 11:37:27 crc kubenswrapper[4867]: I0126 11:37:27.005585 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="b2643e95-59cb-42a2-982e-96a7d732e5e4" containerName="sg-core" containerID="cri-o://dd823c865671eed3b9056413ff43d7b65162230282ac78a467be7e9cfae7dccb" gracePeriod=30 Jan 26 11:37:27 crc 
kubenswrapper[4867]: I0126 11:37:27.005721 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="b2643e95-59cb-42a2-982e-96a7d732e5e4" containerName="ceilometer-notification-agent" containerID="cri-o://f0c7e3e53676b600a0f8387db3e43d43aaf869729a99557deb943b08f2fdfd33" gracePeriod=30 Jan 26 11:37:27 crc kubenswrapper[4867]: I0126 11:37:27.017859 4867 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="b2643e95-59cb-42a2-982e-96a7d732e5e4" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 502" Jan 26 11:37:27 crc kubenswrapper[4867]: I0126 11:37:27.218177 4867 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-fd45cdb8b-tgbqw" podUID="8434703b-0a5f-49f0-8877-2048d276f8ff" containerName="barbican-api" probeResult="failure" output="Get \"http://10.217.0.159:9311/healthcheck\": read tcp 10.217.0.2:37742->10.217.0.159:9311: read: connection reset by peer" Jan 26 11:37:27 crc kubenswrapper[4867]: I0126 11:37:27.218584 4867 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-fd45cdb8b-tgbqw" podUID="8434703b-0a5f-49f0-8877-2048d276f8ff" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.159:9311/healthcheck\": read tcp 10.217.0.2:37738->10.217.0.159:9311: read: connection reset by peer" Jan 26 11:37:27 crc kubenswrapper[4867]: I0126 11:37:27.516393 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="edc2a642-41e4-4162-aa08-1cecd958b32c" containerName="cinder-scheduler" containerID="cri-o://fd746aa8feaf83c5272fe2cb0448ed3ff38f2bef6118c79888bcfa4f05b2f688" gracePeriod=30 Jan 26 11:37:27 crc kubenswrapper[4867]: I0126 11:37:27.516406 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="edc2a642-41e4-4162-aa08-1cecd958b32c" 
containerName="probe" containerID="cri-o://8201ba714254e8a26cfc2df5bb7a49512417f99d3cc37e33c15153214f82b6e1" gracePeriod=30 Jan 26 11:37:28 crc kubenswrapper[4867]: I0126 11:37:28.101854 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ironic-inspector-6c39-account-create-update-thslq" Jan 26 11:37:28 crc kubenswrapper[4867]: I0126 11:37:28.142720 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-56df8fb6b7-m89kb" Jan 26 11:37:28 crc kubenswrapper[4867]: I0126 11:37:28.181333 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/227ae5b6-e7d6-45ce-b333-3dd508d56b35-ovsdbserver-sb\") pod \"227ae5b6-e7d6-45ce-b333-3dd508d56b35\" (UID: \"227ae5b6-e7d6-45ce-b333-3dd508d56b35\") " Jan 26 11:37:28 crc kubenswrapper[4867]: I0126 11:37:28.181426 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/227ae5b6-e7d6-45ce-b333-3dd508d56b35-dns-svc\") pod \"227ae5b6-e7d6-45ce-b333-3dd508d56b35\" (UID: \"227ae5b6-e7d6-45ce-b333-3dd508d56b35\") " Jan 26 11:37:28 crc kubenswrapper[4867]: I0126 11:37:28.181593 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l9l22\" (UniqueName: \"kubernetes.io/projected/29a2ed9f-c519-444f-922f-4cebf5b3893e-kube-api-access-l9l22\") pod \"29a2ed9f-c519-444f-922f-4cebf5b3893e\" (UID: \"29a2ed9f-c519-444f-922f-4cebf5b3893e\") " Jan 26 11:37:28 crc kubenswrapper[4867]: I0126 11:37:28.181683 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-69mrq\" (UniqueName: \"kubernetes.io/projected/227ae5b6-e7d6-45ce-b333-3dd508d56b35-kube-api-access-69mrq\") pod \"227ae5b6-e7d6-45ce-b333-3dd508d56b35\" (UID: \"227ae5b6-e7d6-45ce-b333-3dd508d56b35\") " Jan 26 11:37:28 crc kubenswrapper[4867]: 
I0126 11:37:28.181751 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/227ae5b6-e7d6-45ce-b333-3dd508d56b35-dns-swift-storage-0\") pod \"227ae5b6-e7d6-45ce-b333-3dd508d56b35\" (UID: \"227ae5b6-e7d6-45ce-b333-3dd508d56b35\") " Jan 26 11:37:28 crc kubenswrapper[4867]: I0126 11:37:28.181798 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/227ae5b6-e7d6-45ce-b333-3dd508d56b35-config\") pod \"227ae5b6-e7d6-45ce-b333-3dd508d56b35\" (UID: \"227ae5b6-e7d6-45ce-b333-3dd508d56b35\") " Jan 26 11:37:28 crc kubenswrapper[4867]: I0126 11:37:28.181842 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/29a2ed9f-c519-444f-922f-4cebf5b3893e-operator-scripts\") pod \"29a2ed9f-c519-444f-922f-4cebf5b3893e\" (UID: \"29a2ed9f-c519-444f-922f-4cebf5b3893e\") " Jan 26 11:37:28 crc kubenswrapper[4867]: I0126 11:37:28.181867 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/227ae5b6-e7d6-45ce-b333-3dd508d56b35-ovsdbserver-nb\") pod \"227ae5b6-e7d6-45ce-b333-3dd508d56b35\" (UID: \"227ae5b6-e7d6-45ce-b333-3dd508d56b35\") " Jan 26 11:37:28 crc kubenswrapper[4867]: I0126 11:37:28.184917 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/29a2ed9f-c519-444f-922f-4cebf5b3893e-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "29a2ed9f-c519-444f-922f-4cebf5b3893e" (UID: "29a2ed9f-c519-444f-922f-4cebf5b3893e"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 26 11:37:28 crc kubenswrapper[4867]: I0126 11:37:28.200537 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/227ae5b6-e7d6-45ce-b333-3dd508d56b35-kube-api-access-69mrq" (OuterVolumeSpecName: "kube-api-access-69mrq") pod "227ae5b6-e7d6-45ce-b333-3dd508d56b35" (UID: "227ae5b6-e7d6-45ce-b333-3dd508d56b35"). InnerVolumeSpecName "kube-api-access-69mrq". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 26 11:37:28 crc kubenswrapper[4867]: I0126 11:37:28.204426 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/29a2ed9f-c519-444f-922f-4cebf5b3893e-kube-api-access-l9l22" (OuterVolumeSpecName: "kube-api-access-l9l22") pod "29a2ed9f-c519-444f-922f-4cebf5b3893e" (UID: "29a2ed9f-c519-444f-922f-4cebf5b3893e"). InnerVolumeSpecName "kube-api-access-l9l22". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 26 11:37:28 crc kubenswrapper[4867]: I0126 11:37:28.253050 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/227ae5b6-e7d6-45ce-b333-3dd508d56b35-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "227ae5b6-e7d6-45ce-b333-3dd508d56b35" (UID: "227ae5b6-e7d6-45ce-b333-3dd508d56b35"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 26 11:37:28 crc kubenswrapper[4867]: I0126 11:37:28.279974 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/227ae5b6-e7d6-45ce-b333-3dd508d56b35-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "227ae5b6-e7d6-45ce-b333-3dd508d56b35" (UID: "227ae5b6-e7d6-45ce-b333-3dd508d56b35"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 26 11:37:28 crc kubenswrapper[4867]: I0126 11:37:28.289300 4867 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/29a2ed9f-c519-444f-922f-4cebf5b3893e-operator-scripts\") on node \"crc\" DevicePath \"\""
Jan 26 11:37:28 crc kubenswrapper[4867]: I0126 11:37:28.289333 4867 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/227ae5b6-e7d6-45ce-b333-3dd508d56b35-dns-svc\") on node \"crc\" DevicePath \"\""
Jan 26 11:37:28 crc kubenswrapper[4867]: I0126 11:37:28.289343 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l9l22\" (UniqueName: \"kubernetes.io/projected/29a2ed9f-c519-444f-922f-4cebf5b3893e-kube-api-access-l9l22\") on node \"crc\" DevicePath \"\""
Jan 26 11:37:28 crc kubenswrapper[4867]: I0126 11:37:28.289355 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-69mrq\" (UniqueName: \"kubernetes.io/projected/227ae5b6-e7d6-45ce-b333-3dd508d56b35-kube-api-access-69mrq\") on node \"crc\" DevicePath \"\""
Jan 26 11:37:28 crc kubenswrapper[4867]: I0126 11:37:28.289364 4867 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/227ae5b6-e7d6-45ce-b333-3dd508d56b35-dns-swift-storage-0\") on node \"crc\" DevicePath \"\""
Jan 26 11:37:28 crc kubenswrapper[4867]: I0126 11:37:28.294370 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/227ae5b6-e7d6-45ce-b333-3dd508d56b35-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "227ae5b6-e7d6-45ce-b333-3dd508d56b35" (UID: "227ae5b6-e7d6-45ce-b333-3dd508d56b35"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 26 11:37:28 crc kubenswrapper[4867]: I0126 11:37:28.299584 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/227ae5b6-e7d6-45ce-b333-3dd508d56b35-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "227ae5b6-e7d6-45ce-b333-3dd508d56b35" (UID: "227ae5b6-e7d6-45ce-b333-3dd508d56b35"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 26 11:37:28 crc kubenswrapper[4867]: I0126 11:37:28.345062 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/227ae5b6-e7d6-45ce-b333-3dd508d56b35-config" (OuterVolumeSpecName: "config") pod "227ae5b6-e7d6-45ce-b333-3dd508d56b35" (UID: "227ae5b6-e7d6-45ce-b333-3dd508d56b35"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 26 11:37:28 crc kubenswrapper[4867]: I0126 11:37:28.390774 4867 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/227ae5b6-e7d6-45ce-b333-3dd508d56b35-config\") on node \"crc\" DevicePath \"\""
Jan 26 11:37:28 crc kubenswrapper[4867]: I0126 11:37:28.390814 4867 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/227ae5b6-e7d6-45ce-b333-3dd508d56b35-ovsdbserver-nb\") on node \"crc\" DevicePath \"\""
Jan 26 11:37:28 crc kubenswrapper[4867]: I0126 11:37:28.390824 4867 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/227ae5b6-e7d6-45ce-b333-3dd508d56b35-ovsdbserver-sb\") on node \"crc\" DevicePath \"\""
Jan 26 11:37:28 crc kubenswrapper[4867]: I0126 11:37:28.527638 4867 generic.go:334] "Generic (PLEG): container finished" podID="1a985fff-3d59-40fa-9cae-fd0f2cc9de70" containerID="3f8f4812ac509be4ab8b6891fb5e018ec66ee0101f14b18b13a05044a8831db3" exitCode=0
Jan 26 11:37:28 crc kubenswrapper[4867]: I0126 11:37:28.527738 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-conductor-0" event={"ID":"1a985fff-3d59-40fa-9cae-fd0f2cc9de70","Type":"ContainerDied","Data":"3f8f4812ac509be4ab8b6891fb5e018ec66ee0101f14b18b13a05044a8831db3"}
Jan 26 11:37:28 crc kubenswrapper[4867]: I0126 11:37:28.533303 4867 generic.go:334] "Generic (PLEG): container finished" podID="b2643e95-59cb-42a2-982e-96a7d732e5e4" containerID="4e6b9eacbbc4c8c4c89d557b0b32bc6cd5b66fe33a8f7cb7cc1d83ff1d513941" exitCode=0
Jan 26 11:37:28 crc kubenswrapper[4867]: I0126 11:37:28.533327 4867 generic.go:334] "Generic (PLEG): container finished" podID="b2643e95-59cb-42a2-982e-96a7d732e5e4" containerID="dd823c865671eed3b9056413ff43d7b65162230282ac78a467be7e9cfae7dccb" exitCode=2
Jan 26 11:37:28 crc kubenswrapper[4867]: I0126 11:37:28.533334 4867 generic.go:334] "Generic (PLEG): container finished" podID="b2643e95-59cb-42a2-982e-96a7d732e5e4" containerID="3176feeb6409c9c175520fd5f96b008015c911ba0fb09f7821c6dc2f0fc7ca48" exitCode=0
Jan 26 11:37:28 crc kubenswrapper[4867]: I0126 11:37:28.533374 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b2643e95-59cb-42a2-982e-96a7d732e5e4","Type":"ContainerDied","Data":"4e6b9eacbbc4c8c4c89d557b0b32bc6cd5b66fe33a8f7cb7cc1d83ff1d513941"}
Jan 26 11:37:28 crc kubenswrapper[4867]: I0126 11:37:28.533398 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b2643e95-59cb-42a2-982e-96a7d732e5e4","Type":"ContainerDied","Data":"dd823c865671eed3b9056413ff43d7b65162230282ac78a467be7e9cfae7dccb"}
Jan 26 11:37:28 crc kubenswrapper[4867]: I0126 11:37:28.533408 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b2643e95-59cb-42a2-982e-96a7d732e5e4","Type":"ContainerDied","Data":"3176feeb6409c9c175520fd5f96b008015c911ba0fb09f7821c6dc2f0fc7ca48"}
Jan 26 11:37:28 crc kubenswrapper[4867]: I0126 11:37:28.537454 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-inspector-6c39-account-create-update-thslq" event={"ID":"29a2ed9f-c519-444f-922f-4cebf5b3893e","Type":"ContainerDied","Data":"680d8a43bd7eb5b8f5fbc7d7320d77bd18befcebfb45bb4770dfbbb07a5c79c3"}
Jan 26 11:37:28 crc kubenswrapper[4867]: I0126 11:37:28.537493 4867 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="680d8a43bd7eb5b8f5fbc7d7320d77bd18befcebfb45bb4770dfbbb07a5c79c3"
Jan 26 11:37:28 crc kubenswrapper[4867]: I0126 11:37:28.537502 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ironic-inspector-6c39-account-create-update-thslq"
Jan 26 11:37:28 crc kubenswrapper[4867]: I0126 11:37:28.542421 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-56df8fb6b7-m89kb"
Jan 26 11:37:28 crc kubenswrapper[4867]: I0126 11:37:28.542412 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-56df8fb6b7-m89kb" event={"ID":"227ae5b6-e7d6-45ce-b333-3dd508d56b35","Type":"ContainerDied","Data":"db5a53e4d6d3a31585951729dbce0a90f5d5d16a246c215ec38f4b3b6442304c"}
Jan 26 11:37:28 crc kubenswrapper[4867]: I0126 11:37:28.542548 4867 scope.go:117] "RemoveContainer" containerID="29e32d6b11200c281e17114562769683762032efb9c74d667e3d5716b6829560"
Jan 26 11:37:28 crc kubenswrapper[4867]: I0126 11:37:28.544900 4867 generic.go:334] "Generic (PLEG): container finished" podID="8434703b-0a5f-49f0-8877-2048d276f8ff" containerID="ce96f9c15a75e5b8d42b5d0560717b6c9a212fd14e65bc84d4d8478bdfaca849" exitCode=0
Jan 26 11:37:28 crc kubenswrapper[4867]: I0126 11:37:28.545125 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-fd45cdb8b-tgbqw" event={"ID":"8434703b-0a5f-49f0-8877-2048d276f8ff","Type":"ContainerDied","Data":"ce96f9c15a75e5b8d42b5d0560717b6c9a212fd14e65bc84d4d8478bdfaca849"}
Jan 26 11:37:28 crc kubenswrapper[4867]: I0126 11:37:28.603350 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-56df8fb6b7-m89kb"]
Jan 26 11:37:28 crc kubenswrapper[4867]: I0126 11:37:28.612301 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-56df8fb6b7-m89kb"]
Jan 26 11:37:29 crc kubenswrapper[4867]: I0126 11:37:29.562016 4867 generic.go:334] "Generic (PLEG): container finished" podID="edc2a642-41e4-4162-aa08-1cecd958b32c" containerID="8201ba714254e8a26cfc2df5bb7a49512417f99d3cc37e33c15153214f82b6e1" exitCode=0
Jan 26 11:37:29 crc kubenswrapper[4867]: I0126 11:37:29.562312 4867 generic.go:334] "Generic (PLEG): container finished" podID="edc2a642-41e4-4162-aa08-1cecd958b32c" containerID="fd746aa8feaf83c5272fe2cb0448ed3ff38f2bef6118c79888bcfa4f05b2f688" exitCode=0
Jan 26 11:37:29 crc kubenswrapper[4867]: I0126 11:37:29.562332 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"edc2a642-41e4-4162-aa08-1cecd958b32c","Type":"ContainerDied","Data":"8201ba714254e8a26cfc2df5bb7a49512417f99d3cc37e33c15153214f82b6e1"}
Jan 26 11:37:29 crc kubenswrapper[4867]: I0126 11:37:29.562357 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"edc2a642-41e4-4162-aa08-1cecd958b32c","Type":"ContainerDied","Data":"fd746aa8feaf83c5272fe2cb0448ed3ff38f2bef6118c79888bcfa4f05b2f688"}
Jan 26 11:37:30 crc kubenswrapper[4867]: I0126 11:37:30.160826 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"]
Jan 26 11:37:30 crc kubenswrapper[4867]: I0126 11:37:30.161093 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="195fa02f-5887-4d8e-a103-2261e65a9c96" containerName="glance-log" containerID="cri-o://a3dfe0d4d3358e53f49d7ad7fd7ccc7dfe0122a9ec36359a54e66b3f1225b275" gracePeriod=30
Jan 26 11:37:30 crc kubenswrapper[4867]: I0126 11:37:30.161615 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="195fa02f-5887-4d8e-a103-2261e65a9c96" containerName="glance-httpd" containerID="cri-o://1ae14c522c74fcc84e09c808640dccc7ff8db79b5fdfc21ea954926cf317bc83" gracePeriod=30
Jan 26 11:37:30 crc kubenswrapper[4867]: I0126 11:37:30.577422 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="227ae5b6-e7d6-45ce-b333-3dd508d56b35" path="/var/lib/kubelet/pods/227ae5b6-e7d6-45ce-b333-3dd508d56b35/volumes"
Jan 26 11:37:30 crc kubenswrapper[4867]: I0126 11:37:30.590631 4867 generic.go:334] "Generic (PLEG): container finished" podID="b2643e95-59cb-42a2-982e-96a7d732e5e4" containerID="f0c7e3e53676b600a0f8387db3e43d43aaf869729a99557deb943b08f2fdfd33" exitCode=0
Jan 26 11:37:30 crc kubenswrapper[4867]: I0126 11:37:30.590722 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b2643e95-59cb-42a2-982e-96a7d732e5e4","Type":"ContainerDied","Data":"f0c7e3e53676b600a0f8387db3e43d43aaf869729a99557deb943b08f2fdfd33"}
Jan 26 11:37:30 crc kubenswrapper[4867]: I0126 11:37:30.597676 4867 generic.go:334] "Generic (PLEG): container finished" podID="195fa02f-5887-4d8e-a103-2261e65a9c96" containerID="a3dfe0d4d3358e53f49d7ad7fd7ccc7dfe0122a9ec36359a54e66b3f1225b275" exitCode=143
Jan 26 11:37:30 crc kubenswrapper[4867]: I0126 11:37:30.597715 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"195fa02f-5887-4d8e-a103-2261e65a9c96","Type":"ContainerDied","Data":"a3dfe0d4d3358e53f49d7ad7fd7ccc7dfe0122a9ec36359a54e66b3f1225b275"}
Jan 26 11:37:31 crc kubenswrapper[4867]: I0126 11:37:31.838340 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"]
Jan 26 11:37:31 crc kubenswrapper[4867]: I0126 11:37:31.838837 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="dfc017b6-886f-48d3-8f1e-cef59e587503" containerName="glance-log" containerID="cri-o://9d781ab7589d1c0f4b7cf132d24fa1ff5ff23389d94774744cbe760ea7b50e1a" gracePeriod=30
Jan 26 11:37:31 crc kubenswrapper[4867]: I0126 11:37:31.839187 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="dfc017b6-886f-48d3-8f1e-cef59e587503" containerName="glance-httpd" containerID="cri-o://55dc239744276d01319f51e765715f8fc0f8e79ceb6320f57be6b9eed29028cd" gracePeriod=30
Jan 26 11:37:32 crc kubenswrapper[4867]: I0126 11:37:32.449056 4867 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-56df8fb6b7-m89kb" podUID="227ae5b6-e7d6-45ce-b333-3dd508d56b35" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.150:5353: i/o timeout"
Jan 26 11:37:32 crc kubenswrapper[4867]: I0126 11:37:32.624943 4867 generic.go:334] "Generic (PLEG): container finished" podID="dfc017b6-886f-48d3-8f1e-cef59e587503" containerID="9d781ab7589d1c0f4b7cf132d24fa1ff5ff23389d94774744cbe760ea7b50e1a" exitCode=143
Jan 26 11:37:32 crc kubenswrapper[4867]: I0126 11:37:32.625177 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"dfc017b6-886f-48d3-8f1e-cef59e587503","Type":"ContainerDied","Data":"9d781ab7589d1c0f4b7cf132d24fa1ff5ff23389d94774744cbe760ea7b50e1a"}
Jan 26 11:37:32 crc kubenswrapper[4867]: I0126 11:37:32.722649 4867 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="b2643e95-59cb-42a2-982e-96a7d732e5e4" containerName="proxy-httpd" probeResult="failure" output="Get \"http://10.217.0.155:3000/\": dial tcp 10.217.0.155:3000: connect: connection refused"
Jan 26 11:37:33 crc kubenswrapper[4867]: I0126 11:37:33.639709 4867 generic.go:334] "Generic (PLEG): container finished" podID="195fa02f-5887-4d8e-a103-2261e65a9c96" containerID="1ae14c522c74fcc84e09c808640dccc7ff8db79b5fdfc21ea954926cf317bc83" exitCode=0
Jan 26 11:37:33 crc kubenswrapper[4867]: I0126 11:37:33.639778 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"195fa02f-5887-4d8e-a103-2261e65a9c96","Type":"ContainerDied","Data":"1ae14c522c74fcc84e09c808640dccc7ff8db79b5fdfc21ea954926cf317bc83"}
Jan 26 11:37:34 crc kubenswrapper[4867]: I0126 11:37:34.248139 4867 scope.go:117] "RemoveContainer" containerID="d1a32962ff800086c7304e4d7adc1f221caccda3a92d99dbaf7aaa13a5eed3bc"
Jan 26 11:37:34 crc kubenswrapper[4867]: I0126 11:37:34.272861 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-fd45cdb8b-tgbqw"
Jan 26 11:37:34 crc kubenswrapper[4867]: I0126 11:37:34.415579 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8434703b-0a5f-49f0-8877-2048d276f8ff-combined-ca-bundle\") pod \"8434703b-0a5f-49f0-8877-2048d276f8ff\" (UID: \"8434703b-0a5f-49f0-8877-2048d276f8ff\") "
Jan 26 11:37:34 crc kubenswrapper[4867]: I0126 11:37:34.416434 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8434703b-0a5f-49f0-8877-2048d276f8ff-config-data-custom\") pod \"8434703b-0a5f-49f0-8877-2048d276f8ff\" (UID: \"8434703b-0a5f-49f0-8877-2048d276f8ff\") "
Jan 26 11:37:34 crc kubenswrapper[4867]: I0126 11:37:34.416518 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8434703b-0a5f-49f0-8877-2048d276f8ff-config-data\") pod \"8434703b-0a5f-49f0-8877-2048d276f8ff\" (UID: \"8434703b-0a5f-49f0-8877-2048d276f8ff\") "
Jan 26 11:37:34 crc kubenswrapper[4867]: I0126 11:37:34.416617 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8434703b-0a5f-49f0-8877-2048d276f8ff-logs\") pod \"8434703b-0a5f-49f0-8877-2048d276f8ff\" (UID: \"8434703b-0a5f-49f0-8877-2048d276f8ff\") "
Jan 26 11:37:34 crc kubenswrapper[4867]: I0126 11:37:34.416707 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mgp8m\" (UniqueName: \"kubernetes.io/projected/8434703b-0a5f-49f0-8877-2048d276f8ff-kube-api-access-mgp8m\") pod \"8434703b-0a5f-49f0-8877-2048d276f8ff\" (UID: \"8434703b-0a5f-49f0-8877-2048d276f8ff\") "
Jan 26 11:37:34 crc kubenswrapper[4867]: I0126 11:37:34.417746 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8434703b-0a5f-49f0-8877-2048d276f8ff-logs" (OuterVolumeSpecName: "logs") pod "8434703b-0a5f-49f0-8877-2048d276f8ff" (UID: "8434703b-0a5f-49f0-8877-2048d276f8ff"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 26 11:37:34 crc kubenswrapper[4867]: I0126 11:37:34.422023 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8434703b-0a5f-49f0-8877-2048d276f8ff-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "8434703b-0a5f-49f0-8877-2048d276f8ff" (UID: "8434703b-0a5f-49f0-8877-2048d276f8ff"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 11:37:34 crc kubenswrapper[4867]: I0126 11:37:34.422908 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8434703b-0a5f-49f0-8877-2048d276f8ff-kube-api-access-mgp8m" (OuterVolumeSpecName: "kube-api-access-mgp8m") pod "8434703b-0a5f-49f0-8877-2048d276f8ff" (UID: "8434703b-0a5f-49f0-8877-2048d276f8ff"). InnerVolumeSpecName "kube-api-access-mgp8m". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 26 11:37:34 crc kubenswrapper[4867]: I0126 11:37:34.469159 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8434703b-0a5f-49f0-8877-2048d276f8ff-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8434703b-0a5f-49f0-8877-2048d276f8ff" (UID: "8434703b-0a5f-49f0-8877-2048d276f8ff"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 11:37:34 crc kubenswrapper[4867]: I0126 11:37:34.508801 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8434703b-0a5f-49f0-8877-2048d276f8ff-config-data" (OuterVolumeSpecName: "config-data") pod "8434703b-0a5f-49f0-8877-2048d276f8ff" (UID: "8434703b-0a5f-49f0-8877-2048d276f8ff"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 11:37:34 crc kubenswrapper[4867]: I0126 11:37:34.519670 4867 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8434703b-0a5f-49f0-8877-2048d276f8ff-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 26 11:37:34 crc kubenswrapper[4867]: I0126 11:37:34.519709 4867 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8434703b-0a5f-49f0-8877-2048d276f8ff-config-data-custom\") on node \"crc\" DevicePath \"\""
Jan 26 11:37:34 crc kubenswrapper[4867]: I0126 11:37:34.519745 4867 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8434703b-0a5f-49f0-8877-2048d276f8ff-config-data\") on node \"crc\" DevicePath \"\""
Jan 26 11:37:34 crc kubenswrapper[4867]: I0126 11:37:34.519759 4867 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8434703b-0a5f-49f0-8877-2048d276f8ff-logs\") on node \"crc\" DevicePath \"\""
Jan 26 11:37:34 crc kubenswrapper[4867]: I0126 11:37:34.519770 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mgp8m\" (UniqueName: \"kubernetes.io/projected/8434703b-0a5f-49f0-8877-2048d276f8ff-kube-api-access-mgp8m\") on node \"crc\" DevicePath \"\""
Jan 26 11:37:34 crc kubenswrapper[4867]: I0126 11:37:34.678597 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-neutron-agent-795fb7c76b-9ndwh" event={"ID":"a2167905-2856-4125-81fd-a2430fe558f9","Type":"ContainerStarted","Data":"1a4d115ab295e9eb8edfb2102fe14586cbb812f3d1c01ec1525e6027a548e3ec"}
Jan 26 11:37:34 crc kubenswrapper[4867]: I0126 11:37:34.679829 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ironic-neutron-agent-795fb7c76b-9ndwh"
Jan 26 11:37:34 crc kubenswrapper[4867]: I0126 11:37:34.697023 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ironic-neutron-agent-795fb7c76b-9ndwh" podStartSLOduration=2.6793386310000002 podStartE2EDuration="17.697009394s" podCreationTimestamp="2026-01-26 11:37:17 +0000 UTC" firstStartedPulling="2026-01-26 11:37:19.101429028 +0000 UTC m=+1188.800003938" lastFinishedPulling="2026-01-26 11:37:34.119099791 +0000 UTC m=+1203.817674701" observedRunningTime="2026-01-26 11:37:34.695581507 +0000 UTC m=+1204.394156417" watchObservedRunningTime="2026-01-26 11:37:34.697009394 +0000 UTC m=+1204.395584304"
Jan 26 11:37:34 crc kubenswrapper[4867]: I0126 11:37:34.719613 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-fd45cdb8b-tgbqw" event={"ID":"8434703b-0a5f-49f0-8877-2048d276f8ff","Type":"ContainerDied","Data":"8215f1850fd8ddecdea01cb82cafbf02d74125cdac8c207406e77d22e4e62156"}
Jan 26 11:37:34 crc kubenswrapper[4867]: I0126 11:37:34.719669 4867 scope.go:117] "RemoveContainer" containerID="ce96f9c15a75e5b8d42b5d0560717b6c9a212fd14e65bc84d4d8478bdfaca849"
Jan 26 11:37:34 crc kubenswrapper[4867]: I0126 11:37:34.719792 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-fd45cdb8b-tgbqw"
Jan 26 11:37:34 crc kubenswrapper[4867]: I0126 11:37:34.741074 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Jan 26 11:37:34 crc kubenswrapper[4867]: I0126 11:37:34.760587 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-fd45cdb8b-tgbqw"]
Jan 26 11:37:34 crc kubenswrapper[4867]: I0126 11:37:34.769650 4867 scope.go:117] "RemoveContainer" containerID="27d71e31cf8c65c6dbc4a31b200b8d165237f01a1f12d8b6878de8c1143c58a3"
Jan 26 11:37:34 crc kubenswrapper[4867]: I0126 11:37:34.781657 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-api-fd45cdb8b-tgbqw"]
Jan 26 11:37:34 crc kubenswrapper[4867]: I0126 11:37:34.824769 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b2643e95-59cb-42a2-982e-96a7d732e5e4-scripts\") pod \"b2643e95-59cb-42a2-982e-96a7d732e5e4\" (UID: \"b2643e95-59cb-42a2-982e-96a7d732e5e4\") "
Jan 26 11:37:34 crc kubenswrapper[4867]: I0126 11:37:34.824821 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b2643e95-59cb-42a2-982e-96a7d732e5e4-combined-ca-bundle\") pod \"b2643e95-59cb-42a2-982e-96a7d732e5e4\" (UID: \"b2643e95-59cb-42a2-982e-96a7d732e5e4\") "
Jan 26 11:37:34 crc kubenswrapper[4867]: I0126 11:37:34.824844 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/b2643e95-59cb-42a2-982e-96a7d732e5e4-sg-core-conf-yaml\") pod \"b2643e95-59cb-42a2-982e-96a7d732e5e4\" (UID: \"b2643e95-59cb-42a2-982e-96a7d732e5e4\") "
Jan 26 11:37:34 crc kubenswrapper[4867]: I0126 11:37:34.824863 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b2643e95-59cb-42a2-982e-96a7d732e5e4-log-httpd\") pod \"b2643e95-59cb-42a2-982e-96a7d732e5e4\" (UID: \"b2643e95-59cb-42a2-982e-96a7d732e5e4\") "
Jan 26 11:37:34 crc kubenswrapper[4867]: I0126 11:37:34.824896 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b2643e95-59cb-42a2-982e-96a7d732e5e4-run-httpd\") pod \"b2643e95-59cb-42a2-982e-96a7d732e5e4\" (UID: \"b2643e95-59cb-42a2-982e-96a7d732e5e4\") "
Jan 26 11:37:34 crc kubenswrapper[4867]: I0126 11:37:34.824957 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b2643e95-59cb-42a2-982e-96a7d732e5e4-config-data\") pod \"b2643e95-59cb-42a2-982e-96a7d732e5e4\" (UID: \"b2643e95-59cb-42a2-982e-96a7d732e5e4\") "
Jan 26 11:37:34 crc kubenswrapper[4867]: I0126 11:37:34.824996 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kknc6\" (UniqueName: \"kubernetes.io/projected/b2643e95-59cb-42a2-982e-96a7d732e5e4-kube-api-access-kknc6\") pod \"b2643e95-59cb-42a2-982e-96a7d732e5e4\" (UID: \"b2643e95-59cb-42a2-982e-96a7d732e5e4\") "
Jan 26 11:37:34 crc kubenswrapper[4867]: I0126 11:37:34.831749 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b2643e95-59cb-42a2-982e-96a7d732e5e4-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "b2643e95-59cb-42a2-982e-96a7d732e5e4" (UID: "b2643e95-59cb-42a2-982e-96a7d732e5e4"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 26 11:37:34 crc kubenswrapper[4867]: I0126 11:37:34.832008 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b2643e95-59cb-42a2-982e-96a7d732e5e4-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "b2643e95-59cb-42a2-982e-96a7d732e5e4" (UID: "b2643e95-59cb-42a2-982e-96a7d732e5e4"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 26 11:37:34 crc kubenswrapper[4867]: I0126 11:37:34.832557 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b2643e95-59cb-42a2-982e-96a7d732e5e4-scripts" (OuterVolumeSpecName: "scripts") pod "b2643e95-59cb-42a2-982e-96a7d732e5e4" (UID: "b2643e95-59cb-42a2-982e-96a7d732e5e4"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 11:37:34 crc kubenswrapper[4867]: I0126 11:37:34.836976 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b2643e95-59cb-42a2-982e-96a7d732e5e4-kube-api-access-kknc6" (OuterVolumeSpecName: "kube-api-access-kknc6") pod "b2643e95-59cb-42a2-982e-96a7d732e5e4" (UID: "b2643e95-59cb-42a2-982e-96a7d732e5e4"). InnerVolumeSpecName "kube-api-access-kknc6". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 26 11:37:34 crc kubenswrapper[4867]: I0126 11:37:34.839489 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0"
Jan 26 11:37:34 crc kubenswrapper[4867]: I0126 11:37:34.870365 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b2643e95-59cb-42a2-982e-96a7d732e5e4-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "b2643e95-59cb-42a2-982e-96a7d732e5e4" (UID: "b2643e95-59cb-42a2-982e-96a7d732e5e4"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 11:37:34 crc kubenswrapper[4867]: I0126 11:37:34.877144 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0"
Jan 26 11:37:34 crc kubenswrapper[4867]: I0126 11:37:34.926864 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/195fa02f-5887-4d8e-a103-2261e65a9c96-httpd-run\") pod \"195fa02f-5887-4d8e-a103-2261e65a9c96\" (UID: \"195fa02f-5887-4d8e-a103-2261e65a9c96\") "
Jan 26 11:37:34 crc kubenswrapper[4867]: I0126 11:37:34.927296 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/195fa02f-5887-4d8e-a103-2261e65a9c96-logs\") pod \"195fa02f-5887-4d8e-a103-2261e65a9c96\" (UID: \"195fa02f-5887-4d8e-a103-2261e65a9c96\") "
Jan 26 11:37:34 crc kubenswrapper[4867]: I0126 11:37:34.927335 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"195fa02f-5887-4d8e-a103-2261e65a9c96\" (UID: \"195fa02f-5887-4d8e-a103-2261e65a9c96\") "
Jan 26 11:37:34 crc kubenswrapper[4867]: I0126 11:37:34.927472 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/195fa02f-5887-4d8e-a103-2261e65a9c96-config-data\") pod \"195fa02f-5887-4d8e-a103-2261e65a9c96\" (UID: \"195fa02f-5887-4d8e-a103-2261e65a9c96\") "
Jan 26 11:37:34 crc kubenswrapper[4867]: I0126 11:37:34.927479 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/195fa02f-5887-4d8e-a103-2261e65a9c96-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "195fa02f-5887-4d8e-a103-2261e65a9c96" (UID: "195fa02f-5887-4d8e-a103-2261e65a9c96"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 26 11:37:34 crc kubenswrapper[4867]: I0126 11:37:34.927507 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/195fa02f-5887-4d8e-a103-2261e65a9c96-scripts\") pod \"195fa02f-5887-4d8e-a103-2261e65a9c96\" (UID: \"195fa02f-5887-4d8e-a103-2261e65a9c96\") "
Jan 26 11:37:34 crc kubenswrapper[4867]: I0126 11:37:34.927562 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/195fa02f-5887-4d8e-a103-2261e65a9c96-combined-ca-bundle\") pod \"195fa02f-5887-4d8e-a103-2261e65a9c96\" (UID: \"195fa02f-5887-4d8e-a103-2261e65a9c96\") "
Jan 26 11:37:34 crc kubenswrapper[4867]: I0126 11:37:34.927585 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/195fa02f-5887-4d8e-a103-2261e65a9c96-public-tls-certs\") pod \"195fa02f-5887-4d8e-a103-2261e65a9c96\" (UID: \"195fa02f-5887-4d8e-a103-2261e65a9c96\") "
Jan 26 11:37:34 crc kubenswrapper[4867]: I0126 11:37:34.927628 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s9k7h\" (UniqueName: \"kubernetes.io/projected/195fa02f-5887-4d8e-a103-2261e65a9c96-kube-api-access-s9k7h\") pod \"195fa02f-5887-4d8e-a103-2261e65a9c96\" (UID: \"195fa02f-5887-4d8e-a103-2261e65a9c96\") "
Jan 26 11:37:34 crc kubenswrapper[4867]: I0126 11:37:34.928160 4867 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b2643e95-59cb-42a2-982e-96a7d732e5e4-scripts\") on node \"crc\" DevicePath \"\""
Jan 26 11:37:34 crc kubenswrapper[4867]: I0126 11:37:34.928178 4867 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b2643e95-59cb-42a2-982e-96a7d732e5e4-log-httpd\") on node \"crc\" DevicePath \"\""
Jan 26 11:37:34 crc kubenswrapper[4867]: I0126 11:37:34.928187 4867 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/b2643e95-59cb-42a2-982e-96a7d732e5e4-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\""
Jan 26 11:37:34 crc kubenswrapper[4867]: I0126 11:37:34.928197 4867 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/195fa02f-5887-4d8e-a103-2261e65a9c96-httpd-run\") on node \"crc\" DevicePath \"\""
Jan 26 11:37:34 crc kubenswrapper[4867]: I0126 11:37:34.928204 4867 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b2643e95-59cb-42a2-982e-96a7d732e5e4-run-httpd\") on node \"crc\" DevicePath \"\""
Jan 26 11:37:34 crc kubenswrapper[4867]: I0126 11:37:34.928214 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kknc6\" (UniqueName: \"kubernetes.io/projected/b2643e95-59cb-42a2-982e-96a7d732e5e4-kube-api-access-kknc6\") on node \"crc\" DevicePath \"\""
Jan 26 11:37:34 crc kubenswrapper[4867]: I0126 11:37:34.929544 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/195fa02f-5887-4d8e-a103-2261e65a9c96-logs" (OuterVolumeSpecName: "logs") pod "195fa02f-5887-4d8e-a103-2261e65a9c96" (UID: "195fa02f-5887-4d8e-a103-2261e65a9c96"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 26 11:37:34 crc kubenswrapper[4867]: I0126 11:37:34.943724 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage06-crc" (OuterVolumeSpecName: "glance") pod "195fa02f-5887-4d8e-a103-2261e65a9c96" (UID: "195fa02f-5887-4d8e-a103-2261e65a9c96"). InnerVolumeSpecName "local-storage06-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue ""
Jan 26 11:37:34 crc kubenswrapper[4867]: I0126 11:37:34.943953 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/195fa02f-5887-4d8e-a103-2261e65a9c96-kube-api-access-s9k7h" (OuterVolumeSpecName: "kube-api-access-s9k7h") pod "195fa02f-5887-4d8e-a103-2261e65a9c96" (UID: "195fa02f-5887-4d8e-a103-2261e65a9c96"). InnerVolumeSpecName "kube-api-access-s9k7h". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 26 11:37:34 crc kubenswrapper[4867]: I0126 11:37:34.944374 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/195fa02f-5887-4d8e-a103-2261e65a9c96-scripts" (OuterVolumeSpecName: "scripts") pod "195fa02f-5887-4d8e-a103-2261e65a9c96" (UID: "195fa02f-5887-4d8e-a103-2261e65a9c96"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 11:37:34 crc kubenswrapper[4867]: I0126 11:37:34.946402 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b2643e95-59cb-42a2-982e-96a7d732e5e4-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b2643e95-59cb-42a2-982e-96a7d732e5e4" (UID: "b2643e95-59cb-42a2-982e-96a7d732e5e4"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 11:37:34 crc kubenswrapper[4867]: I0126 11:37:34.969839 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/195fa02f-5887-4d8e-a103-2261e65a9c96-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "195fa02f-5887-4d8e-a103-2261e65a9c96" (UID: "195fa02f-5887-4d8e-a103-2261e65a9c96"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 11:37:34 crc kubenswrapper[4867]: I0126 11:37:34.994680 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/195fa02f-5887-4d8e-a103-2261e65a9c96-config-data" (OuterVolumeSpecName: "config-data") pod "195fa02f-5887-4d8e-a103-2261e65a9c96" (UID: "195fa02f-5887-4d8e-a103-2261e65a9c96"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 11:37:34 crc kubenswrapper[4867]: I0126 11:37:34.996465 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b2643e95-59cb-42a2-982e-96a7d732e5e4-config-data" (OuterVolumeSpecName: "config-data") pod "b2643e95-59cb-42a2-982e-96a7d732e5e4" (UID: "b2643e95-59cb-42a2-982e-96a7d732e5e4"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 11:37:34 crc kubenswrapper[4867]: I0126 11:37:34.999376 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/195fa02f-5887-4d8e-a103-2261e65a9c96-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "195fa02f-5887-4d8e-a103-2261e65a9c96" (UID: "195fa02f-5887-4d8e-a103-2261e65a9c96"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 11:37:35 crc kubenswrapper[4867]: I0126 11:37:35.029755 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/edc2a642-41e4-4162-aa08-1cecd958b32c-config-data-custom\") pod \"edc2a642-41e4-4162-aa08-1cecd958b32c\" (UID: \"edc2a642-41e4-4162-aa08-1cecd958b32c\") "
Jan 26 11:37:35 crc kubenswrapper[4867]: I0126 11:37:35.029878 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/edc2a642-41e4-4162-aa08-1cecd958b32c-scripts\") pod \"edc2a642-41e4-4162-aa08-1cecd958b32c\" (UID: \"edc2a642-41e4-4162-aa08-1cecd958b32c\") "
Jan 26 11:37:35 crc kubenswrapper[4867]: I0126 11:37:35.029926 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/edc2a642-41e4-4162-aa08-1cecd958b32c-config-data\") pod \"edc2a642-41e4-4162-aa08-1cecd958b32c\" (UID: \"edc2a642-41e4-4162-aa08-1cecd958b32c\") "
Jan 26 11:37:35 crc kubenswrapper[4867]: I0126 11:37:35.029995 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/edc2a642-41e4-4162-aa08-1cecd958b32c-etc-machine-id\") pod \"edc2a642-41e4-4162-aa08-1cecd958b32c\" (UID: \"edc2a642-41e4-4162-aa08-1cecd958b32c\") "
Jan 26 11:37:35 crc kubenswrapper[4867]: I0126 11:37:35.030011 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z5kh4\" (UniqueName: \"kubernetes.io/projected/edc2a642-41e4-4162-aa08-1cecd958b32c-kube-api-access-z5kh4\") pod \"edc2a642-41e4-4162-aa08-1cecd958b32c\" (UID: \"edc2a642-41e4-4162-aa08-1cecd958b32c\") "
Jan 26 11:37:35 crc kubenswrapper[4867]: I0126 11:37:35.030029 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName:
\"kubernetes.io/secret/edc2a642-41e4-4162-aa08-1cecd958b32c-combined-ca-bundle\") pod \"edc2a642-41e4-4162-aa08-1cecd958b32c\" (UID: \"edc2a642-41e4-4162-aa08-1cecd958b32c\") " Jan 26 11:37:35 crc kubenswrapper[4867]: I0126 11:37:35.030459 4867 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/195fa02f-5887-4d8e-a103-2261e65a9c96-logs\") on node \"crc\" DevicePath \"\"" Jan 26 11:37:35 crc kubenswrapper[4867]: I0126 11:37:35.030480 4867 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") on node \"crc\" " Jan 26 11:37:35 crc kubenswrapper[4867]: I0126 11:37:35.030490 4867 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b2643e95-59cb-42a2-982e-96a7d732e5e4-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 11:37:35 crc kubenswrapper[4867]: I0126 11:37:35.030499 4867 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/195fa02f-5887-4d8e-a103-2261e65a9c96-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 11:37:35 crc kubenswrapper[4867]: I0126 11:37:35.030507 4867 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/195fa02f-5887-4d8e-a103-2261e65a9c96-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 11:37:35 crc kubenswrapper[4867]: I0126 11:37:35.030549 4867 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/195fa02f-5887-4d8e-a103-2261e65a9c96-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 11:37:35 crc kubenswrapper[4867]: I0126 11:37:35.030562 4867 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/195fa02f-5887-4d8e-a103-2261e65a9c96-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 26 
11:37:35 crc kubenswrapper[4867]: I0126 11:37:35.030573 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s9k7h\" (UniqueName: \"kubernetes.io/projected/195fa02f-5887-4d8e-a103-2261e65a9c96-kube-api-access-s9k7h\") on node \"crc\" DevicePath \"\"" Jan 26 11:37:35 crc kubenswrapper[4867]: I0126 11:37:35.030585 4867 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b2643e95-59cb-42a2-982e-96a7d732e5e4-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 11:37:35 crc kubenswrapper[4867]: I0126 11:37:35.037771 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/edc2a642-41e4-4162-aa08-1cecd958b32c-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "edc2a642-41e4-4162-aa08-1cecd958b32c" (UID: "edc2a642-41e4-4162-aa08-1cecd958b32c"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 11:37:35 crc kubenswrapper[4867]: I0126 11:37:35.053596 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/edc2a642-41e4-4162-aa08-1cecd958b32c-scripts" (OuterVolumeSpecName: "scripts") pod "edc2a642-41e4-4162-aa08-1cecd958b32c" (UID: "edc2a642-41e4-4162-aa08-1cecd958b32c"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 11:37:35 crc kubenswrapper[4867]: I0126 11:37:35.053642 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/edc2a642-41e4-4162-aa08-1cecd958b32c-kube-api-access-z5kh4" (OuterVolumeSpecName: "kube-api-access-z5kh4") pod "edc2a642-41e4-4162-aa08-1cecd958b32c" (UID: "edc2a642-41e4-4162-aa08-1cecd958b32c"). InnerVolumeSpecName "kube-api-access-z5kh4". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 11:37:35 crc kubenswrapper[4867]: I0126 11:37:35.069083 4867 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage06-crc" (UniqueName: "kubernetes.io/local-volume/local-storage06-crc") on node "crc" Jan 26 11:37:35 crc kubenswrapper[4867]: I0126 11:37:35.079292 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/edc2a642-41e4-4162-aa08-1cecd958b32c-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "edc2a642-41e4-4162-aa08-1cecd958b32c" (UID: "edc2a642-41e4-4162-aa08-1cecd958b32c"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 11:37:35 crc kubenswrapper[4867]: I0126 11:37:35.121342 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/edc2a642-41e4-4162-aa08-1cecd958b32c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "edc2a642-41e4-4162-aa08-1cecd958b32c" (UID: "edc2a642-41e4-4162-aa08-1cecd958b32c"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 11:37:35 crc kubenswrapper[4867]: I0126 11:37:35.133620 4867 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/edc2a642-41e4-4162-aa08-1cecd958b32c-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 26 11:37:35 crc kubenswrapper[4867]: I0126 11:37:35.133884 4867 reconciler_common.go:293] "Volume detached for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") on node \"crc\" DevicePath \"\"" Jan 26 11:37:35 crc kubenswrapper[4867]: I0126 11:37:35.134537 4867 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/edc2a642-41e4-4162-aa08-1cecd958b32c-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 11:37:35 crc kubenswrapper[4867]: I0126 11:37:35.134639 4867 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/edc2a642-41e4-4162-aa08-1cecd958b32c-etc-machine-id\") on node \"crc\" DevicePath \"\"" Jan 26 11:37:35 crc kubenswrapper[4867]: I0126 11:37:35.134714 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z5kh4\" (UniqueName: \"kubernetes.io/projected/edc2a642-41e4-4162-aa08-1cecd958b32c-kube-api-access-z5kh4\") on node \"crc\" DevicePath \"\"" Jan 26 11:37:35 crc kubenswrapper[4867]: I0126 11:37:35.134783 4867 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/edc2a642-41e4-4162-aa08-1cecd958b32c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 11:37:35 crc kubenswrapper[4867]: I0126 11:37:35.174405 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/edc2a642-41e4-4162-aa08-1cecd958b32c-config-data" (OuterVolumeSpecName: "config-data") pod "edc2a642-41e4-4162-aa08-1cecd958b32c" (UID: "edc2a642-41e4-4162-aa08-1cecd958b32c"). 
InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 11:37:35 crc kubenswrapper[4867]: I0126 11:37:35.234704 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-5668f68b6c-7674j"] Jan 26 11:37:35 crc kubenswrapper[4867]: I0126 11:37:35.236140 4867 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/edc2a642-41e4-4162-aa08-1cecd958b32c-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 11:37:35 crc kubenswrapper[4867]: I0126 11:37:35.428388 4867 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-fd45cdb8b-tgbqw" podUID="8434703b-0a5f-49f0-8877-2048d276f8ff" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.159:9311/healthcheck\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 26 11:37:35 crc kubenswrapper[4867]: I0126 11:37:35.428501 4867 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-fd45cdb8b-tgbqw" podUID="8434703b-0a5f-49f0-8877-2048d276f8ff" containerName="barbican-api" probeResult="failure" output="Get \"http://10.217.0.159:9311/healthcheck\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 26 11:37:35 crc kubenswrapper[4867]: I0126 11:37:35.756480 4867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 26 11:37:35 crc kubenswrapper[4867]: I0126 11:37:35.766609 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-db-create-zvxt9"] Jan 26 11:37:35 crc kubenswrapper[4867]: E0126 11:37:35.769626 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="195fa02f-5887-4d8e-a103-2261e65a9c96" containerName="glance-log" Jan 26 11:37:35 crc kubenswrapper[4867]: I0126 11:37:35.769663 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="195fa02f-5887-4d8e-a103-2261e65a9c96" containerName="glance-log" Jan 26 11:37:35 crc kubenswrapper[4867]: E0126 11:37:35.769691 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="29a2ed9f-c519-444f-922f-4cebf5b3893e" containerName="mariadb-account-create-update" Jan 26 11:37:35 crc kubenswrapper[4867]: I0126 11:37:35.769699 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="29a2ed9f-c519-444f-922f-4cebf5b3893e" containerName="mariadb-account-create-update" Jan 26 11:37:35 crc kubenswrapper[4867]: E0126 11:37:35.769712 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="195fa02f-5887-4d8e-a103-2261e65a9c96" containerName="glance-httpd" Jan 26 11:37:35 crc kubenswrapper[4867]: I0126 11:37:35.769719 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="195fa02f-5887-4d8e-a103-2261e65a9c96" containerName="glance-httpd" Jan 26 11:37:35 crc kubenswrapper[4867]: E0126 11:37:35.769738 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b2643e95-59cb-42a2-982e-96a7d732e5e4" containerName="sg-core" Jan 26 11:37:35 crc kubenswrapper[4867]: I0126 11:37:35.769745 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="b2643e95-59cb-42a2-982e-96a7d732e5e4" containerName="sg-core" Jan 26 11:37:35 crc kubenswrapper[4867]: E0126 11:37:35.769759 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="edc2a642-41e4-4162-aa08-1cecd958b32c" 
containerName="probe" Jan 26 11:37:35 crc kubenswrapper[4867]: I0126 11:37:35.769765 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="edc2a642-41e4-4162-aa08-1cecd958b32c" containerName="probe" Jan 26 11:37:35 crc kubenswrapper[4867]: E0126 11:37:35.769783 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b2643e95-59cb-42a2-982e-96a7d732e5e4" containerName="ceilometer-central-agent" Jan 26 11:37:35 crc kubenswrapper[4867]: I0126 11:37:35.769788 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="b2643e95-59cb-42a2-982e-96a7d732e5e4" containerName="ceilometer-central-agent" Jan 26 11:37:35 crc kubenswrapper[4867]: E0126 11:37:35.769800 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b2643e95-59cb-42a2-982e-96a7d732e5e4" containerName="proxy-httpd" Jan 26 11:37:35 crc kubenswrapper[4867]: I0126 11:37:35.769809 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="b2643e95-59cb-42a2-982e-96a7d732e5e4" containerName="proxy-httpd" Jan 26 11:37:35 crc kubenswrapper[4867]: E0126 11:37:35.769821 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dfc017b6-886f-48d3-8f1e-cef59e587503" containerName="glance-httpd" Jan 26 11:37:35 crc kubenswrapper[4867]: I0126 11:37:35.769827 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="dfc017b6-886f-48d3-8f1e-cef59e587503" containerName="glance-httpd" Jan 26 11:37:35 crc kubenswrapper[4867]: E0126 11:37:35.769849 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="227ae5b6-e7d6-45ce-b333-3dd508d56b35" containerName="dnsmasq-dns" Jan 26 11:37:35 crc kubenswrapper[4867]: I0126 11:37:35.769856 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="227ae5b6-e7d6-45ce-b333-3dd508d56b35" containerName="dnsmasq-dns" Jan 26 11:37:35 crc kubenswrapper[4867]: E0126 11:37:35.769870 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="227ae5b6-e7d6-45ce-b333-3dd508d56b35" containerName="init" Jan 26 11:37:35 
crc kubenswrapper[4867]: I0126 11:37:35.769878 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="227ae5b6-e7d6-45ce-b333-3dd508d56b35" containerName="init" Jan 26 11:37:35 crc kubenswrapper[4867]: E0126 11:37:35.769897 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="edc2a642-41e4-4162-aa08-1cecd958b32c" containerName="cinder-scheduler" Jan 26 11:37:35 crc kubenswrapper[4867]: I0126 11:37:35.769907 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="edc2a642-41e4-4162-aa08-1cecd958b32c" containerName="cinder-scheduler" Jan 26 11:37:35 crc kubenswrapper[4867]: E0126 11:37:35.769923 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dfc017b6-886f-48d3-8f1e-cef59e587503" containerName="glance-log" Jan 26 11:37:35 crc kubenswrapper[4867]: I0126 11:37:35.769929 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="dfc017b6-886f-48d3-8f1e-cef59e587503" containerName="glance-log" Jan 26 11:37:35 crc kubenswrapper[4867]: E0126 11:37:35.769944 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8434703b-0a5f-49f0-8877-2048d276f8ff" containerName="barbican-api" Jan 26 11:37:35 crc kubenswrapper[4867]: I0126 11:37:35.769951 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="8434703b-0a5f-49f0-8877-2048d276f8ff" containerName="barbican-api" Jan 26 11:37:35 crc kubenswrapper[4867]: E0126 11:37:35.769972 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b2643e95-59cb-42a2-982e-96a7d732e5e4" containerName="ceilometer-notification-agent" Jan 26 11:37:35 crc kubenswrapper[4867]: I0126 11:37:35.769978 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="b2643e95-59cb-42a2-982e-96a7d732e5e4" containerName="ceilometer-notification-agent" Jan 26 11:37:35 crc kubenswrapper[4867]: E0126 11:37:35.769991 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8434703b-0a5f-49f0-8877-2048d276f8ff" containerName="barbican-api-log" Jan 26 11:37:35 crc 
kubenswrapper[4867]: I0126 11:37:35.769996 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="8434703b-0a5f-49f0-8877-2048d276f8ff" containerName="barbican-api-log" Jan 26 11:37:35 crc kubenswrapper[4867]: I0126 11:37:35.789843 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="b2643e95-59cb-42a2-982e-96a7d732e5e4" containerName="ceilometer-notification-agent" Jan 26 11:37:35 crc kubenswrapper[4867]: I0126 11:37:35.789916 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="8434703b-0a5f-49f0-8877-2048d276f8ff" containerName="barbican-api" Jan 26 11:37:35 crc kubenswrapper[4867]: I0126 11:37:35.789942 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="b2643e95-59cb-42a2-982e-96a7d732e5e4" containerName="ceilometer-central-agent" Jan 26 11:37:35 crc kubenswrapper[4867]: I0126 11:37:35.789949 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="195fa02f-5887-4d8e-a103-2261e65a9c96" containerName="glance-httpd" Jan 26 11:37:35 crc kubenswrapper[4867]: I0126 11:37:35.789977 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="8434703b-0a5f-49f0-8877-2048d276f8ff" containerName="barbican-api-log" Jan 26 11:37:35 crc kubenswrapper[4867]: I0126 11:37:35.789991 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="29a2ed9f-c519-444f-922f-4cebf5b3893e" containerName="mariadb-account-create-update" Jan 26 11:37:35 crc kubenswrapper[4867]: I0126 11:37:35.790014 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="edc2a642-41e4-4162-aa08-1cecd958b32c" containerName="probe" Jan 26 11:37:35 crc kubenswrapper[4867]: I0126 11:37:35.790036 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="edc2a642-41e4-4162-aa08-1cecd958b32c" containerName="cinder-scheduler" Jan 26 11:37:35 crc kubenswrapper[4867]: I0126 11:37:35.790045 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="195fa02f-5887-4d8e-a103-2261e65a9c96" 
containerName="glance-log" Jan 26 11:37:35 crc kubenswrapper[4867]: I0126 11:37:35.790060 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="b2643e95-59cb-42a2-982e-96a7d732e5e4" containerName="sg-core" Jan 26 11:37:35 crc kubenswrapper[4867]: I0126 11:37:35.790084 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="227ae5b6-e7d6-45ce-b333-3dd508d56b35" containerName="dnsmasq-dns" Jan 26 11:37:35 crc kubenswrapper[4867]: I0126 11:37:35.790091 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="b2643e95-59cb-42a2-982e-96a7d732e5e4" containerName="proxy-httpd" Jan 26 11:37:35 crc kubenswrapper[4867]: I0126 11:37:35.790099 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="dfc017b6-886f-48d3-8f1e-cef59e587503" containerName="glance-httpd" Jan 26 11:37:35 crc kubenswrapper[4867]: I0126 11:37:35.790122 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="dfc017b6-886f-48d3-8f1e-cef59e587503" containerName="glance-log" Jan 26 11:37:35 crc kubenswrapper[4867]: I0126 11:37:35.803425 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-db-create-zvxt9" Jan 26 11:37:35 crc kubenswrapper[4867]: I0126 11:37:35.839385 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-zvxt9"] Jan 26 11:37:35 crc kubenswrapper[4867]: I0126 11:37:35.862397 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"dfc017b6-886f-48d3-8f1e-cef59e587503\" (UID: \"dfc017b6-886f-48d3-8f1e-cef59e587503\") " Jan 26 11:37:35 crc kubenswrapper[4867]: I0126 11:37:35.862486 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/dfc017b6-886f-48d3-8f1e-cef59e587503-logs\") pod \"dfc017b6-886f-48d3-8f1e-cef59e587503\" (UID: \"dfc017b6-886f-48d3-8f1e-cef59e587503\") " Jan 26 11:37:35 crc kubenswrapper[4867]: I0126 11:37:35.862582 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dfc017b6-886f-48d3-8f1e-cef59e587503-config-data\") pod \"dfc017b6-886f-48d3-8f1e-cef59e587503\" (UID: \"dfc017b6-886f-48d3-8f1e-cef59e587503\") " Jan 26 11:37:35 crc kubenswrapper[4867]: I0126 11:37:35.862660 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/dfc017b6-886f-48d3-8f1e-cef59e587503-httpd-run\") pod \"dfc017b6-886f-48d3-8f1e-cef59e587503\" (UID: \"dfc017b6-886f-48d3-8f1e-cef59e587503\") " Jan 26 11:37:35 crc kubenswrapper[4867]: I0126 11:37:35.862690 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/dfc017b6-886f-48d3-8f1e-cef59e587503-internal-tls-certs\") pod \"dfc017b6-886f-48d3-8f1e-cef59e587503\" (UID: \"dfc017b6-886f-48d3-8f1e-cef59e587503\") " Jan 26 11:37:35 crc kubenswrapper[4867]: I0126 11:37:35.862736 4867 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qjv49\" (UniqueName: \"kubernetes.io/projected/dfc017b6-886f-48d3-8f1e-cef59e587503-kube-api-access-qjv49\") pod \"dfc017b6-886f-48d3-8f1e-cef59e587503\" (UID: \"dfc017b6-886f-48d3-8f1e-cef59e587503\") " Jan 26 11:37:35 crc kubenswrapper[4867]: I0126 11:37:35.862765 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dfc017b6-886f-48d3-8f1e-cef59e587503-combined-ca-bundle\") pod \"dfc017b6-886f-48d3-8f1e-cef59e587503\" (UID: \"dfc017b6-886f-48d3-8f1e-cef59e587503\") " Jan 26 11:37:35 crc kubenswrapper[4867]: I0126 11:37:35.862832 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dfc017b6-886f-48d3-8f1e-cef59e587503-scripts\") pod \"dfc017b6-886f-48d3-8f1e-cef59e587503\" (UID: \"dfc017b6-886f-48d3-8f1e-cef59e587503\") " Jan 26 11:37:35 crc kubenswrapper[4867]: I0126 11:37:35.863118 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0bca6d10-3712-4078-885f-ff14590bbbe8-operator-scripts\") pod \"nova-api-db-create-zvxt9\" (UID: \"0bca6d10-3712-4078-885f-ff14590bbbe8\") " pod="openstack/nova-api-db-create-zvxt9" Jan 26 11:37:35 crc kubenswrapper[4867]: I0126 11:37:35.863207 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w4lwv\" (UniqueName: \"kubernetes.io/projected/0bca6d10-3712-4078-885f-ff14590bbbe8-kube-api-access-w4lwv\") pod \"nova-api-db-create-zvxt9\" (UID: \"0bca6d10-3712-4078-885f-ff14590bbbe8\") " pod="openstack/nova-api-db-create-zvxt9" Jan 26 11:37:35 crc kubenswrapper[4867]: I0126 11:37:35.865649 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/dfc017b6-886f-48d3-8f1e-cef59e587503-logs" (OuterVolumeSpecName: "logs") pod "dfc017b6-886f-48d3-8f1e-cef59e587503" (UID: "dfc017b6-886f-48d3-8f1e-cef59e587503"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 11:37:35 crc kubenswrapper[4867]: I0126 11:37:35.865887 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dfc017b6-886f-48d3-8f1e-cef59e587503-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "dfc017b6-886f-48d3-8f1e-cef59e587503" (UID: "dfc017b6-886f-48d3-8f1e-cef59e587503"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 11:37:35 crc kubenswrapper[4867]: I0126 11:37:35.868272 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-5668f68b6c-7674j" event={"ID":"39829bfc-df9a-4123-a069-f99e3032615d","Type":"ContainerStarted","Data":"88398362ac2f07d2ec072bf459bfc1bc3c086bd5c577953657f557028f53f16b"} Jan 26 11:37:35 crc kubenswrapper[4867]: I0126 11:37:35.868321 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-5668f68b6c-7674j" event={"ID":"39829bfc-df9a-4123-a069-f99e3032615d","Type":"ContainerStarted","Data":"59923d70fa1fdc06d268a58387047dc4ba8f8c24fc17c8334d6d8a8b966e7207"} Jan 26 11:37:35 crc kubenswrapper[4867]: I0126 11:37:35.870435 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dfc017b6-886f-48d3-8f1e-cef59e587503-scripts" (OuterVolumeSpecName: "scripts") pod "dfc017b6-886f-48d3-8f1e-cef59e587503" (UID: "dfc017b6-886f-48d3-8f1e-cef59e587503"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 11:37:35 crc kubenswrapper[4867]: I0126 11:37:35.876757 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dfc017b6-886f-48d3-8f1e-cef59e587503-kube-api-access-qjv49" (OuterVolumeSpecName: "kube-api-access-qjv49") pod "dfc017b6-886f-48d3-8f1e-cef59e587503" (UID: "dfc017b6-886f-48d3-8f1e-cef59e587503"). InnerVolumeSpecName "kube-api-access-qjv49". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 11:37:35 crc kubenswrapper[4867]: I0126 11:37:35.879187 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage05-crc" (OuterVolumeSpecName: "glance") pod "dfc017b6-886f-48d3-8f1e-cef59e587503" (UID: "dfc017b6-886f-48d3-8f1e-cef59e587503"). InnerVolumeSpecName "local-storage05-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 26 11:37:35 crc kubenswrapper[4867]: I0126 11:37:35.892029 4867 generic.go:334] "Generic (PLEG): container finished" podID="f114731c-0ed9-4d58-90f0-b670a856adf0" containerID="2883219d24b973f8fe6d4c2294356946b1c2a17b06b3a1dc9e38f248ee02bd4d" exitCode=0 Jan 26 11:37:35 crc kubenswrapper[4867]: I0126 11:37:35.892130 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-5f459cfdcb-t5qhs" event={"ID":"f114731c-0ed9-4d58-90f0-b670a856adf0","Type":"ContainerDied","Data":"2883219d24b973f8fe6d4c2294356946b1c2a17b06b3a1dc9e38f248ee02bd4d"} Jan 26 11:37:35 crc kubenswrapper[4867]: I0126 11:37:35.917728 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"edc2a642-41e4-4162-aa08-1cecd958b32c","Type":"ContainerDied","Data":"c05928559a0b52dea96c1efd15bc47781c1ac77e81636bc416d302ecf14afc35"} Jan 26 11:37:35 crc kubenswrapper[4867]: I0126 11:37:35.917783 4867 scope.go:117] "RemoveContainer" containerID="8201ba714254e8a26cfc2df5bb7a49512417f99d3cc37e33c15153214f82b6e1" Jan 26 11:37:35 crc 
kubenswrapper[4867]: I0126 11:37:35.917874 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 26 11:37:35 crc kubenswrapper[4867]: I0126 11:37:35.926901 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-db-create-rhnn5"] Jan 26 11:37:35 crc kubenswrapper[4867]: I0126 11:37:35.938131 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-rhnn5" Jan 26 11:37:35 crc kubenswrapper[4867]: I0126 11:37:35.966348 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-rhnn5"] Jan 26 11:37:35 crc kubenswrapper[4867]: I0126 11:37:35.966402 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 26 11:37:35 crc kubenswrapper[4867]: I0126 11:37:35.966421 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b2643e95-59cb-42a2-982e-96a7d732e5e4","Type":"ContainerDied","Data":"c96b3bb0dd6c5bbfc3ebec2ce18e6f8ce55cb112a0555779ce32dadd93d91905"} Jan 26 11:37:35 crc kubenswrapper[4867]: I0126 11:37:35.968982 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0bca6d10-3712-4078-885f-ff14590bbbe8-operator-scripts\") pod \"nova-api-db-create-zvxt9\" (UID: \"0bca6d10-3712-4078-885f-ff14590bbbe8\") " pod="openstack/nova-api-db-create-zvxt9" Jan 26 11:37:35 crc kubenswrapper[4867]: I0126 11:37:35.969167 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w4lwv\" (UniqueName: \"kubernetes.io/projected/0bca6d10-3712-4078-885f-ff14590bbbe8-kube-api-access-w4lwv\") pod \"nova-api-db-create-zvxt9\" (UID: \"0bca6d10-3712-4078-885f-ff14590bbbe8\") " pod="openstack/nova-api-db-create-zvxt9" Jan 26 11:37:35 crc kubenswrapper[4867]: I0126 11:37:35.969554 4867 
reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/dfc017b6-886f-48d3-8f1e-cef59e587503-httpd-run\") on node \"crc\" DevicePath \"\"" Jan 26 11:37:35 crc kubenswrapper[4867]: I0126 11:37:35.969579 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qjv49\" (UniqueName: \"kubernetes.io/projected/dfc017b6-886f-48d3-8f1e-cef59e587503-kube-api-access-qjv49\") on node \"crc\" DevicePath \"\"" Jan 26 11:37:35 crc kubenswrapper[4867]: I0126 11:37:35.969635 4867 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dfc017b6-886f-48d3-8f1e-cef59e587503-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 11:37:35 crc kubenswrapper[4867]: I0126 11:37:35.969656 4867 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") on node \"crc\" " Jan 26 11:37:35 crc kubenswrapper[4867]: I0126 11:37:35.969666 4867 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/dfc017b6-886f-48d3-8f1e-cef59e587503-logs\") on node \"crc\" DevicePath \"\"" Jan 26 11:37:35 crc kubenswrapper[4867]: I0126 11:37:35.970871 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0bca6d10-3712-4078-885f-ff14590bbbe8-operator-scripts\") pod \"nova-api-db-create-zvxt9\" (UID: \"0bca6d10-3712-4078-885f-ff14590bbbe8\") " pod="openstack/nova-api-db-create-zvxt9" Jan 26 11:37:35 crc kubenswrapper[4867]: I0126 11:37:35.987530 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-d424-account-create-update-ldcwb"] Jan 26 11:37:35 crc kubenswrapper[4867]: I0126 11:37:35.989723 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-d424-account-create-update-ldcwb" Jan 26 11:37:35 crc kubenswrapper[4867]: I0126 11:37:35.998584 4867 scope.go:117] "RemoveContainer" containerID="fd746aa8feaf83c5272fe2cb0448ed3ff38f2bef6118c79888bcfa4f05b2f688" Jan 26 11:37:35 crc kubenswrapper[4867]: I0126 11:37:35.999064 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-db-secret" Jan 26 11:37:36 crc kubenswrapper[4867]: I0126 11:37:36.000834 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dfc017b6-886f-48d3-8f1e-cef59e587503-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "dfc017b6-886f-48d3-8f1e-cef59e587503" (UID: "dfc017b6-886f-48d3-8f1e-cef59e587503"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 11:37:36 crc kubenswrapper[4867]: I0126 11:37:36.001077 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-d424-account-create-update-ldcwb"] Jan 26 11:37:36 crc kubenswrapper[4867]: I0126 11:37:36.005357 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"195fa02f-5887-4d8e-a103-2261e65a9c96","Type":"ContainerDied","Data":"bd174a2d49c1dceefc61602f0660fa50abc5a093fcb3a322989b01dc4d95daee"} Jan 26 11:37:36 crc kubenswrapper[4867]: I0126 11:37:36.005601 4867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 26 11:37:36 crc kubenswrapper[4867]: I0126 11:37:36.011651 4867 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage05-crc" (UniqueName: "kubernetes.io/local-volume/local-storage05-crc") on node "crc" Jan 26 11:37:36 crc kubenswrapper[4867]: I0126 11:37:36.022959 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w4lwv\" (UniqueName: \"kubernetes.io/projected/0bca6d10-3712-4078-885f-ff14590bbbe8-kube-api-access-w4lwv\") pod \"nova-api-db-create-zvxt9\" (UID: \"0bca6d10-3712-4078-885f-ff14590bbbe8\") " pod="openstack/nova-api-db-create-zvxt9" Jan 26 11:37:36 crc kubenswrapper[4867]: I0126 11:37:36.030175 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dfc017b6-886f-48d3-8f1e-cef59e587503-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "dfc017b6-886f-48d3-8f1e-cef59e587503" (UID: "dfc017b6-886f-48d3-8f1e-cef59e587503"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 11:37:36 crc kubenswrapper[4867]: I0126 11:37:36.032451 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dfc017b6-886f-48d3-8f1e-cef59e587503-config-data" (OuterVolumeSpecName: "config-data") pod "dfc017b6-886f-48d3-8f1e-cef59e587503" (UID: "dfc017b6-886f-48d3-8f1e-cef59e587503"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 11:37:36 crc kubenswrapper[4867]: I0126 11:37:36.047943 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-db-create-jxjlp"] Jan 26 11:37:36 crc kubenswrapper[4867]: I0126 11:37:36.049646 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-db-create-jxjlp" Jan 26 11:37:36 crc kubenswrapper[4867]: I0126 11:37:36.055011 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"0dba3b09-195d-416a-b4af-7f252c8abd0d","Type":"ContainerStarted","Data":"819efac7f6e4f6fd638d6bf9f8ccbdfc519d706ce06a7335974c5f6ecd8a8ec5"} Jan 26 11:37:36 crc kubenswrapper[4867]: I0126 11:37:36.074454 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 26 11:37:36 crc kubenswrapper[4867]: I0126 11:37:36.075909 4867 generic.go:334] "Generic (PLEG): container finished" podID="dfc017b6-886f-48d3-8f1e-cef59e587503" containerID="55dc239744276d01319f51e765715f8fc0f8e79ceb6320f57be6b9eed29028cd" exitCode=0 Jan 26 11:37:36 crc kubenswrapper[4867]: I0126 11:37:36.076018 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"dfc017b6-886f-48d3-8f1e-cef59e587503","Type":"ContainerDied","Data":"55dc239744276d01319f51e765715f8fc0f8e79ceb6320f57be6b9eed29028cd"} Jan 26 11:37:36 crc kubenswrapper[4867]: I0126 11:37:36.076051 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"dfc017b6-886f-48d3-8f1e-cef59e587503","Type":"ContainerDied","Data":"10e5b02008d86a9a8053564f90665fce105450d0e3e22d188f48a133e847b52c"} Jan 26 11:37:36 crc kubenswrapper[4867]: I0126 11:37:36.076107 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xg8tv\" (UniqueName: \"kubernetes.io/projected/91e79247-9d54-4108-a975-17c7603c3f96-kube-api-access-xg8tv\") pod \"nova-api-d424-account-create-update-ldcwb\" (UID: \"91e79247-9d54-4108-a975-17c7603c3f96\") " pod="openstack/nova-api-d424-account-create-update-ldcwb" Jan 26 11:37:36 crc kubenswrapper[4867]: I0126 11:37:36.076147 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"kube-api-access-h6rxc\" (UniqueName: \"kubernetes.io/projected/f576d352-22e9-427b-a2d1-81bff0a85eb1-kube-api-access-h6rxc\") pod \"nova-cell0-db-create-rhnn5\" (UID: \"f576d352-22e9-427b-a2d1-81bff0a85eb1\") " pod="openstack/nova-cell0-db-create-rhnn5" Jan 26 11:37:36 crc kubenswrapper[4867]: I0126 11:37:36.076174 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 26 11:37:36 crc kubenswrapper[4867]: I0126 11:37:36.076244 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f576d352-22e9-427b-a2d1-81bff0a85eb1-operator-scripts\") pod \"nova-cell0-db-create-rhnn5\" (UID: \"f576d352-22e9-427b-a2d1-81bff0a85eb1\") " pod="openstack/nova-cell0-db-create-rhnn5" Jan 26 11:37:36 crc kubenswrapper[4867]: I0126 11:37:36.076312 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/91e79247-9d54-4108-a975-17c7603c3f96-operator-scripts\") pod \"nova-api-d424-account-create-update-ldcwb\" (UID: \"91e79247-9d54-4108-a975-17c7603c3f96\") " pod="openstack/nova-api-d424-account-create-update-ldcwb" Jan 26 11:37:36 crc kubenswrapper[4867]: I0126 11:37:36.077153 4867 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dfc017b6-886f-48d3-8f1e-cef59e587503-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 11:37:36 crc kubenswrapper[4867]: I0126 11:37:36.078292 4867 reconciler_common.go:293] "Volume detached for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") on node \"crc\" DevicePath \"\"" Jan 26 11:37:36 crc kubenswrapper[4867]: I0126 11:37:36.078323 4867 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/dfc017b6-886f-48d3-8f1e-cef59e587503-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 11:37:36 crc kubenswrapper[4867]: I0126 11:37:36.078337 4867 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/dfc017b6-886f-48d3-8f1e-cef59e587503-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 26 11:37:36 crc kubenswrapper[4867]: I0126 11:37:36.095316 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 26 11:37:36 crc kubenswrapper[4867]: I0126 11:37:36.130782 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-jxjlp"] Jan 26 11:37:36 crc kubenswrapper[4867]: I0126 11:37:36.132775 4867 generic.go:334] "Generic (PLEG): container finished" podID="01f2f326-18ee-4ee2-823b-09ccf4cfefc1" containerID="0ab92e7529c004a8b1260a771503d54719c3d925bc313b142ad7ce0bbf3864c1" exitCode=0 Jan 26 11:37:36 crc kubenswrapper[4867]: I0126 11:37:36.132956 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-6dc6f6fb68-dx2nc" event={"ID":"01f2f326-18ee-4ee2-823b-09ccf4cfefc1","Type":"ContainerDied","Data":"0ab92e7529c004a8b1260a771503d54719c3d925bc313b142ad7ce0bbf3864c1"} Jan 26 11:37:36 crc kubenswrapper[4867]: I0126 11:37:36.164215 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Jan 26 11:37:36 crc kubenswrapper[4867]: I0126 11:37:36.176818 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-db-create-zvxt9" Jan 26 11:37:36 crc kubenswrapper[4867]: I0126 11:37:36.179488 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f576d352-22e9-427b-a2d1-81bff0a85eb1-operator-scripts\") pod \"nova-cell0-db-create-rhnn5\" (UID: \"f576d352-22e9-427b-a2d1-81bff0a85eb1\") " pod="openstack/nova-cell0-db-create-rhnn5" Jan 26 11:37:36 crc kubenswrapper[4867]: I0126 11:37:36.180398 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f576d352-22e9-427b-a2d1-81bff0a85eb1-operator-scripts\") pod \"nova-cell0-db-create-rhnn5\" (UID: \"f576d352-22e9-427b-a2d1-81bff0a85eb1\") " pod="openstack/nova-cell0-db-create-rhnn5" Jan 26 11:37:36 crc kubenswrapper[4867]: I0126 11:37:36.180521 4867 scope.go:117] "RemoveContainer" containerID="4e6b9eacbbc4c8c4c89d557b0b32bc6cd5b66fe33a8f7cb7cc1d83ff1d513941" Jan 26 11:37:36 crc kubenswrapper[4867]: I0126 11:37:36.184386 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mtwnx\" (UniqueName: \"kubernetes.io/projected/7f4d3e01-1c2e-45ae-952f-c05b658b2aa4-kube-api-access-mtwnx\") pod \"nova-cell1-db-create-jxjlp\" (UID: \"7f4d3e01-1c2e-45ae-952f-c05b658b2aa4\") " pod="openstack/nova-cell1-db-create-jxjlp" Jan 26 11:37:36 crc kubenswrapper[4867]: I0126 11:37:36.184477 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7f4d3e01-1c2e-45ae-952f-c05b658b2aa4-operator-scripts\") pod \"nova-cell1-db-create-jxjlp\" (UID: \"7f4d3e01-1c2e-45ae-952f-c05b658b2aa4\") " pod="openstack/nova-cell1-db-create-jxjlp" Jan 26 11:37:36 crc kubenswrapper[4867]: I0126 11:37:36.184618 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/91e79247-9d54-4108-a975-17c7603c3f96-operator-scripts\") pod \"nova-api-d424-account-create-update-ldcwb\" (UID: \"91e79247-9d54-4108-a975-17c7603c3f96\") " pod="openstack/nova-api-d424-account-create-update-ldcwb" Jan 26 11:37:36 crc kubenswrapper[4867]: I0126 11:37:36.184927 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 26 11:37:36 crc kubenswrapper[4867]: I0126 11:37:36.185152 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xg8tv\" (UniqueName: \"kubernetes.io/projected/91e79247-9d54-4108-a975-17c7603c3f96-kube-api-access-xg8tv\") pod \"nova-api-d424-account-create-update-ldcwb\" (UID: \"91e79247-9d54-4108-a975-17c7603c3f96\") " pod="openstack/nova-api-d424-account-create-update-ldcwb" Jan 26 11:37:36 crc kubenswrapper[4867]: I0126 11:37:36.185200 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h6rxc\" (UniqueName: \"kubernetes.io/projected/f576d352-22e9-427b-a2d1-81bff0a85eb1-kube-api-access-h6rxc\") pod \"nova-cell0-db-create-rhnn5\" (UID: \"f576d352-22e9-427b-a2d1-81bff0a85eb1\") " pod="openstack/nova-cell0-db-create-rhnn5" Jan 26 11:37:36 crc kubenswrapper[4867]: I0126 11:37:36.186002 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/91e79247-9d54-4108-a975-17c7603c3f96-operator-scripts\") pod \"nova-api-d424-account-create-update-ldcwb\" (UID: \"91e79247-9d54-4108-a975-17c7603c3f96\") " pod="openstack/nova-api-d424-account-create-update-ldcwb" Jan 26 11:37:36 crc kubenswrapper[4867]: I0126 11:37:36.193351 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Jan 26 11:37:36 crc kubenswrapper[4867]: I0126 11:37:36.218869 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack/cinder-scheduler-0"] Jan 26 11:37:36 crc kubenswrapper[4867]: I0126 11:37:36.252524 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h6rxc\" (UniqueName: \"kubernetes.io/projected/f576d352-22e9-427b-a2d1-81bff0a85eb1-kube-api-access-h6rxc\") pod \"nova-cell0-db-create-rhnn5\" (UID: \"f576d352-22e9-427b-a2d1-81bff0a85eb1\") " pod="openstack/nova-cell0-db-create-rhnn5" Jan 26 11:37:36 crc kubenswrapper[4867]: I0126 11:37:36.257058 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xg8tv\" (UniqueName: \"kubernetes.io/projected/91e79247-9d54-4108-a975-17c7603c3f96-kube-api-access-xg8tv\") pod \"nova-api-d424-account-create-update-ldcwb\" (UID: \"91e79247-9d54-4108-a975-17c7603c3f96\") " pod="openstack/nova-api-d424-account-create-update-ldcwb" Jan 26 11:37:36 crc kubenswrapper[4867]: I0126 11:37:36.261298 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 26 11:37:36 crc kubenswrapper[4867]: I0126 11:37:36.300051 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ed0b64b8-8e76-407e-8be8-c6b6cc5d4b75-scripts\") pod \"cinder-scheduler-0\" (UID: \"ed0b64b8-8e76-407e-8be8-c6b6cc5d4b75\") " pod="openstack/cinder-scheduler-0" Jan 26 11:37:36 crc kubenswrapper[4867]: I0126 11:37:36.300108 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fcf75\" (UniqueName: \"kubernetes.io/projected/ed0b64b8-8e76-407e-8be8-c6b6cc5d4b75-kube-api-access-fcf75\") pod \"cinder-scheduler-0\" (UID: \"ed0b64b8-8e76-407e-8be8-c6b6cc5d4b75\") " pod="openstack/cinder-scheduler-0" Jan 26 11:37:36 crc kubenswrapper[4867]: I0126 11:37:36.300173 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mtwnx\" (UniqueName: 
\"kubernetes.io/projected/7f4d3e01-1c2e-45ae-952f-c05b658b2aa4-kube-api-access-mtwnx\") pod \"nova-cell1-db-create-jxjlp\" (UID: \"7f4d3e01-1c2e-45ae-952f-c05b658b2aa4\") " pod="openstack/nova-cell1-db-create-jxjlp" Jan 26 11:37:36 crc kubenswrapper[4867]: I0126 11:37:36.300196 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ed0b64b8-8e76-407e-8be8-c6b6cc5d4b75-config-data\") pod \"cinder-scheduler-0\" (UID: \"ed0b64b8-8e76-407e-8be8-c6b6cc5d4b75\") " pod="openstack/cinder-scheduler-0" Jan 26 11:37:36 crc kubenswrapper[4867]: I0126 11:37:36.300214 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7f4d3e01-1c2e-45ae-952f-c05b658b2aa4-operator-scripts\") pod \"nova-cell1-db-create-jxjlp\" (UID: \"7f4d3e01-1c2e-45ae-952f-c05b658b2aa4\") " pod="openstack/nova-cell1-db-create-jxjlp" Jan 26 11:37:36 crc kubenswrapper[4867]: I0126 11:37:36.300269 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ed0b64b8-8e76-407e-8be8-c6b6cc5d4b75-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"ed0b64b8-8e76-407e-8be8-c6b6cc5d4b75\") " pod="openstack/cinder-scheduler-0" Jan 26 11:37:36 crc kubenswrapper[4867]: I0126 11:37:36.300288 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/ed0b64b8-8e76-407e-8be8-c6b6cc5d4b75-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"ed0b64b8-8e76-407e-8be8-c6b6cc5d4b75\") " pod="openstack/cinder-scheduler-0" Jan 26 11:37:36 crc kubenswrapper[4867]: I0126 11:37:36.300375 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/ed0b64b8-8e76-407e-8be8-c6b6cc5d4b75-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"ed0b64b8-8e76-407e-8be8-c6b6cc5d4b75\") " pod="openstack/cinder-scheduler-0" Jan 26 11:37:36 crc kubenswrapper[4867]: I0126 11:37:36.301892 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7f4d3e01-1c2e-45ae-952f-c05b658b2aa4-operator-scripts\") pod \"nova-cell1-db-create-jxjlp\" (UID: \"7f4d3e01-1c2e-45ae-952f-c05b658b2aa4\") " pod="openstack/nova-cell1-db-create-jxjlp" Jan 26 11:37:36 crc kubenswrapper[4867]: I0126 11:37:36.305692 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 26 11:37:36 crc kubenswrapper[4867]: I0126 11:37:36.333289 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 26 11:37:36 crc kubenswrapper[4867]: I0126 11:37:36.336017 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-rhnn5" Jan 26 11:37:36 crc kubenswrapper[4867]: I0126 11:37:36.336083 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 26 11:37:36 crc kubenswrapper[4867]: I0126 11:37:36.340831 4867 scope.go:117] "RemoveContainer" containerID="dd823c865671eed3b9056413ff43d7b65162230282ac78a467be7e9cfae7dccb" Jan 26 11:37:36 crc kubenswrapper[4867]: I0126 11:37:36.348932 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 26 11:37:36 crc kubenswrapper[4867]: I0126 11:37:36.350033 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mtwnx\" (UniqueName: \"kubernetes.io/projected/7f4d3e01-1c2e-45ae-952f-c05b658b2aa4-kube-api-access-mtwnx\") pod \"nova-cell1-db-create-jxjlp\" (UID: \"7f4d3e01-1c2e-45ae-952f-c05b658b2aa4\") " pod="openstack/nova-cell1-db-create-jxjlp" Jan 26 11:37:36 crc kubenswrapper[4867]: I0126 11:37:36.364009 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 26 11:37:36 crc kubenswrapper[4867]: I0126 11:37:36.403266 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ed0b64b8-8e76-407e-8be8-c6b6cc5d4b75-scripts\") pod \"cinder-scheduler-0\" (UID: \"ed0b64b8-8e76-407e-8be8-c6b6cc5d4b75\") " pod="openstack/cinder-scheduler-0" Jan 26 11:37:36 crc kubenswrapper[4867]: I0126 11:37:36.403311 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fcf75\" (UniqueName: \"kubernetes.io/projected/ed0b64b8-8e76-407e-8be8-c6b6cc5d4b75-kube-api-access-fcf75\") pod \"cinder-scheduler-0\" (UID: \"ed0b64b8-8e76-407e-8be8-c6b6cc5d4b75\") " pod="openstack/cinder-scheduler-0" Jan 26 11:37:36 crc kubenswrapper[4867]: I0126 11:37:36.403372 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ed0b64b8-8e76-407e-8be8-c6b6cc5d4b75-config-data\") pod \"cinder-scheduler-0\" (UID: 
\"ed0b64b8-8e76-407e-8be8-c6b6cc5d4b75\") " pod="openstack/cinder-scheduler-0" Jan 26 11:37:36 crc kubenswrapper[4867]: I0126 11:37:36.403408 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ed0b64b8-8e76-407e-8be8-c6b6cc5d4b75-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"ed0b64b8-8e76-407e-8be8-c6b6cc5d4b75\") " pod="openstack/cinder-scheduler-0" Jan 26 11:37:36 crc kubenswrapper[4867]: I0126 11:37:36.403430 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/ed0b64b8-8e76-407e-8be8-c6b6cc5d4b75-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"ed0b64b8-8e76-407e-8be8-c6b6cc5d4b75\") " pod="openstack/cinder-scheduler-0" Jan 26 11:37:36 crc kubenswrapper[4867]: I0126 11:37:36.403495 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ed0b64b8-8e76-407e-8be8-c6b6cc5d4b75-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"ed0b64b8-8e76-407e-8be8-c6b6cc5d4b75\") " pod="openstack/cinder-scheduler-0" Jan 26 11:37:36 crc kubenswrapper[4867]: I0126 11:37:36.411133 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ed0b64b8-8e76-407e-8be8-c6b6cc5d4b75-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"ed0b64b8-8e76-407e-8be8-c6b6cc5d4b75\") " pod="openstack/cinder-scheduler-0" Jan 26 11:37:36 crc kubenswrapper[4867]: I0126 11:37:36.417447 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ed0b64b8-8e76-407e-8be8-c6b6cc5d4b75-config-data\") pod \"cinder-scheduler-0\" (UID: \"ed0b64b8-8e76-407e-8be8-c6b6cc5d4b75\") " pod="openstack/cinder-scheduler-0" Jan 26 11:37:36 crc kubenswrapper[4867]: I0126 11:37:36.417557 4867 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/ed0b64b8-8e76-407e-8be8-c6b6cc5d4b75-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"ed0b64b8-8e76-407e-8be8-c6b6cc5d4b75\") " pod="openstack/cinder-scheduler-0" Jan 26 11:37:36 crc kubenswrapper[4867]: I0126 11:37:36.417961 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ed0b64b8-8e76-407e-8be8-c6b6cc5d4b75-scripts\") pod \"cinder-scheduler-0\" (UID: \"ed0b64b8-8e76-407e-8be8-c6b6cc5d4b75\") " pod="openstack/cinder-scheduler-0" Jan 26 11:37:36 crc kubenswrapper[4867]: I0126 11:37:36.418381 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-d424-account-create-update-ldcwb" Jan 26 11:37:36 crc kubenswrapper[4867]: I0126 11:37:36.425978 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ed0b64b8-8e76-407e-8be8-c6b6cc5d4b75-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"ed0b64b8-8e76-407e-8be8-c6b6cc5d4b75\") " pod="openstack/cinder-scheduler-0" Jan 26 11:37:36 crc kubenswrapper[4867]: I0126 11:37:36.427359 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstackclient" podStartSLOduration=3.391904186 podStartE2EDuration="20.427332555s" podCreationTimestamp="2026-01-26 11:37:16 +0000 UTC" firstStartedPulling="2026-01-26 11:37:17.255503961 +0000 UTC m=+1186.954078871" lastFinishedPulling="2026-01-26 11:37:34.29093233 +0000 UTC m=+1203.989507240" observedRunningTime="2026-01-26 11:37:36.131077823 +0000 UTC m=+1205.829652733" watchObservedRunningTime="2026-01-26 11:37:36.427332555 +0000 UTC m=+1206.125907465" Jan 26 11:37:36 crc kubenswrapper[4867]: I0126 11:37:36.440987 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fcf75\" (UniqueName: 
\"kubernetes.io/projected/ed0b64b8-8e76-407e-8be8-c6b6cc5d4b75-kube-api-access-fcf75\") pod \"cinder-scheduler-0\" (UID: \"ed0b64b8-8e76-407e-8be8-c6b6cc5d4b75\") " pod="openstack/cinder-scheduler-0" Jan 26 11:37:36 crc kubenswrapper[4867]: I0126 11:37:36.470948 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-jxjlp" Jan 26 11:37:36 crc kubenswrapper[4867]: I0126 11:37:36.477168 4867 scope.go:117] "RemoveContainer" containerID="f0c7e3e53676b600a0f8387db3e43d43aaf869729a99557deb943b08f2fdfd33" Jan 26 11:37:36 crc kubenswrapper[4867]: I0126 11:37:36.479999 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 26 11:37:36 crc kubenswrapper[4867]: I0126 11:37:36.506625 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2f4c7973-1227-4188-8be0-766b1fdcd108-run-httpd\") pod \"ceilometer-0\" (UID: \"2f4c7973-1227-4188-8be0-766b1fdcd108\") " pod="openstack/ceilometer-0" Jan 26 11:37:36 crc kubenswrapper[4867]: I0126 11:37:36.506677 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-97j7x\" (UniqueName: \"kubernetes.io/projected/2f4c7973-1227-4188-8be0-766b1fdcd108-kube-api-access-97j7x\") pod \"ceilometer-0\" (UID: \"2f4c7973-1227-4188-8be0-766b1fdcd108\") " pod="openstack/ceilometer-0" Jan 26 11:37:36 crc kubenswrapper[4867]: I0126 11:37:36.506740 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2f4c7973-1227-4188-8be0-766b1fdcd108-config-data\") pod \"ceilometer-0\" (UID: \"2f4c7973-1227-4188-8be0-766b1fdcd108\") " pod="openstack/ceilometer-0" Jan 26 11:37:36 crc kubenswrapper[4867]: I0126 11:37:36.506767 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2f4c7973-1227-4188-8be0-766b1fdcd108-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"2f4c7973-1227-4188-8be0-766b1fdcd108\") " pod="openstack/ceilometer-0" Jan 26 11:37:36 crc kubenswrapper[4867]: I0126 11:37:36.506784 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/2f4c7973-1227-4188-8be0-766b1fdcd108-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"2f4c7973-1227-4188-8be0-766b1fdcd108\") " pod="openstack/ceilometer-0" Jan 26 11:37:36 crc kubenswrapper[4867]: I0126 11:37:36.506829 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2f4c7973-1227-4188-8be0-766b1fdcd108-log-httpd\") pod \"ceilometer-0\" (UID: \"2f4c7973-1227-4188-8be0-766b1fdcd108\") " pod="openstack/ceilometer-0" Jan 26 11:37:36 crc kubenswrapper[4867]: I0126 11:37:36.506878 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2f4c7973-1227-4188-8be0-766b1fdcd108-scripts\") pod \"ceilometer-0\" (UID: \"2f4c7973-1227-4188-8be0-766b1fdcd108\") " pod="openstack/ceilometer-0" Jan 26 11:37:36 crc kubenswrapper[4867]: I0126 11:37:36.523270 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-bab4-account-create-update-zsjwg"] Jan 26 11:37:36 crc kubenswrapper[4867]: I0126 11:37:36.524540 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-bab4-account-create-update-zsjwg" Jan 26 11:37:36 crc kubenswrapper[4867]: I0126 11:37:36.546407 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-db-secret" Jan 26 11:37:36 crc kubenswrapper[4867]: I0126 11:37:36.617901 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2f4c7973-1227-4188-8be0-766b1fdcd108-log-httpd\") pod \"ceilometer-0\" (UID: \"2f4c7973-1227-4188-8be0-766b1fdcd108\") " pod="openstack/ceilometer-0" Jan 26 11:37:36 crc kubenswrapper[4867]: I0126 11:37:36.618594 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2f4c7973-1227-4188-8be0-766b1fdcd108-log-httpd\") pod \"ceilometer-0\" (UID: \"2f4c7973-1227-4188-8be0-766b1fdcd108\") " pod="openstack/ceilometer-0" Jan 26 11:37:36 crc kubenswrapper[4867]: I0126 11:37:36.618789 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2f4c7973-1227-4188-8be0-766b1fdcd108-scripts\") pod \"ceilometer-0\" (UID: \"2f4c7973-1227-4188-8be0-766b1fdcd108\") " pod="openstack/ceilometer-0" Jan 26 11:37:36 crc kubenswrapper[4867]: I0126 11:37:36.618827 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/10fb1d1b-a85c-4eb8-a5ae-04d49b5ef7af-operator-scripts\") pod \"nova-cell0-bab4-account-create-update-zsjwg\" (UID: \"10fb1d1b-a85c-4eb8-a5ae-04d49b5ef7af\") " pod="openstack/nova-cell0-bab4-account-create-update-zsjwg" Jan 26 11:37:36 crc kubenswrapper[4867]: I0126 11:37:36.618953 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2f4c7973-1227-4188-8be0-766b1fdcd108-run-httpd\") pod \"ceilometer-0\" (UID: 
\"2f4c7973-1227-4188-8be0-766b1fdcd108\") " pod="openstack/ceilometer-0" Jan 26 11:37:36 crc kubenswrapper[4867]: I0126 11:37:36.619695 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-97j7x\" (UniqueName: \"kubernetes.io/projected/2f4c7973-1227-4188-8be0-766b1fdcd108-kube-api-access-97j7x\") pod \"ceilometer-0\" (UID: \"2f4c7973-1227-4188-8be0-766b1fdcd108\") " pod="openstack/ceilometer-0" Jan 26 11:37:36 crc kubenswrapper[4867]: I0126 11:37:36.619731 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g7j9t\" (UniqueName: \"kubernetes.io/projected/10fb1d1b-a85c-4eb8-a5ae-04d49b5ef7af-kube-api-access-g7j9t\") pod \"nova-cell0-bab4-account-create-update-zsjwg\" (UID: \"10fb1d1b-a85c-4eb8-a5ae-04d49b5ef7af\") " pod="openstack/nova-cell0-bab4-account-create-update-zsjwg" Jan 26 11:37:36 crc kubenswrapper[4867]: I0126 11:37:36.619770 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2f4c7973-1227-4188-8be0-766b1fdcd108-run-httpd\") pod \"ceilometer-0\" (UID: \"2f4c7973-1227-4188-8be0-766b1fdcd108\") " pod="openstack/ceilometer-0" Jan 26 11:37:36 crc kubenswrapper[4867]: I0126 11:37:36.620252 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2f4c7973-1227-4188-8be0-766b1fdcd108-config-data\") pod \"ceilometer-0\" (UID: \"2f4c7973-1227-4188-8be0-766b1fdcd108\") " pod="openstack/ceilometer-0" Jan 26 11:37:36 crc kubenswrapper[4867]: I0126 11:37:36.620298 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2f4c7973-1227-4188-8be0-766b1fdcd108-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"2f4c7973-1227-4188-8be0-766b1fdcd108\") " pod="openstack/ceilometer-0" Jan 26 11:37:36 crc kubenswrapper[4867]: I0126 
11:37:36.620472 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/2f4c7973-1227-4188-8be0-766b1fdcd108-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"2f4c7973-1227-4188-8be0-766b1fdcd108\") " pod="openstack/ceilometer-0" Jan 26 11:37:36 crc kubenswrapper[4867]: I0126 11:37:36.630207 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2f4c7973-1227-4188-8be0-766b1fdcd108-scripts\") pod \"ceilometer-0\" (UID: \"2f4c7973-1227-4188-8be0-766b1fdcd108\") " pod="openstack/ceilometer-0" Jan 26 11:37:36 crc kubenswrapper[4867]: I0126 11:37:36.631545 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2f4c7973-1227-4188-8be0-766b1fdcd108-config-data\") pod \"ceilometer-0\" (UID: \"2f4c7973-1227-4188-8be0-766b1fdcd108\") " pod="openstack/ceilometer-0" Jan 26 11:37:36 crc kubenswrapper[4867]: I0126 11:37:36.632314 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/2f4c7973-1227-4188-8be0-766b1fdcd108-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"2f4c7973-1227-4188-8be0-766b1fdcd108\") " pod="openstack/ceilometer-0" Jan 26 11:37:36 crc kubenswrapper[4867]: I0126 11:37:36.633943 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2f4c7973-1227-4188-8be0-766b1fdcd108-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"2f4c7973-1227-4188-8be0-766b1fdcd108\") " pod="openstack/ceilometer-0" Jan 26 11:37:36 crc kubenswrapper[4867]: I0126 11:37:36.651125 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-97j7x\" (UniqueName: \"kubernetes.io/projected/2f4c7973-1227-4188-8be0-766b1fdcd108-kube-api-access-97j7x\") pod \"ceilometer-0\" (UID: 
\"2f4c7973-1227-4188-8be0-766b1fdcd108\") " pod="openstack/ceilometer-0" Jan 26 11:37:36 crc kubenswrapper[4867]: I0126 11:37:36.651382 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 26 11:37:36 crc kubenswrapper[4867]: I0126 11:37:36.693474 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8434703b-0a5f-49f0-8877-2048d276f8ff" path="/var/lib/kubelet/pods/8434703b-0a5f-49f0-8877-2048d276f8ff/volumes" Jan 26 11:37:36 crc kubenswrapper[4867]: I0126 11:37:36.694103 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b2643e95-59cb-42a2-982e-96a7d732e5e4" path="/var/lib/kubelet/pods/b2643e95-59cb-42a2-982e-96a7d732e5e4/volumes" Jan 26 11:37:36 crc kubenswrapper[4867]: I0126 11:37:36.694775 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="edc2a642-41e4-4162-aa08-1cecd958b32c" path="/var/lib/kubelet/pods/edc2a642-41e4-4162-aa08-1cecd958b32c/volumes" Jan 26 11:37:36 crc kubenswrapper[4867]: I0126 11:37:36.696398 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-bab4-account-create-update-zsjwg"] Jan 26 11:37:36 crc kubenswrapper[4867]: I0126 11:37:36.704962 4867 scope.go:117] "RemoveContainer" containerID="3176feeb6409c9c175520fd5f96b008015c911ba0fb09f7821c6dc2f0fc7ca48" Jan 26 11:37:36 crc kubenswrapper[4867]: I0126 11:37:36.713036 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 26 11:37:36 crc kubenswrapper[4867]: I0126 11:37:36.725157 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/10fb1d1b-a85c-4eb8-a5ae-04d49b5ef7af-operator-scripts\") pod \"nova-cell0-bab4-account-create-update-zsjwg\" (UID: \"10fb1d1b-a85c-4eb8-a5ae-04d49b5ef7af\") " pod="openstack/nova-cell0-bab4-account-create-update-zsjwg" Jan 26 11:37:36 crc kubenswrapper[4867]: I0126 
11:37:36.725230 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g7j9t\" (UniqueName: \"kubernetes.io/projected/10fb1d1b-a85c-4eb8-a5ae-04d49b5ef7af-kube-api-access-g7j9t\") pod \"nova-cell0-bab4-account-create-update-zsjwg\" (UID: \"10fb1d1b-a85c-4eb8-a5ae-04d49b5ef7af\") " pod="openstack/nova-cell0-bab4-account-create-update-zsjwg" Jan 26 11:37:36 crc kubenswrapper[4867]: I0126 11:37:36.726474 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/10fb1d1b-a85c-4eb8-a5ae-04d49b5ef7af-operator-scripts\") pod \"nova-cell0-bab4-account-create-update-zsjwg\" (UID: \"10fb1d1b-a85c-4eb8-a5ae-04d49b5ef7af\") " pod="openstack/nova-cell0-bab4-account-create-update-zsjwg" Jan 26 11:37:36 crc kubenswrapper[4867]: I0126 11:37:36.733164 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 26 11:37:36 crc kubenswrapper[4867]: I0126 11:37:36.747964 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Jan 26 11:37:36 crc kubenswrapper[4867]: I0126 11:37:36.752383 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 26 11:37:36 crc kubenswrapper[4867]: I0126 11:37:36.758808 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Jan 26 11:37:36 crc kubenswrapper[4867]: I0126 11:37:36.758951 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 26 11:37:36 crc kubenswrapper[4867]: I0126 11:37:36.759044 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-wqrz8" Jan 26 11:37:36 crc kubenswrapper[4867]: I0126 11:37:36.759203 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-scripts" Jan 26 11:37:36 crc kubenswrapper[4867]: I0126 11:37:36.759420 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Jan 26 11:37:36 crc kubenswrapper[4867]: I0126 11:37:36.761606 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g7j9t\" (UniqueName: \"kubernetes.io/projected/10fb1d1b-a85c-4eb8-a5ae-04d49b5ef7af-kube-api-access-g7j9t\") pod \"nova-cell0-bab4-account-create-update-zsjwg\" (UID: \"10fb1d1b-a85c-4eb8-a5ae-04d49b5ef7af\") " pod="openstack/nova-cell0-bab4-account-create-update-zsjwg" Jan 26 11:37:36 crc kubenswrapper[4867]: I0126 11:37:36.771900 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 26 11:37:36 crc kubenswrapper[4867]: I0126 11:37:36.795312 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 26 11:37:36 crc kubenswrapper[4867]: I0126 11:37:36.830384 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 26 11:37:36 crc kubenswrapper[4867]: I0126 11:37:36.834108 4867 scope.go:117] "RemoveContainer" containerID="1ae14c522c74fcc84e09c808640dccc7ff8db79b5fdfc21ea954926cf317bc83" Jan 26 11:37:36 crc kubenswrapper[4867]: I0126 11:37:36.862965 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 26 11:37:36 crc kubenswrapper[4867]: I0126 11:37:36.868333 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 26 11:37:36 crc kubenswrapper[4867]: I0126 11:37:36.870925 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Jan 26 11:37:36 crc kubenswrapper[4867]: I0126 11:37:36.873359 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Jan 26 11:37:36 crc kubenswrapper[4867]: I0126 11:37:36.878419 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-bab4-account-create-update-zsjwg" Jan 26 11:37:36 crc kubenswrapper[4867]: I0126 11:37:36.882685 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 26 11:37:36 crc kubenswrapper[4867]: I0126 11:37:36.897432 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-b41e-account-create-update-jbll2"] Jan 26 11:37:36 crc kubenswrapper[4867]: I0126 11:37:36.899202 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-b41e-account-create-update-jbll2" Jan 26 11:37:36 crc kubenswrapper[4867]: I0126 11:37:36.903675 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-db-secret" Jan 26 11:37:36 crc kubenswrapper[4867]: I0126 11:37:36.931192 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-b41e-account-create-update-jbll2"] Jan 26 11:37:36 crc kubenswrapper[4867]: I0126 11:37:36.934000 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/6fc7f66f-7989-42ac-a3c8-cd88b25f9c53-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"6fc7f66f-7989-42ac-a3c8-cd88b25f9c53\") " pod="openstack/glance-default-external-api-0" Jan 26 11:37:36 crc kubenswrapper[4867]: I0126 11:37:36.934155 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zj7ft\" (UniqueName: \"kubernetes.io/projected/6fc7f66f-7989-42ac-a3c8-cd88b25f9c53-kube-api-access-zj7ft\") pod \"glance-default-external-api-0\" (UID: \"6fc7f66f-7989-42ac-a3c8-cd88b25f9c53\") " pod="openstack/glance-default-external-api-0" Jan 26 11:37:36 crc kubenswrapper[4867]: I0126 11:37:36.934245 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/6fc7f66f-7989-42ac-a3c8-cd88b25f9c53-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"6fc7f66f-7989-42ac-a3c8-cd88b25f9c53\") " pod="openstack/glance-default-external-api-0" Jan 26 11:37:36 crc kubenswrapper[4867]: I0126 11:37:36.934309 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"glance-default-external-api-0\" (UID: 
\"6fc7f66f-7989-42ac-a3c8-cd88b25f9c53\") " pod="openstack/glance-default-external-api-0" Jan 26 11:37:36 crc kubenswrapper[4867]: I0126 11:37:36.934344 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6fc7f66f-7989-42ac-a3c8-cd88b25f9c53-logs\") pod \"glance-default-external-api-0\" (UID: \"6fc7f66f-7989-42ac-a3c8-cd88b25f9c53\") " pod="openstack/glance-default-external-api-0" Jan 26 11:37:36 crc kubenswrapper[4867]: I0126 11:37:36.934370 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6fc7f66f-7989-42ac-a3c8-cd88b25f9c53-scripts\") pod \"glance-default-external-api-0\" (UID: \"6fc7f66f-7989-42ac-a3c8-cd88b25f9c53\") " pod="openstack/glance-default-external-api-0" Jan 26 11:37:36 crc kubenswrapper[4867]: I0126 11:37:36.934407 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6fc7f66f-7989-42ac-a3c8-cd88b25f9c53-config-data\") pod \"glance-default-external-api-0\" (UID: \"6fc7f66f-7989-42ac-a3c8-cd88b25f9c53\") " pod="openstack/glance-default-external-api-0" Jan 26 11:37:36 crc kubenswrapper[4867]: I0126 11:37:36.934423 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6fc7f66f-7989-42ac-a3c8-cd88b25f9c53-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"6fc7f66f-7989-42ac-a3c8-cd88b25f9c53\") " pod="openstack/glance-default-external-api-0" Jan 26 11:37:36 crc kubenswrapper[4867]: I0126 11:37:36.941598 4867 scope.go:117] "RemoveContainer" containerID="a3dfe0d4d3358e53f49d7ad7fd7ccc7dfe0122a9ec36359a54e66b3f1225b275" Jan 26 11:37:36 crc kubenswrapper[4867]: I0126 11:37:36.993532 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack/nova-api-db-create-zvxt9"] Jan 26 11:37:37 crc kubenswrapper[4867]: I0126 11:37:37.025105 4867 scope.go:117] "RemoveContainer" containerID="55dc239744276d01319f51e765715f8fc0f8e79ceb6320f57be6b9eed29028cd" Jan 26 11:37:37 crc kubenswrapper[4867]: I0126 11:37:37.035576 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"glance-default-external-api-0\" (UID: \"6fc7f66f-7989-42ac-a3c8-cd88b25f9c53\") " pod="openstack/glance-default-external-api-0" Jan 26 11:37:37 crc kubenswrapper[4867]: I0126 11:37:37.035645 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/58cc3b2f-c49e-4c16-9a26-342c8b2c8878-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"58cc3b2f-c49e-4c16-9a26-342c8b2c8878\") " pod="openstack/glance-default-internal-api-0" Jan 26 11:37:37 crc kubenswrapper[4867]: I0126 11:37:37.035678 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6fc7f66f-7989-42ac-a3c8-cd88b25f9c53-logs\") pod \"glance-default-external-api-0\" (UID: \"6fc7f66f-7989-42ac-a3c8-cd88b25f9c53\") " pod="openstack/glance-default-external-api-0" Jan 26 11:37:37 crc kubenswrapper[4867]: I0126 11:37:37.035700 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/db3da9ad-c4e2-4dc6-aec5-fefa3d9efa8a-operator-scripts\") pod \"nova-cell1-b41e-account-create-update-jbll2\" (UID: \"db3da9ad-c4e2-4dc6-aec5-fefa3d9efa8a\") " pod="openstack/nova-cell1-b41e-account-create-update-jbll2" Jan 26 11:37:37 crc kubenswrapper[4867]: I0126 11:37:37.035725 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/6fc7f66f-7989-42ac-a3c8-cd88b25f9c53-scripts\") pod \"glance-default-external-api-0\" (UID: \"6fc7f66f-7989-42ac-a3c8-cd88b25f9c53\") " pod="openstack/glance-default-external-api-0" Jan 26 11:37:37 crc kubenswrapper[4867]: I0126 11:37:37.035752 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"glance-default-internal-api-0\" (UID: \"58cc3b2f-c49e-4c16-9a26-342c8b2c8878\") " pod="openstack/glance-default-internal-api-0" Jan 26 11:37:37 crc kubenswrapper[4867]: I0126 11:37:37.035778 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6fc7f66f-7989-42ac-a3c8-cd88b25f9c53-config-data\") pod \"glance-default-external-api-0\" (UID: \"6fc7f66f-7989-42ac-a3c8-cd88b25f9c53\") " pod="openstack/glance-default-external-api-0" Jan 26 11:37:37 crc kubenswrapper[4867]: I0126 11:37:37.035792 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6fc7f66f-7989-42ac-a3c8-cd88b25f9c53-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"6fc7f66f-7989-42ac-a3c8-cd88b25f9c53\") " pod="openstack/glance-default-external-api-0" Jan 26 11:37:37 crc kubenswrapper[4867]: I0126 11:37:37.035825 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/6fc7f66f-7989-42ac-a3c8-cd88b25f9c53-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"6fc7f66f-7989-42ac-a3c8-cd88b25f9c53\") " pod="openstack/glance-default-external-api-0" Jan 26 11:37:37 crc kubenswrapper[4867]: I0126 11:37:37.035861 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: 
\"kubernetes.io/empty-dir/58cc3b2f-c49e-4c16-9a26-342c8b2c8878-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"58cc3b2f-c49e-4c16-9a26-342c8b2c8878\") " pod="openstack/glance-default-internal-api-0" Jan 26 11:37:37 crc kubenswrapper[4867]: I0126 11:37:37.035886 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/58cc3b2f-c49e-4c16-9a26-342c8b2c8878-logs\") pod \"glance-default-internal-api-0\" (UID: \"58cc3b2f-c49e-4c16-9a26-342c8b2c8878\") " pod="openstack/glance-default-internal-api-0" Jan 26 11:37:37 crc kubenswrapper[4867]: I0126 11:37:37.035936 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n4vmq\" (UniqueName: \"kubernetes.io/projected/db3da9ad-c4e2-4dc6-aec5-fefa3d9efa8a-kube-api-access-n4vmq\") pod \"nova-cell1-b41e-account-create-update-jbll2\" (UID: \"db3da9ad-c4e2-4dc6-aec5-fefa3d9efa8a\") " pod="openstack/nova-cell1-b41e-account-create-update-jbll2" Jan 26 11:37:37 crc kubenswrapper[4867]: I0126 11:37:37.035968 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/58cc3b2f-c49e-4c16-9a26-342c8b2c8878-config-data\") pod \"glance-default-internal-api-0\" (UID: \"58cc3b2f-c49e-4c16-9a26-342c8b2c8878\") " pod="openstack/glance-default-internal-api-0" Jan 26 11:37:37 crc kubenswrapper[4867]: I0126 11:37:37.035982 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/58cc3b2f-c49e-4c16-9a26-342c8b2c8878-scripts\") pod \"glance-default-internal-api-0\" (UID: \"58cc3b2f-c49e-4c16-9a26-342c8b2c8878\") " pod="openstack/glance-default-internal-api-0" Jan 26 11:37:37 crc kubenswrapper[4867]: I0126 11:37:37.036000 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-zj7ft\" (UniqueName: \"kubernetes.io/projected/6fc7f66f-7989-42ac-a3c8-cd88b25f9c53-kube-api-access-zj7ft\") pod \"glance-default-external-api-0\" (UID: \"6fc7f66f-7989-42ac-a3c8-cd88b25f9c53\") " pod="openstack/glance-default-external-api-0" Jan 26 11:37:37 crc kubenswrapper[4867]: I0126 11:37:37.036042 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/6fc7f66f-7989-42ac-a3c8-cd88b25f9c53-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"6fc7f66f-7989-42ac-a3c8-cd88b25f9c53\") " pod="openstack/glance-default-external-api-0" Jan 26 11:37:37 crc kubenswrapper[4867]: I0126 11:37:37.036063 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/58cc3b2f-c49e-4c16-9a26-342c8b2c8878-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"58cc3b2f-c49e-4c16-9a26-342c8b2c8878\") " pod="openstack/glance-default-internal-api-0" Jan 26 11:37:37 crc kubenswrapper[4867]: I0126 11:37:37.036086 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4r5jg\" (UniqueName: \"kubernetes.io/projected/58cc3b2f-c49e-4c16-9a26-342c8b2c8878-kube-api-access-4r5jg\") pod \"glance-default-internal-api-0\" (UID: \"58cc3b2f-c49e-4c16-9a26-342c8b2c8878\") " pod="openstack/glance-default-internal-api-0" Jan 26 11:37:37 crc kubenswrapper[4867]: I0126 11:37:37.036590 4867 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"glance-default-external-api-0\" (UID: \"6fc7f66f-7989-42ac-a3c8-cd88b25f9c53\") device mount path \"/mnt/openstack/pv06\"" pod="openstack/glance-default-external-api-0" Jan 26 11:37:37 crc kubenswrapper[4867]: I0126 11:37:37.036665 4867 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/6fc7f66f-7989-42ac-a3c8-cd88b25f9c53-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"6fc7f66f-7989-42ac-a3c8-cd88b25f9c53\") " pod="openstack/glance-default-external-api-0" Jan 26 11:37:37 crc kubenswrapper[4867]: I0126 11:37:37.037374 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6fc7f66f-7989-42ac-a3c8-cd88b25f9c53-logs\") pod \"glance-default-external-api-0\" (UID: \"6fc7f66f-7989-42ac-a3c8-cd88b25f9c53\") " pod="openstack/glance-default-external-api-0" Jan 26 11:37:37 crc kubenswrapper[4867]: I0126 11:37:37.050600 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6fc7f66f-7989-42ac-a3c8-cd88b25f9c53-scripts\") pod \"glance-default-external-api-0\" (UID: \"6fc7f66f-7989-42ac-a3c8-cd88b25f9c53\") " pod="openstack/glance-default-external-api-0" Jan 26 11:37:37 crc kubenswrapper[4867]: I0126 11:37:37.051901 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/6fc7f66f-7989-42ac-a3c8-cd88b25f9c53-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"6fc7f66f-7989-42ac-a3c8-cd88b25f9c53\") " pod="openstack/glance-default-external-api-0" Jan 26 11:37:37 crc kubenswrapper[4867]: I0126 11:37:37.054437 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6fc7f66f-7989-42ac-a3c8-cd88b25f9c53-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"6fc7f66f-7989-42ac-a3c8-cd88b25f9c53\") " pod="openstack/glance-default-external-api-0" Jan 26 11:37:37 crc kubenswrapper[4867]: I0126 11:37:37.061067 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/6fc7f66f-7989-42ac-a3c8-cd88b25f9c53-config-data\") pod \"glance-default-external-api-0\" (UID: \"6fc7f66f-7989-42ac-a3c8-cd88b25f9c53\") " pod="openstack/glance-default-external-api-0" Jan 26 11:37:37 crc kubenswrapper[4867]: I0126 11:37:37.066126 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zj7ft\" (UniqueName: \"kubernetes.io/projected/6fc7f66f-7989-42ac-a3c8-cd88b25f9c53-kube-api-access-zj7ft\") pod \"glance-default-external-api-0\" (UID: \"6fc7f66f-7989-42ac-a3c8-cd88b25f9c53\") " pod="openstack/glance-default-external-api-0" Jan 26 11:37:37 crc kubenswrapper[4867]: I0126 11:37:37.087440 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"glance-default-external-api-0\" (UID: \"6fc7f66f-7989-42ac-a3c8-cd88b25f9c53\") " pod="openstack/glance-default-external-api-0" Jan 26 11:37:37 crc kubenswrapper[4867]: I0126 11:37:37.137328 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/db3da9ad-c4e2-4dc6-aec5-fefa3d9efa8a-operator-scripts\") pod \"nova-cell1-b41e-account-create-update-jbll2\" (UID: \"db3da9ad-c4e2-4dc6-aec5-fefa3d9efa8a\") " pod="openstack/nova-cell1-b41e-account-create-update-jbll2" Jan 26 11:37:37 crc kubenswrapper[4867]: I0126 11:37:37.137395 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"glance-default-internal-api-0\" (UID: \"58cc3b2f-c49e-4c16-9a26-342c8b2c8878\") " pod="openstack/glance-default-internal-api-0" Jan 26 11:37:37 crc kubenswrapper[4867]: I0126 11:37:37.137478 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: 
\"kubernetes.io/empty-dir/58cc3b2f-c49e-4c16-9a26-342c8b2c8878-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"58cc3b2f-c49e-4c16-9a26-342c8b2c8878\") " pod="openstack/glance-default-internal-api-0" Jan 26 11:37:37 crc kubenswrapper[4867]: I0126 11:37:37.137505 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/58cc3b2f-c49e-4c16-9a26-342c8b2c8878-logs\") pod \"glance-default-internal-api-0\" (UID: \"58cc3b2f-c49e-4c16-9a26-342c8b2c8878\") " pod="openstack/glance-default-internal-api-0" Jan 26 11:37:37 crc kubenswrapper[4867]: I0126 11:37:37.137552 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n4vmq\" (UniqueName: \"kubernetes.io/projected/db3da9ad-c4e2-4dc6-aec5-fefa3d9efa8a-kube-api-access-n4vmq\") pod \"nova-cell1-b41e-account-create-update-jbll2\" (UID: \"db3da9ad-c4e2-4dc6-aec5-fefa3d9efa8a\") " pod="openstack/nova-cell1-b41e-account-create-update-jbll2" Jan 26 11:37:37 crc kubenswrapper[4867]: I0126 11:37:37.137583 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/58cc3b2f-c49e-4c16-9a26-342c8b2c8878-config-data\") pod \"glance-default-internal-api-0\" (UID: \"58cc3b2f-c49e-4c16-9a26-342c8b2c8878\") " pod="openstack/glance-default-internal-api-0" Jan 26 11:37:37 crc kubenswrapper[4867]: I0126 11:37:37.137634 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/58cc3b2f-c49e-4c16-9a26-342c8b2c8878-scripts\") pod \"glance-default-internal-api-0\" (UID: \"58cc3b2f-c49e-4c16-9a26-342c8b2c8878\") " pod="openstack/glance-default-internal-api-0" Jan 26 11:37:37 crc kubenswrapper[4867]: I0126 11:37:37.137698 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/58cc3b2f-c49e-4c16-9a26-342c8b2c8878-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"58cc3b2f-c49e-4c16-9a26-342c8b2c8878\") " pod="openstack/glance-default-internal-api-0" Jan 26 11:37:37 crc kubenswrapper[4867]: I0126 11:37:37.137734 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4r5jg\" (UniqueName: \"kubernetes.io/projected/58cc3b2f-c49e-4c16-9a26-342c8b2c8878-kube-api-access-4r5jg\") pod \"glance-default-internal-api-0\" (UID: \"58cc3b2f-c49e-4c16-9a26-342c8b2c8878\") " pod="openstack/glance-default-internal-api-0" Jan 26 11:37:37 crc kubenswrapper[4867]: I0126 11:37:37.137776 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/58cc3b2f-c49e-4c16-9a26-342c8b2c8878-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"58cc3b2f-c49e-4c16-9a26-342c8b2c8878\") " pod="openstack/glance-default-internal-api-0" Jan 26 11:37:37 crc kubenswrapper[4867]: I0126 11:37:37.139386 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/db3da9ad-c4e2-4dc6-aec5-fefa3d9efa8a-operator-scripts\") pod \"nova-cell1-b41e-account-create-update-jbll2\" (UID: \"db3da9ad-c4e2-4dc6-aec5-fefa3d9efa8a\") " pod="openstack/nova-cell1-b41e-account-create-update-jbll2" Jan 26 11:37:37 crc kubenswrapper[4867]: I0126 11:37:37.139664 4867 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"glance-default-internal-api-0\" (UID: \"58cc3b2f-c49e-4c16-9a26-342c8b2c8878\") device mount path \"/mnt/openstack/pv05\"" pod="openstack/glance-default-internal-api-0" Jan 26 11:37:37 crc kubenswrapper[4867]: I0126 11:37:37.151692 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/58cc3b2f-c49e-4c16-9a26-342c8b2c8878-logs\") pod \"glance-default-internal-api-0\" (UID: \"58cc3b2f-c49e-4c16-9a26-342c8b2c8878\") " pod="openstack/glance-default-internal-api-0" Jan 26 11:37:37 crc kubenswrapper[4867]: I0126 11:37:37.152125 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/58cc3b2f-c49e-4c16-9a26-342c8b2c8878-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"58cc3b2f-c49e-4c16-9a26-342c8b2c8878\") " pod="openstack/glance-default-internal-api-0" Jan 26 11:37:37 crc kubenswrapper[4867]: I0126 11:37:37.156165 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/58cc3b2f-c49e-4c16-9a26-342c8b2c8878-scripts\") pod \"glance-default-internal-api-0\" (UID: \"58cc3b2f-c49e-4c16-9a26-342c8b2c8878\") " pod="openstack/glance-default-internal-api-0" Jan 26 11:37:37 crc kubenswrapper[4867]: I0126 11:37:37.157637 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/58cc3b2f-c49e-4c16-9a26-342c8b2c8878-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"58cc3b2f-c49e-4c16-9a26-342c8b2c8878\") " pod="openstack/glance-default-internal-api-0" Jan 26 11:37:37 crc kubenswrapper[4867]: I0126 11:37:37.160584 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n4vmq\" (UniqueName: \"kubernetes.io/projected/db3da9ad-c4e2-4dc6-aec5-fefa3d9efa8a-kube-api-access-n4vmq\") pod \"nova-cell1-b41e-account-create-update-jbll2\" (UID: \"db3da9ad-c4e2-4dc6-aec5-fefa3d9efa8a\") " pod="openstack/nova-cell1-b41e-account-create-update-jbll2" Jan 26 11:37:37 crc kubenswrapper[4867]: I0126 11:37:37.164956 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/58cc3b2f-c49e-4c16-9a26-342c8b2c8878-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"58cc3b2f-c49e-4c16-9a26-342c8b2c8878\") " pod="openstack/glance-default-internal-api-0" Jan 26 11:37:37 crc kubenswrapper[4867]: I0126 11:37:37.165433 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/58cc3b2f-c49e-4c16-9a26-342c8b2c8878-config-data\") pod \"glance-default-internal-api-0\" (UID: \"58cc3b2f-c49e-4c16-9a26-342c8b2c8878\") " pod="openstack/glance-default-internal-api-0" Jan 26 11:37:37 crc kubenswrapper[4867]: I0126 11:37:37.167119 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-5f459cfdcb-t5qhs" event={"ID":"f114731c-0ed9-4d58-90f0-b670a856adf0","Type":"ContainerStarted","Data":"f4aedb58eadbb4243bbf9c9d935b7dc696064ee3049457309e0f4d604a3bd7e8"} Jan 26 11:37:37 crc kubenswrapper[4867]: I0126 11:37:37.169871 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4r5jg\" (UniqueName: \"kubernetes.io/projected/58cc3b2f-c49e-4c16-9a26-342c8b2c8878-kube-api-access-4r5jg\") pod \"glance-default-internal-api-0\" (UID: \"58cc3b2f-c49e-4c16-9a26-342c8b2c8878\") " pod="openstack/glance-default-internal-api-0" Jan 26 11:37:37 crc kubenswrapper[4867]: I0126 11:37:37.212432 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-7bc467f664-6zfb4" Jan 26 11:37:37 crc kubenswrapper[4867]: I0126 11:37:37.214384 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-6dc6f6fb68-dx2nc" event={"ID":"01f2f326-18ee-4ee2-823b-09ccf4cfefc1","Type":"ContainerStarted","Data":"bb6253075fc5609488246c651539bc8d261a13127a66230d6cb26775d78486a4"} Jan 26 11:37:37 crc kubenswrapper[4867]: I0126 11:37:37.217951 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-rhnn5"] Jan 26 11:37:37 crc kubenswrapper[4867]: I0126 
11:37:37.218209 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"glance-default-internal-api-0\" (UID: \"58cc3b2f-c49e-4c16-9a26-342c8b2c8878\") " pod="openstack/glance-default-internal-api-0" Jan 26 11:37:37 crc kubenswrapper[4867]: I0126 11:37:37.222459 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-zvxt9" event={"ID":"0bca6d10-3712-4078-885f-ff14590bbbe8","Type":"ContainerStarted","Data":"69c6bb612cf6a174199c24e15f5036dc9dffd13d7900356d6946f599ec48a8d7"} Jan 26 11:37:37 crc kubenswrapper[4867]: I0126 11:37:37.223628 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 26 11:37:37 crc kubenswrapper[4867]: I0126 11:37:37.231650 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-5668f68b6c-7674j" event={"ID":"39829bfc-df9a-4123-a069-f99e3032615d","Type":"ContainerStarted","Data":"ca20b41004dad80c0a9b6568f7d669c3d0bfa467ccf18ad9fc2bcf0f11eacecf"} Jan 26 11:37:37 crc kubenswrapper[4867]: I0126 11:37:37.233639 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-5668f68b6c-7674j" Jan 26 11:37:37 crc kubenswrapper[4867]: I0126 11:37:37.233667 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-5668f68b6c-7674j" Jan 26 11:37:37 crc kubenswrapper[4867]: I0126 11:37:37.263191 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-b41e-account-create-update-jbll2" Jan 26 11:37:37 crc kubenswrapper[4867]: I0126 11:37:37.308973 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-proxy-5668f68b6c-7674j" podStartSLOduration=11.308950939 podStartE2EDuration="11.308950939s" podCreationTimestamp="2026-01-26 11:37:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 11:37:37.267951804 +0000 UTC m=+1206.966526714" watchObservedRunningTime="2026-01-26 11:37:37.308950939 +0000 UTC m=+1207.007525849" Jan 26 11:37:37 crc kubenswrapper[4867]: I0126 11:37:37.382305 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 26 11:37:37 crc kubenswrapper[4867]: I0126 11:37:37.398466 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ironic-neutron-agent-795fb7c76b-9ndwh" Jan 26 11:37:37 crc kubenswrapper[4867]: I0126 11:37:37.502028 4867 scope.go:117] "RemoveContainer" containerID="9d781ab7589d1c0f4b7cf132d24fa1ff5ff23389d94774744cbe760ea7b50e1a" Jan 26 11:37:37 crc kubenswrapper[4867]: I0126 11:37:37.543854 4867 scope.go:117] "RemoveContainer" containerID="55dc239744276d01319f51e765715f8fc0f8e79ceb6320f57be6b9eed29028cd" Jan 26 11:37:37 crc kubenswrapper[4867]: E0126 11:37:37.547410 4867 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"55dc239744276d01319f51e765715f8fc0f8e79ceb6320f57be6b9eed29028cd\": container with ID starting with 55dc239744276d01319f51e765715f8fc0f8e79ceb6320f57be6b9eed29028cd not found: ID does not exist" containerID="55dc239744276d01319f51e765715f8fc0f8e79ceb6320f57be6b9eed29028cd" Jan 26 11:37:37 crc kubenswrapper[4867]: I0126 11:37:37.547479 4867 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"55dc239744276d01319f51e765715f8fc0f8e79ceb6320f57be6b9eed29028cd"} err="failed to get container status \"55dc239744276d01319f51e765715f8fc0f8e79ceb6320f57be6b9eed29028cd\": rpc error: code = NotFound desc = could not find container \"55dc239744276d01319f51e765715f8fc0f8e79ceb6320f57be6b9eed29028cd\": container with ID starting with 55dc239744276d01319f51e765715f8fc0f8e79ceb6320f57be6b9eed29028cd not found: ID does not exist" Jan 26 11:37:37 crc kubenswrapper[4867]: I0126 11:37:37.547510 4867 scope.go:117] "RemoveContainer" containerID="9d781ab7589d1c0f4b7cf132d24fa1ff5ff23389d94774744cbe760ea7b50e1a" Jan 26 11:37:37 crc kubenswrapper[4867]: E0126 11:37:37.551373 4867 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9d781ab7589d1c0f4b7cf132d24fa1ff5ff23389d94774744cbe760ea7b50e1a\": container with ID starting with 9d781ab7589d1c0f4b7cf132d24fa1ff5ff23389d94774744cbe760ea7b50e1a not found: ID does not exist" containerID="9d781ab7589d1c0f4b7cf132d24fa1ff5ff23389d94774744cbe760ea7b50e1a" Jan 26 11:37:37 crc kubenswrapper[4867]: I0126 11:37:37.551421 4867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9d781ab7589d1c0f4b7cf132d24fa1ff5ff23389d94774744cbe760ea7b50e1a"} err="failed to get container status \"9d781ab7589d1c0f4b7cf132d24fa1ff5ff23389d94774744cbe760ea7b50e1a\": rpc error: code = NotFound desc = could not find container \"9d781ab7589d1c0f4b7cf132d24fa1ff5ff23389d94774744cbe760ea7b50e1a\": container with ID starting with 9d781ab7589d1c0f4b7cf132d24fa1ff5ff23389d94774744cbe760ea7b50e1a not found: ID does not exist" Jan 26 11:37:37 crc kubenswrapper[4867]: I0126 11:37:37.638357 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-jxjlp"] Jan 26 11:37:37 crc kubenswrapper[4867]: I0126 11:37:37.666048 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack/nova-api-d424-account-create-update-ldcwb"] Jan 26 11:37:37 crc kubenswrapper[4867]: I0126 11:37:37.683966 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 26 11:37:37 crc kubenswrapper[4867]: I0126 11:37:37.689575 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 26 11:37:37 crc kubenswrapper[4867]: I0126 11:37:37.906432 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-bab4-account-create-update-zsjwg"] Jan 26 11:37:38 crc kubenswrapper[4867]: I0126 11:37:38.265977 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-d424-account-create-update-ldcwb" event={"ID":"91e79247-9d54-4108-a975-17c7603c3f96","Type":"ContainerStarted","Data":"73f67bb29293019f164a5e0742ef72a5fb719020f7ee7aa41799cff43d898459"} Jan 26 11:37:38 crc kubenswrapper[4867]: I0126 11:37:38.269030 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-jxjlp" event={"ID":"7f4d3e01-1c2e-45ae-952f-c05b658b2aa4","Type":"ContainerStarted","Data":"219240fb93929a7976abdd8f6c736937fb93b445125dbf06f71cdd62b1cbe2b6"} Jan 26 11:37:38 crc kubenswrapper[4867]: I0126 11:37:38.271547 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-6dc6f6fb68-dx2nc" event={"ID":"01f2f326-18ee-4ee2-823b-09ccf4cfefc1","Type":"ContainerStarted","Data":"76137802ba0024d949385afabbbef9d670cc6c9ee46b21078f76e6b2346e25fa"} Jan 26 11:37:38 crc kubenswrapper[4867]: I0126 11:37:38.271690 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ironic-6dc6f6fb68-dx2nc" Jan 26 11:37:38 crc kubenswrapper[4867]: I0126 11:37:38.273326 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-zvxt9" event={"ID":"0bca6d10-3712-4078-885f-ff14590bbbe8","Type":"ContainerStarted","Data":"4636a0d19e0684652c3d8874bfa02095a5fdd63116c50b3fe8653a7be858fb7c"} Jan 26 11:37:38 crc 
kubenswrapper[4867]: I0126 11:37:38.288607 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-bab4-account-create-update-zsjwg" event={"ID":"10fb1d1b-a85c-4eb8-a5ae-04d49b5ef7af","Type":"ContainerStarted","Data":"f2517c6d3bea2b57f7e79e44fd90b34b600a50c9930578c85fbad09e374894cb"} Jan 26 11:37:38 crc kubenswrapper[4867]: I0126 11:37:38.303763 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-rhnn5" event={"ID":"f576d352-22e9-427b-a2d1-81bff0a85eb1","Type":"ContainerStarted","Data":"f59958d640359c197c16cd359f33cde51bd0e99c2268d005515674e6db964355"} Jan 26 11:37:38 crc kubenswrapper[4867]: I0126 11:37:38.308334 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ironic-6dc6f6fb68-dx2nc" podStartSLOduration=6.274853923 podStartE2EDuration="21.308314488s" podCreationTimestamp="2026-01-26 11:37:17 +0000 UTC" firstStartedPulling="2026-01-26 11:37:19.130954716 +0000 UTC m=+1188.829529626" lastFinishedPulling="2026-01-26 11:37:34.164415281 +0000 UTC m=+1203.862990191" observedRunningTime="2026-01-26 11:37:38.29940558 +0000 UTC m=+1207.997980720" watchObservedRunningTime="2026-01-26 11:37:38.308314488 +0000 UTC m=+1208.006889388" Jan 26 11:37:38 crc kubenswrapper[4867]: I0126 11:37:38.316354 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"ed0b64b8-8e76-407e-8be8-c6b6cc5d4b75","Type":"ContainerStarted","Data":"7dd55a624c850265a899848669bbaecc6decf15a398039758550618fefceb643"} Jan 26 11:37:38 crc kubenswrapper[4867]: I0126 11:37:38.323963 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2f4c7973-1227-4188-8be0-766b1fdcd108","Type":"ContainerStarted","Data":"49d991eef2f1bfaa9f99d6aed12b341f83259bed4d70c34a436b9bc80bd19200"} Jan 26 11:37:38 crc kubenswrapper[4867]: I0126 11:37:38.336901 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack/nova-cell1-b41e-account-create-update-jbll2"] Jan 26 11:37:38 crc kubenswrapper[4867]: I0126 11:37:38.395592 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-5f459cfdcb-t5qhs" event={"ID":"f114731c-0ed9-4d58-90f0-b670a856adf0","Type":"ContainerStarted","Data":"710af774e7ebecd66c5714bd00e5797a28975219efcf94bf41fa48be05e667b3"} Jan 26 11:37:38 crc kubenswrapper[4867]: I0126 11:37:38.396637 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ironic-5f459cfdcb-t5qhs" Jan 26 11:37:38 crc kubenswrapper[4867]: I0126 11:37:38.458030 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ironic-5f459cfdcb-t5qhs" podStartSLOduration=7.457633948 podStartE2EDuration="18.458012176s" podCreationTimestamp="2026-01-26 11:37:20 +0000 UTC" firstStartedPulling="2026-01-26 11:37:23.323180824 +0000 UTC m=+1193.021755734" lastFinishedPulling="2026-01-26 11:37:34.323559052 +0000 UTC m=+1204.022133962" observedRunningTime="2026-01-26 11:37:38.42484305 +0000 UTC m=+1208.123417960" watchObservedRunningTime="2026-01-26 11:37:38.458012176 +0000 UTC m=+1208.156587086" Jan 26 11:37:38 crc kubenswrapper[4867]: I0126 11:37:38.464237 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 26 11:37:38 crc kubenswrapper[4867]: I0126 11:37:38.550630 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 26 11:37:38 crc kubenswrapper[4867]: I0126 11:37:38.575474 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="195fa02f-5887-4d8e-a103-2261e65a9c96" path="/var/lib/kubelet/pods/195fa02f-5887-4d8e-a103-2261e65a9c96/volumes" Jan 26 11:37:38 crc kubenswrapper[4867]: I0126 11:37:38.576481 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dfc017b6-886f-48d3-8f1e-cef59e587503" path="/var/lib/kubelet/pods/dfc017b6-886f-48d3-8f1e-cef59e587503/volumes" Jan 
26 11:37:39 crc kubenswrapper[4867]: I0126 11:37:39.417120 4867 generic.go:334] "Generic (PLEG): container finished" podID="0bca6d10-3712-4078-885f-ff14590bbbe8" containerID="4636a0d19e0684652c3d8874bfa02095a5fdd63116c50b3fe8653a7be858fb7c" exitCode=0 Jan 26 11:37:39 crc kubenswrapper[4867]: I0126 11:37:39.417384 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-zvxt9" event={"ID":"0bca6d10-3712-4078-885f-ff14590bbbe8","Type":"ContainerDied","Data":"4636a0d19e0684652c3d8874bfa02095a5fdd63116c50b3fe8653a7be858fb7c"} Jan 26 11:37:39 crc kubenswrapper[4867]: I0126 11:37:39.423922 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"6fc7f66f-7989-42ac-a3c8-cd88b25f9c53","Type":"ContainerStarted","Data":"05d656eafffc3da074c9b0aeb02a41c511edd3d2364fee3d204ed0d755cc5cdb"} Jan 26 11:37:39 crc kubenswrapper[4867]: I0126 11:37:39.454280 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-bab4-account-create-update-zsjwg" event={"ID":"10fb1d1b-a85c-4eb8-a5ae-04d49b5ef7af","Type":"ContainerStarted","Data":"edefe5fa9f77f38dad2a162f47c38dfa150661aec613e44f236eaed1f74fe7b0"} Jan 26 11:37:39 crc kubenswrapper[4867]: I0126 11:37:39.458397 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"ed0b64b8-8e76-407e-8be8-c6b6cc5d4b75","Type":"ContainerStarted","Data":"dad941fb95cc5cb8086df3501809c9189c698ed087f8ff49515c569e8dd971b6"} Jan 26 11:37:39 crc kubenswrapper[4867]: I0126 11:37:39.462624 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-d424-account-create-update-ldcwb" event={"ID":"91e79247-9d54-4108-a975-17c7603c3f96","Type":"ContainerStarted","Data":"224001dc6b53cc4709a74659f966d8e760d90e26822b7291b488ce839e843158"} Jan 26 11:37:39 crc kubenswrapper[4867]: I0126 11:37:39.475474 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-jxjlp" 
event={"ID":"7f4d3e01-1c2e-45ae-952f-c05b658b2aa4","Type":"ContainerStarted","Data":"d84850c6e99bb1cb07f9e9a3294fa187af413dbbbab5b9a89c1d48b0eee20fdc"} Jan 26 11:37:39 crc kubenswrapper[4867]: I0126 11:37:39.478448 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2f4c7973-1227-4188-8be0-766b1fdcd108","Type":"ContainerStarted","Data":"28b16594aebd74b931d1a3afbc9fe1493c2ecc892d0584d131a74abfb243380e"} Jan 26 11:37:39 crc kubenswrapper[4867]: I0126 11:37:39.482487 4867 generic.go:334] "Generic (PLEG): container finished" podID="f576d352-22e9-427b-a2d1-81bff0a85eb1" containerID="748bfbd4b8c19b4b96f9c28ec3129fea256335879fd3e35db00865f8c6cc4910" exitCode=0 Jan 26 11:37:39 crc kubenswrapper[4867]: I0126 11:37:39.482701 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-rhnn5" event={"ID":"f576d352-22e9-427b-a2d1-81bff0a85eb1","Type":"ContainerDied","Data":"748bfbd4b8c19b4b96f9c28ec3129fea256335879fd3e35db00865f8c6cc4910"} Jan 26 11:37:39 crc kubenswrapper[4867]: I0126 11:37:39.485481 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-bab4-account-create-update-zsjwg" podStartSLOduration=3.485453495 podStartE2EDuration="3.485453495s" podCreationTimestamp="2026-01-26 11:37:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 11:37:39.469243422 +0000 UTC m=+1209.167818332" watchObservedRunningTime="2026-01-26 11:37:39.485453495 +0000 UTC m=+1209.184028405" Jan 26 11:37:39 crc kubenswrapper[4867]: I0126 11:37:39.488884 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"58cc3b2f-c49e-4c16-9a26-342c8b2c8878","Type":"ContainerStarted","Data":"ca35d11a242ae6e4173cc7bc399be280c7962b4934e185bfd25cdca98f3fa0eb"} Jan 26 11:37:39 crc kubenswrapper[4867]: I0126 11:37:39.496555 4867 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-b41e-account-create-update-jbll2" event={"ID":"db3da9ad-c4e2-4dc6-aec5-fefa3d9efa8a","Type":"ContainerStarted","Data":"c4346b76358d61eef41e7eb32623539e21c4bd95540c30c0a5fba2dd48a383d1"} Jan 26 11:37:39 crc kubenswrapper[4867]: I0126 11:37:39.497295 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-b41e-account-create-update-jbll2" event={"ID":"db3da9ad-c4e2-4dc6-aec5-fefa3d9efa8a","Type":"ContainerStarted","Data":"c87a3678648db932c1a37a3a5958f066559868a1d4a01ba6c303c0ba2b8654d5"} Jan 26 11:37:39 crc kubenswrapper[4867]: I0126 11:37:39.509503 4867 generic.go:334] "Generic (PLEG): container finished" podID="01f2f326-18ee-4ee2-823b-09ccf4cfefc1" containerID="76137802ba0024d949385afabbbef9d670cc6c9ee46b21078f76e6b2346e25fa" exitCode=1 Jan 26 11:37:39 crc kubenswrapper[4867]: I0126 11:37:39.509753 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-6dc6f6fb68-dx2nc" event={"ID":"01f2f326-18ee-4ee2-823b-09ccf4cfefc1","Type":"ContainerDied","Data":"76137802ba0024d949385afabbbef9d670cc6c9ee46b21078f76e6b2346e25fa"} Jan 26 11:37:39 crc kubenswrapper[4867]: I0126 11:37:39.511742 4867 scope.go:117] "RemoveContainer" containerID="76137802ba0024d949385afabbbef9d670cc6c9ee46b21078f76e6b2346e25fa" Jan 26 11:37:39 crc kubenswrapper[4867]: I0126 11:37:39.535040 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-d424-account-create-update-ldcwb" podStartSLOduration=4.535012758 podStartE2EDuration="4.535012758s" podCreationTimestamp="2026-01-26 11:37:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 11:37:39.485926248 +0000 UTC m=+1209.184501158" watchObservedRunningTime="2026-01-26 11:37:39.535012758 +0000 UTC m=+1209.233587668" Jan 26 11:37:39 crc kubenswrapper[4867]: I0126 11:37:39.611043 4867 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-b41e-account-create-update-jbll2" podStartSLOduration=3.611004948 podStartE2EDuration="3.611004948s" podCreationTimestamp="2026-01-26 11:37:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 11:37:39.601830403 +0000 UTC m=+1209.300405333" watchObservedRunningTime="2026-01-26 11:37:39.611004948 +0000 UTC m=+1209.309579858" Jan 26 11:37:40 crc kubenswrapper[4867]: I0126 11:37:40.335214 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-647b685f9-49zj6" Jan 26 11:37:40 crc kubenswrapper[4867]: I0126 11:37:40.424200 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-7bc467f664-6zfb4"] Jan 26 11:37:40 crc kubenswrapper[4867]: I0126 11:37:40.424491 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-7bc467f664-6zfb4" podUID="42c3a4bd-76c9-4a0b-b252-775ce7ebc2aa" containerName="neutron-api" containerID="cri-o://4f6c92f19483e300995347185d9e8324fad1c1ce13368cce52ea9815196b3c31" gracePeriod=30 Jan 26 11:37:40 crc kubenswrapper[4867]: I0126 11:37:40.424970 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-7bc467f664-6zfb4" podUID="42c3a4bd-76c9-4a0b-b252-775ce7ebc2aa" containerName="neutron-httpd" containerID="cri-o://e845a45e251d9a54a725547beeb75eb2ecc8c510a9f1dd145012f4ff427bd21b" gracePeriod=30 Jan 26 11:37:40 crc kubenswrapper[4867]: I0126 11:37:40.534416 4867 generic.go:334] "Generic (PLEG): container finished" podID="db3da9ad-c4e2-4dc6-aec5-fefa3d9efa8a" containerID="c4346b76358d61eef41e7eb32623539e21c4bd95540c30c0a5fba2dd48a383d1" exitCode=0 Jan 26 11:37:40 crc kubenswrapper[4867]: I0126 11:37:40.534533 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-b41e-account-create-update-jbll2" 
event={"ID":"db3da9ad-c4e2-4dc6-aec5-fefa3d9efa8a","Type":"ContainerDied","Data":"c4346b76358d61eef41e7eb32623539e21c4bd95540c30c0a5fba2dd48a383d1"} Jan 26 11:37:40 crc kubenswrapper[4867]: I0126 11:37:40.579824 4867 generic.go:334] "Generic (PLEG): container finished" podID="10fb1d1b-a85c-4eb8-a5ae-04d49b5ef7af" containerID="edefe5fa9f77f38dad2a162f47c38dfa150661aec613e44f236eaed1f74fe7b0" exitCode=0 Jan 26 11:37:40 crc kubenswrapper[4867]: I0126 11:37:40.597957 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-6dc6f6fb68-dx2nc" event={"ID":"01f2f326-18ee-4ee2-823b-09ccf4cfefc1","Type":"ContainerStarted","Data":"ada1468fc474d33da06e1dadeba350e51cdb04042380f220a328f22f0f9a4cd1"} Jan 26 11:37:40 crc kubenswrapper[4867]: I0126 11:37:40.598006 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-bab4-account-create-update-zsjwg" event={"ID":"10fb1d1b-a85c-4eb8-a5ae-04d49b5ef7af","Type":"ContainerDied","Data":"edefe5fa9f77f38dad2a162f47c38dfa150661aec613e44f236eaed1f74fe7b0"} Jan 26 11:37:40 crc kubenswrapper[4867]: I0126 11:37:40.598024 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"ed0b64b8-8e76-407e-8be8-c6b6cc5d4b75","Type":"ContainerStarted","Data":"e61b23d5fd5fbeab7d2e99d0e5dc742dae700033515282655011ac1394c4f815"} Jan 26 11:37:40 crc kubenswrapper[4867]: I0126 11:37:40.601454 4867 generic.go:334] "Generic (PLEG): container finished" podID="91e79247-9d54-4108-a975-17c7603c3f96" containerID="224001dc6b53cc4709a74659f966d8e760d90e26822b7291b488ce839e843158" exitCode=0 Jan 26 11:37:40 crc kubenswrapper[4867]: I0126 11:37:40.601540 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-d424-account-create-update-ldcwb" event={"ID":"91e79247-9d54-4108-a975-17c7603c3f96","Type":"ContainerDied","Data":"224001dc6b53cc4709a74659f966d8e760d90e26822b7291b488ce839e843158"} Jan 26 11:37:40 crc kubenswrapper[4867]: I0126 11:37:40.608621 4867 
generic.go:334] "Generic (PLEG): container finished" podID="7f4d3e01-1c2e-45ae-952f-c05b658b2aa4" containerID="d84850c6e99bb1cb07f9e9a3294fa187af413dbbbab5b9a89c1d48b0eee20fdc" exitCode=0 Jan 26 11:37:40 crc kubenswrapper[4867]: I0126 11:37:40.608741 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-jxjlp" event={"ID":"7f4d3e01-1c2e-45ae-952f-c05b658b2aa4","Type":"ContainerDied","Data":"d84850c6e99bb1cb07f9e9a3294fa187af413dbbbab5b9a89c1d48b0eee20fdc"} Jan 26 11:37:40 crc kubenswrapper[4867]: I0126 11:37:40.611832 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"6fc7f66f-7989-42ac-a3c8-cd88b25f9c53","Type":"ContainerStarted","Data":"eb5afffea57e3591f543569e9a90bfcf20088b03bf6f8f31899b1d451f8f680b"} Jan 26 11:37:40 crc kubenswrapper[4867]: I0126 11:37:40.620930 4867 generic.go:334] "Generic (PLEG): container finished" podID="a2167905-2856-4125-81fd-a2430fe558f9" containerID="1a4d115ab295e9eb8edfb2102fe14586cbb812f3d1c01ec1525e6027a548e3ec" exitCode=1 Jan 26 11:37:40 crc kubenswrapper[4867]: I0126 11:37:40.621004 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-neutron-agent-795fb7c76b-9ndwh" event={"ID":"a2167905-2856-4125-81fd-a2430fe558f9","Type":"ContainerDied","Data":"1a4d115ab295e9eb8edfb2102fe14586cbb812f3d1c01ec1525e6027a548e3ec"} Jan 26 11:37:40 crc kubenswrapper[4867]: I0126 11:37:40.621745 4867 scope.go:117] "RemoveContainer" containerID="1a4d115ab295e9eb8edfb2102fe14586cbb812f3d1c01ec1525e6027a548e3ec" Jan 26 11:37:40 crc kubenswrapper[4867]: I0126 11:37:40.634778 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"58cc3b2f-c49e-4c16-9a26-342c8b2c8878","Type":"ContainerStarted","Data":"339a474cf04b4b69dc709fac6db03830fe5df250db98046875172320efdc43cd"} Jan 26 11:37:40 crc kubenswrapper[4867]: I0126 11:37:40.883837 4867 pod_startup_latency_tracker.go:104] "Observed 
pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=4.88379949 podStartE2EDuration="4.88379949s" podCreationTimestamp="2026-01-26 11:37:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 11:37:40.877360167 +0000 UTC m=+1210.575935077" watchObservedRunningTime="2026-01-26 11:37:40.88379949 +0000 UTC m=+1210.582374410" Jan 26 11:37:41 crc kubenswrapper[4867]: I0126 11:37:41.385482 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-5668f68b6c-7674j" Jan 26 11:37:41 crc kubenswrapper[4867]: I0126 11:37:41.415092 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-5668f68b6c-7674j" Jan 26 11:37:41 crc kubenswrapper[4867]: I0126 11:37:41.653358 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Jan 26 11:37:41 crc kubenswrapper[4867]: I0126 11:37:41.679638 4867 generic.go:334] "Generic (PLEG): container finished" podID="42c3a4bd-76c9-4a0b-b252-775ce7ebc2aa" containerID="e845a45e251d9a54a725547beeb75eb2ecc8c510a9f1dd145012f4ff427bd21b" exitCode=0 Jan 26 11:37:41 crc kubenswrapper[4867]: I0126 11:37:41.679726 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7bc467f664-6zfb4" event={"ID":"42c3a4bd-76c9-4a0b-b252-775ce7ebc2aa","Type":"ContainerDied","Data":"e845a45e251d9a54a725547beeb75eb2ecc8c510a9f1dd145012f4ff427bd21b"} Jan 26 11:37:41 crc kubenswrapper[4867]: I0126 11:37:41.691557 4867 generic.go:334] "Generic (PLEG): container finished" podID="01f2f326-18ee-4ee2-823b-09ccf4cfefc1" containerID="ada1468fc474d33da06e1dadeba350e51cdb04042380f220a328f22f0f9a4cd1" exitCode=1 Jan 26 11:37:41 crc kubenswrapper[4867]: I0126 11:37:41.692879 4867 scope.go:117] "RemoveContainer" containerID="ada1468fc474d33da06e1dadeba350e51cdb04042380f220a328f22f0f9a4cd1" Jan 26 11:37:41 crc 
kubenswrapper[4867]: E0126 11:37:41.693157 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ironic-api\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ironic-api pod=ironic-6dc6f6fb68-dx2nc_openstack(01f2f326-18ee-4ee2-823b-09ccf4cfefc1)\"" pod="openstack/ironic-6dc6f6fb68-dx2nc" podUID="01f2f326-18ee-4ee2-823b-09ccf4cfefc1" Jan 26 11:37:41 crc kubenswrapper[4867]: I0126 11:37:41.693585 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-6dc6f6fb68-dx2nc" event={"ID":"01f2f326-18ee-4ee2-823b-09ccf4cfefc1","Type":"ContainerDied","Data":"ada1468fc474d33da06e1dadeba350e51cdb04042380f220a328f22f0f9a4cd1"} Jan 26 11:37:41 crc kubenswrapper[4867]: I0126 11:37:41.693642 4867 scope.go:117] "RemoveContainer" containerID="76137802ba0024d949385afabbbef9d670cc6c9ee46b21078f76e6b2346e25fa" Jan 26 11:37:42 crc kubenswrapper[4867]: I0126 11:37:42.456644 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ironic-5f459cfdcb-t5qhs" Jan 26 11:37:42 crc kubenswrapper[4867]: I0126 11:37:42.532738 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ironic-6dc6f6fb68-dx2nc"] Jan 26 11:37:42 crc kubenswrapper[4867]: I0126 11:37:42.701266 4867 scope.go:117] "RemoveContainer" containerID="ada1468fc474d33da06e1dadeba350e51cdb04042380f220a328f22f0f9a4cd1" Jan 26 11:37:42 crc kubenswrapper[4867]: E0126 11:37:42.701498 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ironic-api\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ironic-api pod=ironic-6dc6f6fb68-dx2nc_openstack(01f2f326-18ee-4ee2-823b-09ccf4cfefc1)\"" pod="openstack/ironic-6dc6f6fb68-dx2nc" podUID="01f2f326-18ee-4ee2-823b-09ccf4cfefc1" Jan 26 11:37:42 crc kubenswrapper[4867]: I0126 11:37:42.962503 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 26 11:37:42 crc kubenswrapper[4867]: 
I0126 11:37:42.971314 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ironic-inspector-db-sync-256sm"] Jan 26 11:37:42 crc kubenswrapper[4867]: I0126 11:37:42.973207 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ironic-inspector-db-sync-256sm" Jan 26 11:37:42 crc kubenswrapper[4867]: I0126 11:37:42.985839 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ironic-inspector-db-sync-256sm"] Jan 26 11:37:42 crc kubenswrapper[4867]: I0126 11:37:42.986055 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ironic-inspector-config-data" Jan 26 11:37:42 crc kubenswrapper[4867]: I0126 11:37:42.986332 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ironic-inspector-scripts" Jan 26 11:37:43 crc kubenswrapper[4867]: I0126 11:37:43.027336 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ironic-neutron-agent-795fb7c76b-9ndwh" Jan 26 11:37:43 crc kubenswrapper[4867]: I0126 11:37:43.028141 4867 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/ironic-neutron-agent-795fb7c76b-9ndwh" Jan 26 11:37:43 crc kubenswrapper[4867]: I0126 11:37:43.125264 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qkv7x\" (UniqueName: \"kubernetes.io/projected/586082ca-8462-421f-940d-25a9e1a9e945-kube-api-access-qkv7x\") pod \"ironic-inspector-db-sync-256sm\" (UID: \"586082ca-8462-421f-940d-25a9e1a9e945\") " pod="openstack/ironic-inspector-db-sync-256sm" Jan 26 11:37:43 crc kubenswrapper[4867]: I0126 11:37:43.125336 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/586082ca-8462-421f-940d-25a9e1a9e945-scripts\") pod \"ironic-inspector-db-sync-256sm\" (UID: \"586082ca-8462-421f-940d-25a9e1a9e945\") " 
pod="openstack/ironic-inspector-db-sync-256sm" Jan 26 11:37:43 crc kubenswrapper[4867]: I0126 11:37:43.125447 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/586082ca-8462-421f-940d-25a9e1a9e945-combined-ca-bundle\") pod \"ironic-inspector-db-sync-256sm\" (UID: \"586082ca-8462-421f-940d-25a9e1a9e945\") " pod="openstack/ironic-inspector-db-sync-256sm" Jan 26 11:37:43 crc kubenswrapper[4867]: I0126 11:37:43.125482 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/586082ca-8462-421f-940d-25a9e1a9e945-config\") pod \"ironic-inspector-db-sync-256sm\" (UID: \"586082ca-8462-421f-940d-25a9e1a9e945\") " pod="openstack/ironic-inspector-db-sync-256sm" Jan 26 11:37:43 crc kubenswrapper[4867]: I0126 11:37:43.125565 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-ironic-inspector-dhcp-hostsdir\" (UniqueName: \"kubernetes.io/empty-dir/586082ca-8462-421f-940d-25a9e1a9e945-var-lib-ironic-inspector-dhcp-hostsdir\") pod \"ironic-inspector-db-sync-256sm\" (UID: \"586082ca-8462-421f-940d-25a9e1a9e945\") " pod="openstack/ironic-inspector-db-sync-256sm" Jan 26 11:37:43 crc kubenswrapper[4867]: I0126 11:37:43.125600 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-ironic\" (UniqueName: \"kubernetes.io/empty-dir/586082ca-8462-421f-940d-25a9e1a9e945-var-lib-ironic\") pod \"ironic-inspector-db-sync-256sm\" (UID: \"586082ca-8462-421f-940d-25a9e1a9e945\") " pod="openstack/ironic-inspector-db-sync-256sm" Jan 26 11:37:43 crc kubenswrapper[4867]: I0126 11:37:43.125629 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-podinfo\" (UniqueName: 
\"kubernetes.io/downward-api/586082ca-8462-421f-940d-25a9e1a9e945-etc-podinfo\") pod \"ironic-inspector-db-sync-256sm\" (UID: \"586082ca-8462-421f-940d-25a9e1a9e945\") " pod="openstack/ironic-inspector-db-sync-256sm" Jan 26 11:37:43 crc kubenswrapper[4867]: I0126 11:37:43.154147 4867 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/ironic-6dc6f6fb68-dx2nc" Jan 26 11:37:43 crc kubenswrapper[4867]: I0126 11:37:43.154246 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ironic-6dc6f6fb68-dx2nc" Jan 26 11:37:43 crc kubenswrapper[4867]: I0126 11:37:43.227670 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-ironic-inspector-dhcp-hostsdir\" (UniqueName: \"kubernetes.io/empty-dir/586082ca-8462-421f-940d-25a9e1a9e945-var-lib-ironic-inspector-dhcp-hostsdir\") pod \"ironic-inspector-db-sync-256sm\" (UID: \"586082ca-8462-421f-940d-25a9e1a9e945\") " pod="openstack/ironic-inspector-db-sync-256sm" Jan 26 11:37:43 crc kubenswrapper[4867]: I0126 11:37:43.227770 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-ironic\" (UniqueName: \"kubernetes.io/empty-dir/586082ca-8462-421f-940d-25a9e1a9e945-var-lib-ironic\") pod \"ironic-inspector-db-sync-256sm\" (UID: \"586082ca-8462-421f-940d-25a9e1a9e945\") " pod="openstack/ironic-inspector-db-sync-256sm" Jan 26 11:37:43 crc kubenswrapper[4867]: I0126 11:37:43.227826 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-podinfo\" (UniqueName: \"kubernetes.io/downward-api/586082ca-8462-421f-940d-25a9e1a9e945-etc-podinfo\") pod \"ironic-inspector-db-sync-256sm\" (UID: \"586082ca-8462-421f-940d-25a9e1a9e945\") " pod="openstack/ironic-inspector-db-sync-256sm" Jan 26 11:37:43 crc kubenswrapper[4867]: I0126 11:37:43.227864 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qkv7x\" (UniqueName: 
\"kubernetes.io/projected/586082ca-8462-421f-940d-25a9e1a9e945-kube-api-access-qkv7x\") pod \"ironic-inspector-db-sync-256sm\" (UID: \"586082ca-8462-421f-940d-25a9e1a9e945\") " pod="openstack/ironic-inspector-db-sync-256sm" Jan 26 11:37:43 crc kubenswrapper[4867]: I0126 11:37:43.227914 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/586082ca-8462-421f-940d-25a9e1a9e945-scripts\") pod \"ironic-inspector-db-sync-256sm\" (UID: \"586082ca-8462-421f-940d-25a9e1a9e945\") " pod="openstack/ironic-inspector-db-sync-256sm" Jan 26 11:37:43 crc kubenswrapper[4867]: I0126 11:37:43.228028 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/586082ca-8462-421f-940d-25a9e1a9e945-combined-ca-bundle\") pod \"ironic-inspector-db-sync-256sm\" (UID: \"586082ca-8462-421f-940d-25a9e1a9e945\") " pod="openstack/ironic-inspector-db-sync-256sm" Jan 26 11:37:43 crc kubenswrapper[4867]: I0126 11:37:43.228076 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/586082ca-8462-421f-940d-25a9e1a9e945-config\") pod \"ironic-inspector-db-sync-256sm\" (UID: \"586082ca-8462-421f-940d-25a9e1a9e945\") " pod="openstack/ironic-inspector-db-sync-256sm" Jan 26 11:37:43 crc kubenswrapper[4867]: I0126 11:37:43.228493 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-ironic-inspector-dhcp-hostsdir\" (UniqueName: \"kubernetes.io/empty-dir/586082ca-8462-421f-940d-25a9e1a9e945-var-lib-ironic-inspector-dhcp-hostsdir\") pod \"ironic-inspector-db-sync-256sm\" (UID: \"586082ca-8462-421f-940d-25a9e1a9e945\") " pod="openstack/ironic-inspector-db-sync-256sm" Jan 26 11:37:43 crc kubenswrapper[4867]: I0126 11:37:43.229256 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-ironic\" (UniqueName: 
\"kubernetes.io/empty-dir/586082ca-8462-421f-940d-25a9e1a9e945-var-lib-ironic\") pod \"ironic-inspector-db-sync-256sm\" (UID: \"586082ca-8462-421f-940d-25a9e1a9e945\") " pod="openstack/ironic-inspector-db-sync-256sm" Jan 26 11:37:43 crc kubenswrapper[4867]: I0126 11:37:43.235759 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/586082ca-8462-421f-940d-25a9e1a9e945-combined-ca-bundle\") pod \"ironic-inspector-db-sync-256sm\" (UID: \"586082ca-8462-421f-940d-25a9e1a9e945\") " pod="openstack/ironic-inspector-db-sync-256sm" Jan 26 11:37:43 crc kubenswrapper[4867]: I0126 11:37:43.249342 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/586082ca-8462-421f-940d-25a9e1a9e945-config\") pod \"ironic-inspector-db-sync-256sm\" (UID: \"586082ca-8462-421f-940d-25a9e1a9e945\") " pod="openstack/ironic-inspector-db-sync-256sm" Jan 26 11:37:43 crc kubenswrapper[4867]: I0126 11:37:43.255711 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/586082ca-8462-421f-940d-25a9e1a9e945-scripts\") pod \"ironic-inspector-db-sync-256sm\" (UID: \"586082ca-8462-421f-940d-25a9e1a9e945\") " pod="openstack/ironic-inspector-db-sync-256sm" Jan 26 11:37:43 crc kubenswrapper[4867]: I0126 11:37:43.256072 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qkv7x\" (UniqueName: \"kubernetes.io/projected/586082ca-8462-421f-940d-25a9e1a9e945-kube-api-access-qkv7x\") pod \"ironic-inspector-db-sync-256sm\" (UID: \"586082ca-8462-421f-940d-25a9e1a9e945\") " pod="openstack/ironic-inspector-db-sync-256sm" Jan 26 11:37:43 crc kubenswrapper[4867]: I0126 11:37:43.257348 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-podinfo\" (UniqueName: \"kubernetes.io/downward-api/586082ca-8462-421f-940d-25a9e1a9e945-etc-podinfo\") pod 
\"ironic-inspector-db-sync-256sm\" (UID: \"586082ca-8462-421f-940d-25a9e1a9e945\") " pod="openstack/ironic-inspector-db-sync-256sm" Jan 26 11:37:43 crc kubenswrapper[4867]: I0126 11:37:43.298887 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ironic-inspector-db-sync-256sm" Jan 26 11:37:43 crc kubenswrapper[4867]: I0126 11:37:43.720057 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ironic-6dc6f6fb68-dx2nc" podUID="01f2f326-18ee-4ee2-823b-09ccf4cfefc1" containerName="ironic-api-log" containerID="cri-o://bb6253075fc5609488246c651539bc8d261a13127a66230d6cb26775d78486a4" gracePeriod=60 Jan 26 11:37:44 crc kubenswrapper[4867]: I0126 11:37:44.022504 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-b41e-account-create-update-jbll2" Jan 26 11:37:44 crc kubenswrapper[4867]: I0126 11:37:44.030081 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-rhnn5" Jan 26 11:37:44 crc kubenswrapper[4867]: I0126 11:37:44.040963 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-zvxt9" Jan 26 11:37:44 crc kubenswrapper[4867]: I0126 11:37:44.112722 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-jxjlp" Jan 26 11:37:44 crc kubenswrapper[4867]: I0126 11:37:44.142887 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-bab4-account-create-update-zsjwg" Jan 26 11:37:44 crc kubenswrapper[4867]: I0126 11:37:44.143515 4867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-d424-account-create-update-ldcwb" Jan 26 11:37:44 crc kubenswrapper[4867]: I0126 11:37:44.149367 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n4vmq\" (UniqueName: \"kubernetes.io/projected/db3da9ad-c4e2-4dc6-aec5-fefa3d9efa8a-kube-api-access-n4vmq\") pod \"db3da9ad-c4e2-4dc6-aec5-fefa3d9efa8a\" (UID: \"db3da9ad-c4e2-4dc6-aec5-fefa3d9efa8a\") " Jan 26 11:37:44 crc kubenswrapper[4867]: I0126 11:37:44.149606 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h6rxc\" (UniqueName: \"kubernetes.io/projected/f576d352-22e9-427b-a2d1-81bff0a85eb1-kube-api-access-h6rxc\") pod \"f576d352-22e9-427b-a2d1-81bff0a85eb1\" (UID: \"f576d352-22e9-427b-a2d1-81bff0a85eb1\") " Jan 26 11:37:44 crc kubenswrapper[4867]: I0126 11:37:44.149818 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0bca6d10-3712-4078-885f-ff14590bbbe8-operator-scripts\") pod \"0bca6d10-3712-4078-885f-ff14590bbbe8\" (UID: \"0bca6d10-3712-4078-885f-ff14590bbbe8\") " Jan 26 11:37:44 crc kubenswrapper[4867]: I0126 11:37:44.149849 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/db3da9ad-c4e2-4dc6-aec5-fefa3d9efa8a-operator-scripts\") pod \"db3da9ad-c4e2-4dc6-aec5-fefa3d9efa8a\" (UID: \"db3da9ad-c4e2-4dc6-aec5-fefa3d9efa8a\") " Jan 26 11:37:44 crc kubenswrapper[4867]: I0126 11:37:44.149885 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f576d352-22e9-427b-a2d1-81bff0a85eb1-operator-scripts\") pod \"f576d352-22e9-427b-a2d1-81bff0a85eb1\" (UID: \"f576d352-22e9-427b-a2d1-81bff0a85eb1\") " Jan 26 11:37:44 crc kubenswrapper[4867]: I0126 11:37:44.149923 4867 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"kube-api-access-w4lwv\" (UniqueName: \"kubernetes.io/projected/0bca6d10-3712-4078-885f-ff14590bbbe8-kube-api-access-w4lwv\") pod \"0bca6d10-3712-4078-885f-ff14590bbbe8\" (UID: \"0bca6d10-3712-4078-885f-ff14590bbbe8\") " Jan 26 11:37:44 crc kubenswrapper[4867]: I0126 11:37:44.150681 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f576d352-22e9-427b-a2d1-81bff0a85eb1-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "f576d352-22e9-427b-a2d1-81bff0a85eb1" (UID: "f576d352-22e9-427b-a2d1-81bff0a85eb1"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 11:37:44 crc kubenswrapper[4867]: I0126 11:37:44.150766 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0bca6d10-3712-4078-885f-ff14590bbbe8-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "0bca6d10-3712-4078-885f-ff14590bbbe8" (UID: "0bca6d10-3712-4078-885f-ff14590bbbe8"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 11:37:44 crc kubenswrapper[4867]: I0126 11:37:44.150948 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/db3da9ad-c4e2-4dc6-aec5-fefa3d9efa8a-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "db3da9ad-c4e2-4dc6-aec5-fefa3d9efa8a" (UID: "db3da9ad-c4e2-4dc6-aec5-fefa3d9efa8a"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 11:37:44 crc kubenswrapper[4867]: I0126 11:37:44.164080 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0bca6d10-3712-4078-885f-ff14590bbbe8-kube-api-access-w4lwv" (OuterVolumeSpecName: "kube-api-access-w4lwv") pod "0bca6d10-3712-4078-885f-ff14590bbbe8" (UID: "0bca6d10-3712-4078-885f-ff14590bbbe8"). 
InnerVolumeSpecName "kube-api-access-w4lwv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 11:37:44 crc kubenswrapper[4867]: I0126 11:37:44.165045 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f576d352-22e9-427b-a2d1-81bff0a85eb1-kube-api-access-h6rxc" (OuterVolumeSpecName: "kube-api-access-h6rxc") pod "f576d352-22e9-427b-a2d1-81bff0a85eb1" (UID: "f576d352-22e9-427b-a2d1-81bff0a85eb1"). InnerVolumeSpecName "kube-api-access-h6rxc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 11:37:44 crc kubenswrapper[4867]: I0126 11:37:44.172758 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/db3da9ad-c4e2-4dc6-aec5-fefa3d9efa8a-kube-api-access-n4vmq" (OuterVolumeSpecName: "kube-api-access-n4vmq") pod "db3da9ad-c4e2-4dc6-aec5-fefa3d9efa8a" (UID: "db3da9ad-c4e2-4dc6-aec5-fefa3d9efa8a"). InnerVolumeSpecName "kube-api-access-n4vmq". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 11:37:44 crc kubenswrapper[4867]: I0126 11:37:44.271204 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g7j9t\" (UniqueName: \"kubernetes.io/projected/10fb1d1b-a85c-4eb8-a5ae-04d49b5ef7af-kube-api-access-g7j9t\") pod \"10fb1d1b-a85c-4eb8-a5ae-04d49b5ef7af\" (UID: \"10fb1d1b-a85c-4eb8-a5ae-04d49b5ef7af\") " Jan 26 11:37:44 crc kubenswrapper[4867]: I0126 11:37:44.271703 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/91e79247-9d54-4108-a975-17c7603c3f96-operator-scripts\") pod \"91e79247-9d54-4108-a975-17c7603c3f96\" (UID: \"91e79247-9d54-4108-a975-17c7603c3f96\") " Jan 26 11:37:44 crc kubenswrapper[4867]: I0126 11:37:44.271754 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mtwnx\" (UniqueName: 
\"kubernetes.io/projected/7f4d3e01-1c2e-45ae-952f-c05b658b2aa4-kube-api-access-mtwnx\") pod \"7f4d3e01-1c2e-45ae-952f-c05b658b2aa4\" (UID: \"7f4d3e01-1c2e-45ae-952f-c05b658b2aa4\") " Jan 26 11:37:44 crc kubenswrapper[4867]: I0126 11:37:44.271789 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7f4d3e01-1c2e-45ae-952f-c05b658b2aa4-operator-scripts\") pod \"7f4d3e01-1c2e-45ae-952f-c05b658b2aa4\" (UID: \"7f4d3e01-1c2e-45ae-952f-c05b658b2aa4\") " Jan 26 11:37:44 crc kubenswrapper[4867]: I0126 11:37:44.271829 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/10fb1d1b-a85c-4eb8-a5ae-04d49b5ef7af-operator-scripts\") pod \"10fb1d1b-a85c-4eb8-a5ae-04d49b5ef7af\" (UID: \"10fb1d1b-a85c-4eb8-a5ae-04d49b5ef7af\") " Jan 26 11:37:44 crc kubenswrapper[4867]: I0126 11:37:44.271851 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xg8tv\" (UniqueName: \"kubernetes.io/projected/91e79247-9d54-4108-a975-17c7603c3f96-kube-api-access-xg8tv\") pod \"91e79247-9d54-4108-a975-17c7603c3f96\" (UID: \"91e79247-9d54-4108-a975-17c7603c3f96\") " Jan 26 11:37:44 crc kubenswrapper[4867]: I0126 11:37:44.272341 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n4vmq\" (UniqueName: \"kubernetes.io/projected/db3da9ad-c4e2-4dc6-aec5-fefa3d9efa8a-kube-api-access-n4vmq\") on node \"crc\" DevicePath \"\"" Jan 26 11:37:44 crc kubenswrapper[4867]: I0126 11:37:44.272353 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h6rxc\" (UniqueName: \"kubernetes.io/projected/f576d352-22e9-427b-a2d1-81bff0a85eb1-kube-api-access-h6rxc\") on node \"crc\" DevicePath \"\"" Jan 26 11:37:44 crc kubenswrapper[4867]: I0126 11:37:44.272363 4867 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/0bca6d10-3712-4078-885f-ff14590bbbe8-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 11:37:44 crc kubenswrapper[4867]: I0126 11:37:44.272372 4867 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/db3da9ad-c4e2-4dc6-aec5-fefa3d9efa8a-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 11:37:44 crc kubenswrapper[4867]: I0126 11:37:44.272381 4867 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f576d352-22e9-427b-a2d1-81bff0a85eb1-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 11:37:44 crc kubenswrapper[4867]: I0126 11:37:44.272397 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w4lwv\" (UniqueName: \"kubernetes.io/projected/0bca6d10-3712-4078-885f-ff14590bbbe8-kube-api-access-w4lwv\") on node \"crc\" DevicePath \"\"" Jan 26 11:37:44 crc kubenswrapper[4867]: I0126 11:37:44.272547 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/91e79247-9d54-4108-a975-17c7603c3f96-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "91e79247-9d54-4108-a975-17c7603c3f96" (UID: "91e79247-9d54-4108-a975-17c7603c3f96"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 11:37:44 crc kubenswrapper[4867]: I0126 11:37:44.273168 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7f4d3e01-1c2e-45ae-952f-c05b658b2aa4-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "7f4d3e01-1c2e-45ae-952f-c05b658b2aa4" (UID: "7f4d3e01-1c2e-45ae-952f-c05b658b2aa4"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 11:37:44 crc kubenswrapper[4867]: I0126 11:37:44.273797 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/10fb1d1b-a85c-4eb8-a5ae-04d49b5ef7af-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "10fb1d1b-a85c-4eb8-a5ae-04d49b5ef7af" (UID: "10fb1d1b-a85c-4eb8-a5ae-04d49b5ef7af"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 11:37:44 crc kubenswrapper[4867]: I0126 11:37:44.313156 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/91e79247-9d54-4108-a975-17c7603c3f96-kube-api-access-xg8tv" (OuterVolumeSpecName: "kube-api-access-xg8tv") pod "91e79247-9d54-4108-a975-17c7603c3f96" (UID: "91e79247-9d54-4108-a975-17c7603c3f96"). InnerVolumeSpecName "kube-api-access-xg8tv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 11:37:44 crc kubenswrapper[4867]: I0126 11:37:44.314779 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/10fb1d1b-a85c-4eb8-a5ae-04d49b5ef7af-kube-api-access-g7j9t" (OuterVolumeSpecName: "kube-api-access-g7j9t") pod "10fb1d1b-a85c-4eb8-a5ae-04d49b5ef7af" (UID: "10fb1d1b-a85c-4eb8-a5ae-04d49b5ef7af"). InnerVolumeSpecName "kube-api-access-g7j9t". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 11:37:44 crc kubenswrapper[4867]: I0126 11:37:44.317026 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7f4d3e01-1c2e-45ae-952f-c05b658b2aa4-kube-api-access-mtwnx" (OuterVolumeSpecName: "kube-api-access-mtwnx") pod "7f4d3e01-1c2e-45ae-952f-c05b658b2aa4" (UID: "7f4d3e01-1c2e-45ae-952f-c05b658b2aa4"). InnerVolumeSpecName "kube-api-access-mtwnx". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 11:37:44 crc kubenswrapper[4867]: I0126 11:37:44.374533 4867 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/91e79247-9d54-4108-a975-17c7603c3f96-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 11:37:44 crc kubenswrapper[4867]: I0126 11:37:44.374577 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mtwnx\" (UniqueName: \"kubernetes.io/projected/7f4d3e01-1c2e-45ae-952f-c05b658b2aa4-kube-api-access-mtwnx\") on node \"crc\" DevicePath \"\"" Jan 26 11:37:44 crc kubenswrapper[4867]: I0126 11:37:44.374590 4867 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7f4d3e01-1c2e-45ae-952f-c05b658b2aa4-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 11:37:44 crc kubenswrapper[4867]: I0126 11:37:44.374604 4867 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/10fb1d1b-a85c-4eb8-a5ae-04d49b5ef7af-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 11:37:44 crc kubenswrapper[4867]: I0126 11:37:44.374617 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xg8tv\" (UniqueName: \"kubernetes.io/projected/91e79247-9d54-4108-a975-17c7603c3f96-kube-api-access-xg8tv\") on node \"crc\" DevicePath \"\"" Jan 26 11:37:44 crc kubenswrapper[4867]: I0126 11:37:44.374627 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g7j9t\" (UniqueName: \"kubernetes.io/projected/10fb1d1b-a85c-4eb8-a5ae-04d49b5ef7af-kube-api-access-g7j9t\") on node \"crc\" DevicePath \"\"" Jan 26 11:37:44 crc kubenswrapper[4867]: I0126 11:37:44.488945 4867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ironic-6dc6f6fb68-dx2nc" Jan 26 11:37:44 crc kubenswrapper[4867]: I0126 11:37:44.581036 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/01f2f326-18ee-4ee2-823b-09ccf4cfefc1-config-data-merged\") pod \"01f2f326-18ee-4ee2-823b-09ccf4cfefc1\" (UID: \"01f2f326-18ee-4ee2-823b-09ccf4cfefc1\") " Jan 26 11:37:44 crc kubenswrapper[4867]: I0126 11:37:44.582708 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/01f2f326-18ee-4ee2-823b-09ccf4cfefc1-config-data-merged" (OuterVolumeSpecName: "config-data-merged") pod "01f2f326-18ee-4ee2-823b-09ccf4cfefc1" (UID: "01f2f326-18ee-4ee2-823b-09ccf4cfefc1"). InnerVolumeSpecName "config-data-merged". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 11:37:44 crc kubenswrapper[4867]: I0126 11:37:44.582804 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vfxtl\" (UniqueName: \"kubernetes.io/projected/01f2f326-18ee-4ee2-823b-09ccf4cfefc1-kube-api-access-vfxtl\") pod \"01f2f326-18ee-4ee2-823b-09ccf4cfefc1\" (UID: \"01f2f326-18ee-4ee2-823b-09ccf4cfefc1\") " Jan 26 11:37:44 crc kubenswrapper[4867]: I0126 11:37:44.582890 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-podinfo\" (UniqueName: \"kubernetes.io/downward-api/01f2f326-18ee-4ee2-823b-09ccf4cfefc1-etc-podinfo\") pod \"01f2f326-18ee-4ee2-823b-09ccf4cfefc1\" (UID: \"01f2f326-18ee-4ee2-823b-09ccf4cfefc1\") " Jan 26 11:37:44 crc kubenswrapper[4867]: I0126 11:37:44.584531 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/01f2f326-18ee-4ee2-823b-09ccf4cfefc1-config-data-custom\") pod \"01f2f326-18ee-4ee2-823b-09ccf4cfefc1\" (UID: \"01f2f326-18ee-4ee2-823b-09ccf4cfefc1\") " Jan 26 11:37:44 crc kubenswrapper[4867]: 
I0126 11:37:44.585257 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/01f2f326-18ee-4ee2-823b-09ccf4cfefc1-scripts\") pod \"01f2f326-18ee-4ee2-823b-09ccf4cfefc1\" (UID: \"01f2f326-18ee-4ee2-823b-09ccf4cfefc1\") " Jan 26 11:37:44 crc kubenswrapper[4867]: I0126 11:37:44.585315 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/01f2f326-18ee-4ee2-823b-09ccf4cfefc1-combined-ca-bundle\") pod \"01f2f326-18ee-4ee2-823b-09ccf4cfefc1\" (UID: \"01f2f326-18ee-4ee2-823b-09ccf4cfefc1\") " Jan 26 11:37:44 crc kubenswrapper[4867]: I0126 11:37:44.585451 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/01f2f326-18ee-4ee2-823b-09ccf4cfefc1-config-data\") pod \"01f2f326-18ee-4ee2-823b-09ccf4cfefc1\" (UID: \"01f2f326-18ee-4ee2-823b-09ccf4cfefc1\") " Jan 26 11:37:44 crc kubenswrapper[4867]: I0126 11:37:44.585538 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/01f2f326-18ee-4ee2-823b-09ccf4cfefc1-logs\") pod \"01f2f326-18ee-4ee2-823b-09ccf4cfefc1\" (UID: \"01f2f326-18ee-4ee2-823b-09ccf4cfefc1\") " Jan 26 11:37:44 crc kubenswrapper[4867]: I0126 11:37:44.587618 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/01f2f326-18ee-4ee2-823b-09ccf4cfefc1-logs" (OuterVolumeSpecName: "logs") pod "01f2f326-18ee-4ee2-823b-09ccf4cfefc1" (UID: "01f2f326-18ee-4ee2-823b-09ccf4cfefc1"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 11:37:44 crc kubenswrapper[4867]: I0126 11:37:44.592759 4867 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/01f2f326-18ee-4ee2-823b-09ccf4cfefc1-logs\") on node \"crc\" DevicePath \"\"" Jan 26 11:37:44 crc kubenswrapper[4867]: I0126 11:37:44.592780 4867 reconciler_common.go:293] "Volume detached for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/01f2f326-18ee-4ee2-823b-09ccf4cfefc1-config-data-merged\") on node \"crc\" DevicePath \"\"" Jan 26 11:37:44 crc kubenswrapper[4867]: I0126 11:37:44.594319 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/01f2f326-18ee-4ee2-823b-09ccf4cfefc1-etc-podinfo" (OuterVolumeSpecName: "etc-podinfo") pod "01f2f326-18ee-4ee2-823b-09ccf4cfefc1" (UID: "01f2f326-18ee-4ee2-823b-09ccf4cfefc1"). InnerVolumeSpecName "etc-podinfo". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Jan 26 11:37:44 crc kubenswrapper[4867]: I0126 11:37:44.601056 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01f2f326-18ee-4ee2-823b-09ccf4cfefc1-scripts" (OuterVolumeSpecName: "scripts") pod "01f2f326-18ee-4ee2-823b-09ccf4cfefc1" (UID: "01f2f326-18ee-4ee2-823b-09ccf4cfefc1"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 11:37:44 crc kubenswrapper[4867]: I0126 11:37:44.605447 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01f2f326-18ee-4ee2-823b-09ccf4cfefc1-kube-api-access-vfxtl" (OuterVolumeSpecName: "kube-api-access-vfxtl") pod "01f2f326-18ee-4ee2-823b-09ccf4cfefc1" (UID: "01f2f326-18ee-4ee2-823b-09ccf4cfefc1"). InnerVolumeSpecName "kube-api-access-vfxtl". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 11:37:44 crc kubenswrapper[4867]: I0126 11:37:44.630369 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01f2f326-18ee-4ee2-823b-09ccf4cfefc1-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "01f2f326-18ee-4ee2-823b-09ccf4cfefc1" (UID: "01f2f326-18ee-4ee2-823b-09ccf4cfefc1"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 11:37:44 crc kubenswrapper[4867]: I0126 11:37:44.691799 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01f2f326-18ee-4ee2-823b-09ccf4cfefc1-config-data" (OuterVolumeSpecName: "config-data") pod "01f2f326-18ee-4ee2-823b-09ccf4cfefc1" (UID: "01f2f326-18ee-4ee2-823b-09ccf4cfefc1"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 11:37:44 crc kubenswrapper[4867]: I0126 11:37:44.698890 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vfxtl\" (UniqueName: \"kubernetes.io/projected/01f2f326-18ee-4ee2-823b-09ccf4cfefc1-kube-api-access-vfxtl\") on node \"crc\" DevicePath \"\"" Jan 26 11:37:44 crc kubenswrapper[4867]: I0126 11:37:44.698931 4867 reconciler_common.go:293] "Volume detached for volume \"etc-podinfo\" (UniqueName: \"kubernetes.io/downward-api/01f2f326-18ee-4ee2-823b-09ccf4cfefc1-etc-podinfo\") on node \"crc\" DevicePath \"\"" Jan 26 11:37:44 crc kubenswrapper[4867]: I0126 11:37:44.698943 4867 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/01f2f326-18ee-4ee2-823b-09ccf4cfefc1-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 26 11:37:44 crc kubenswrapper[4867]: I0126 11:37:44.698957 4867 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/01f2f326-18ee-4ee2-823b-09ccf4cfefc1-scripts\") on node \"crc\" DevicePath 
\"\"" Jan 26 11:37:44 crc kubenswrapper[4867]: I0126 11:37:44.698967 4867 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/01f2f326-18ee-4ee2-823b-09ccf4cfefc1-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 11:37:44 crc kubenswrapper[4867]: I0126 11:37:44.740972 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01f2f326-18ee-4ee2-823b-09ccf4cfefc1-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "01f2f326-18ee-4ee2-823b-09ccf4cfefc1" (UID: "01f2f326-18ee-4ee2-823b-09ccf4cfefc1"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 11:37:44 crc kubenswrapper[4867]: I0126 11:37:44.758800 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-b41e-account-create-update-jbll2" event={"ID":"db3da9ad-c4e2-4dc6-aec5-fefa3d9efa8a","Type":"ContainerDied","Data":"c87a3678648db932c1a37a3a5958f066559868a1d4a01ba6c303c0ba2b8654d5"} Jan 26 11:37:44 crc kubenswrapper[4867]: I0126 11:37:44.758991 4867 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c87a3678648db932c1a37a3a5958f066559868a1d4a01ba6c303c0ba2b8654d5" Jan 26 11:37:44 crc kubenswrapper[4867]: I0126 11:37:44.758943 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-b41e-account-create-update-jbll2" Jan 26 11:37:44 crc kubenswrapper[4867]: I0126 11:37:44.765975 4867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-db-create-jxjlp" Jan 26 11:37:44 crc kubenswrapper[4867]: I0126 11:37:44.766617 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-jxjlp" event={"ID":"7f4d3e01-1c2e-45ae-952f-c05b658b2aa4","Type":"ContainerDied","Data":"219240fb93929a7976abdd8f6c736937fb93b445125dbf06f71cdd62b1cbe2b6"} Jan 26 11:37:44 crc kubenswrapper[4867]: I0126 11:37:44.766656 4867 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="219240fb93929a7976abdd8f6c736937fb93b445125dbf06f71cdd62b1cbe2b6" Jan 26 11:37:44 crc kubenswrapper[4867]: I0126 11:37:44.771562 4867 generic.go:334] "Generic (PLEG): container finished" podID="01f2f326-18ee-4ee2-823b-09ccf4cfefc1" containerID="bb6253075fc5609488246c651539bc8d261a13127a66230d6cb26775d78486a4" exitCode=143 Jan 26 11:37:44 crc kubenswrapper[4867]: I0126 11:37:44.771686 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-6dc6f6fb68-dx2nc" event={"ID":"01f2f326-18ee-4ee2-823b-09ccf4cfefc1","Type":"ContainerDied","Data":"bb6253075fc5609488246c651539bc8d261a13127a66230d6cb26775d78486a4"} Jan 26 11:37:44 crc kubenswrapper[4867]: I0126 11:37:44.771716 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-6dc6f6fb68-dx2nc" event={"ID":"01f2f326-18ee-4ee2-823b-09ccf4cfefc1","Type":"ContainerDied","Data":"79ec0e03da860e28c17203449a672f3b76242d5a8c9a15cbb4fafd5ce38be6ce"} Jan 26 11:37:44 crc kubenswrapper[4867]: I0126 11:37:44.771738 4867 scope.go:117] "RemoveContainer" containerID="ada1468fc474d33da06e1dadeba350e51cdb04042380f220a328f22f0f9a4cd1" Jan 26 11:37:44 crc kubenswrapper[4867]: I0126 11:37:44.771844 4867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ironic-6dc6f6fb68-dx2nc" Jan 26 11:37:44 crc kubenswrapper[4867]: I0126 11:37:44.778472 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-zvxt9" event={"ID":"0bca6d10-3712-4078-885f-ff14590bbbe8","Type":"ContainerDied","Data":"69c6bb612cf6a174199c24e15f5036dc9dffd13d7900356d6946f599ec48a8d7"} Jan 26 11:37:44 crc kubenswrapper[4867]: I0126 11:37:44.778507 4867 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="69c6bb612cf6a174199c24e15f5036dc9dffd13d7900356d6946f599ec48a8d7" Jan 26 11:37:44 crc kubenswrapper[4867]: I0126 11:37:44.778558 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-zvxt9" Jan 26 11:37:44 crc kubenswrapper[4867]: I0126 11:37:44.789728 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-bab4-account-create-update-zsjwg" event={"ID":"10fb1d1b-a85c-4eb8-a5ae-04d49b5ef7af","Type":"ContainerDied","Data":"f2517c6d3bea2b57f7e79e44fd90b34b600a50c9930578c85fbad09e374894cb"} Jan 26 11:37:44 crc kubenswrapper[4867]: I0126 11:37:44.789789 4867 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f2517c6d3bea2b57f7e79e44fd90b34b600a50c9930578c85fbad09e374894cb" Jan 26 11:37:44 crc kubenswrapper[4867]: I0126 11:37:44.789863 4867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-bab4-account-create-update-zsjwg" Jan 26 11:37:44 crc kubenswrapper[4867]: I0126 11:37:44.801146 4867 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/01f2f326-18ee-4ee2-823b-09ccf4cfefc1-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 11:37:44 crc kubenswrapper[4867]: I0126 11:37:44.815073 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-neutron-agent-795fb7c76b-9ndwh" event={"ID":"a2167905-2856-4125-81fd-a2430fe558f9","Type":"ContainerStarted","Data":"a2e9f8e363aa32e579cf2c88675ca9cdcb820dd35c57d4c32969e5dbd147d51c"} Jan 26 11:37:44 crc kubenswrapper[4867]: I0126 11:37:44.815595 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ironic-neutron-agent-795fb7c76b-9ndwh" Jan 26 11:37:44 crc kubenswrapper[4867]: I0126 11:37:44.821280 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-rhnn5" event={"ID":"f576d352-22e9-427b-a2d1-81bff0a85eb1","Type":"ContainerDied","Data":"f59958d640359c197c16cd359f33cde51bd0e99c2268d005515674e6db964355"} Jan 26 11:37:44 crc kubenswrapper[4867]: I0126 11:37:44.821316 4867 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f59958d640359c197c16cd359f33cde51bd0e99c2268d005515674e6db964355" Jan 26 11:37:44 crc kubenswrapper[4867]: I0126 11:37:44.821402 4867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-db-create-rhnn5" Jan 26 11:37:44 crc kubenswrapper[4867]: I0126 11:37:44.827131 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-d424-account-create-update-ldcwb" event={"ID":"91e79247-9d54-4108-a975-17c7603c3f96","Type":"ContainerDied","Data":"73f67bb29293019f164a5e0742ef72a5fb719020f7ee7aa41799cff43d898459"} Jan 26 11:37:44 crc kubenswrapper[4867]: I0126 11:37:44.827171 4867 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="73f67bb29293019f164a5e0742ef72a5fb719020f7ee7aa41799cff43d898459" Jan 26 11:37:44 crc kubenswrapper[4867]: I0126 11:37:44.827251 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-d424-account-create-update-ldcwb" Jan 26 11:37:44 crc kubenswrapper[4867]: I0126 11:37:44.897271 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ironic-inspector-db-sync-256sm"] Jan 26 11:37:44 crc kubenswrapper[4867]: I0126 11:37:44.907765 4867 scope.go:117] "RemoveContainer" containerID="bb6253075fc5609488246c651539bc8d261a13127a66230d6cb26775d78486a4" Jan 26 11:37:44 crc kubenswrapper[4867]: I0126 11:37:44.947510 4867 scope.go:117] "RemoveContainer" containerID="0ab92e7529c004a8b1260a771503d54719c3d925bc313b142ad7ce0bbf3864c1" Jan 26 11:37:44 crc kubenswrapper[4867]: I0126 11:37:44.948145 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ironic-6dc6f6fb68-dx2nc"] Jan 26 11:37:44 crc kubenswrapper[4867]: I0126 11:37:44.962038 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ironic-6dc6f6fb68-dx2nc"] Jan 26 11:37:44 crc kubenswrapper[4867]: I0126 11:37:44.995452 4867 scope.go:117] "RemoveContainer" containerID="ada1468fc474d33da06e1dadeba350e51cdb04042380f220a328f22f0f9a4cd1" Jan 26 11:37:44 crc kubenswrapper[4867]: E0126 11:37:44.997129 4867 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = 
could not find container \"ada1468fc474d33da06e1dadeba350e51cdb04042380f220a328f22f0f9a4cd1\": container with ID starting with ada1468fc474d33da06e1dadeba350e51cdb04042380f220a328f22f0f9a4cd1 not found: ID does not exist" containerID="ada1468fc474d33da06e1dadeba350e51cdb04042380f220a328f22f0f9a4cd1" Jan 26 11:37:44 crc kubenswrapper[4867]: I0126 11:37:44.997186 4867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ada1468fc474d33da06e1dadeba350e51cdb04042380f220a328f22f0f9a4cd1"} err="failed to get container status \"ada1468fc474d33da06e1dadeba350e51cdb04042380f220a328f22f0f9a4cd1\": rpc error: code = NotFound desc = could not find container \"ada1468fc474d33da06e1dadeba350e51cdb04042380f220a328f22f0f9a4cd1\": container with ID starting with ada1468fc474d33da06e1dadeba350e51cdb04042380f220a328f22f0f9a4cd1 not found: ID does not exist" Jan 26 11:37:44 crc kubenswrapper[4867]: I0126 11:37:44.997240 4867 scope.go:117] "RemoveContainer" containerID="bb6253075fc5609488246c651539bc8d261a13127a66230d6cb26775d78486a4" Jan 26 11:37:45 crc kubenswrapper[4867]: E0126 11:37:45.001253 4867 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bb6253075fc5609488246c651539bc8d261a13127a66230d6cb26775d78486a4\": container with ID starting with bb6253075fc5609488246c651539bc8d261a13127a66230d6cb26775d78486a4 not found: ID does not exist" containerID="bb6253075fc5609488246c651539bc8d261a13127a66230d6cb26775d78486a4" Jan 26 11:37:45 crc kubenswrapper[4867]: I0126 11:37:45.001306 4867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bb6253075fc5609488246c651539bc8d261a13127a66230d6cb26775d78486a4"} err="failed to get container status \"bb6253075fc5609488246c651539bc8d261a13127a66230d6cb26775d78486a4\": rpc error: code = NotFound desc = could not find container 
\"bb6253075fc5609488246c651539bc8d261a13127a66230d6cb26775d78486a4\": container with ID starting with bb6253075fc5609488246c651539bc8d261a13127a66230d6cb26775d78486a4 not found: ID does not exist" Jan 26 11:37:45 crc kubenswrapper[4867]: I0126 11:37:45.001341 4867 scope.go:117] "RemoveContainer" containerID="0ab92e7529c004a8b1260a771503d54719c3d925bc313b142ad7ce0bbf3864c1" Jan 26 11:37:45 crc kubenswrapper[4867]: E0126 11:37:45.001755 4867 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0ab92e7529c004a8b1260a771503d54719c3d925bc313b142ad7ce0bbf3864c1\": container with ID starting with 0ab92e7529c004a8b1260a771503d54719c3d925bc313b142ad7ce0bbf3864c1 not found: ID does not exist" containerID="0ab92e7529c004a8b1260a771503d54719c3d925bc313b142ad7ce0bbf3864c1" Jan 26 11:37:45 crc kubenswrapper[4867]: I0126 11:37:45.001797 4867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0ab92e7529c004a8b1260a771503d54719c3d925bc313b142ad7ce0bbf3864c1"} err="failed to get container status \"0ab92e7529c004a8b1260a771503d54719c3d925bc313b142ad7ce0bbf3864c1\": rpc error: code = NotFound desc = could not find container \"0ab92e7529c004a8b1260a771503d54719c3d925bc313b142ad7ce0bbf3864c1\": container with ID starting with 0ab92e7529c004a8b1260a771503d54719c3d925bc313b142ad7ce0bbf3864c1 not found: ID does not exist" Jan 26 11:37:45 crc kubenswrapper[4867]: I0126 11:37:45.850453 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"6fc7f66f-7989-42ac-a3c8-cd88b25f9c53","Type":"ContainerStarted","Data":"1407ae9b173c816d24d0a4eeb93cf09177600ef646cde71ff0836cb5cd3cad87"} Jan 26 11:37:45 crc kubenswrapper[4867]: I0126 11:37:45.857459 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" 
event={"ID":"58cc3b2f-c49e-4c16-9a26-342c8b2c8878","Type":"ContainerStarted","Data":"eb301614d84a95a8b58578b9603265579c312fdeff76ef79fad56129484e60d8"} Jan 26 11:37:45 crc kubenswrapper[4867]: I0126 11:37:45.858804 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-inspector-db-sync-256sm" event={"ID":"586082ca-8462-421f-940d-25a9e1a9e945","Type":"ContainerStarted","Data":"3c969ade96a4675f85e24eab913d88c7b597842034923e3f4bbd0a92867a8e5d"} Jan 26 11:37:45 crc kubenswrapper[4867]: I0126 11:37:45.862766 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2f4c7973-1227-4188-8be0-766b1fdcd108","Type":"ContainerStarted","Data":"cde8955787e9969f60a51eeb221b5ee8c78568bd0a74b38e2138e607f21f48cd"} Jan 26 11:37:45 crc kubenswrapper[4867]: I0126 11:37:45.881956 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=9.88193685 podStartE2EDuration="9.88193685s" podCreationTimestamp="2026-01-26 11:37:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 11:37:45.874196753 +0000 UTC m=+1215.572771663" watchObservedRunningTime="2026-01-26 11:37:45.88193685 +0000 UTC m=+1215.580511760" Jan 26 11:37:45 crc kubenswrapper[4867]: I0126 11:37:45.908092 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=9.908061107 podStartE2EDuration="9.908061107s" podCreationTimestamp="2026-01-26 11:37:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 11:37:45.899759546 +0000 UTC m=+1215.598334456" watchObservedRunningTime="2026-01-26 11:37:45.908061107 +0000 UTC m=+1215.606636017" Jan 26 11:37:46 crc kubenswrapper[4867]: W0126 11:37:46.230380 4867 watcher.go:93] Error 
while processing event ("/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0bca6d10_3712_4078_885f_ff14590bbbe8.slice": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0bca6d10_3712_4078_885f_ff14590bbbe8.slice: no such file or directory Jan 26 11:37:46 crc kubenswrapper[4867]: W0126 11:37:46.231531 4867 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod01f2f326_18ee_4ee2_823b_09ccf4cfefc1.slice/crio-0ab92e7529c004a8b1260a771503d54719c3d925bc313b142ad7ce0bbf3864c1.scope WatchSource:0}: Error finding container 0ab92e7529c004a8b1260a771503d54719c3d925bc313b142ad7ce0bbf3864c1: Status 404 returned error can't find the container with id 0ab92e7529c004a8b1260a771503d54719c3d925bc313b142ad7ce0bbf3864c1 Jan 26 11:37:46 crc kubenswrapper[4867]: W0126 11:37:46.232332 4867 watcher.go:93] Error while processing event ("/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf576d352_22e9_427b_a2d1_81bff0a85eb1.slice": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf576d352_22e9_427b_a2d1_81bff0a85eb1.slice: no such file or directory Jan 26 11:37:46 crc kubenswrapper[4867]: W0126 11:37:46.232355 4867 watcher.go:93] Error while processing event ("/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod91e79247_9d54_4108_a975_17c7603c3f96.slice": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod91e79247_9d54_4108_a975_17c7603c3f96.slice: no such file or directory Jan 26 11:37:46 crc kubenswrapper[4867]: W0126 11:37:46.254771 4867 watcher.go:93] Error while processing event 
("/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7f4d3e01_1c2e_45ae_952f_c05b658b2aa4.slice": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7f4d3e01_1c2e_45ae_952f_c05b658b2aa4.slice: no such file or directory Jan 26 11:37:46 crc kubenswrapper[4867]: W0126 11:37:46.257864 4867 watcher.go:93] Error while processing event ("/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod10fb1d1b_a85c_4eb8_a5ae_04d49b5ef7af.slice": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod10fb1d1b_a85c_4eb8_a5ae_04d49b5ef7af.slice: no such file or directory Jan 26 11:37:46 crc kubenswrapper[4867]: W0126 11:37:46.257914 4867 watcher.go:93] Error while processing event ("/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod01f2f326_18ee_4ee2_823b_09ccf4cfefc1.slice/crio-conmon-bb6253075fc5609488246c651539bc8d261a13127a66230d6cb26775d78486a4.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod01f2f326_18ee_4ee2_823b_09ccf4cfefc1.slice/crio-conmon-bb6253075fc5609488246c651539bc8d261a13127a66230d6cb26775d78486a4.scope: no such file or directory Jan 26 11:37:46 crc kubenswrapper[4867]: W0126 11:37:46.257932 4867 watcher.go:93] Error while processing event ("/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod01f2f326_18ee_4ee2_823b_09ccf4cfefc1.slice/crio-bb6253075fc5609488246c651539bc8d261a13127a66230d6cb26775d78486a4.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod01f2f326_18ee_4ee2_823b_09ccf4cfefc1.slice/crio-bb6253075fc5609488246c651539bc8d261a13127a66230d6cb26775d78486a4.scope: no such file or directory Jan 26 11:37:46 crc 
kubenswrapper[4867]: W0126 11:37:46.304198 4867 watcher.go:93] Error while processing event ("/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poddb3da9ad_c4e2_4dc6_aec5_fefa3d9efa8a.slice": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poddb3da9ad_c4e2_4dc6_aec5_fefa3d9efa8a.slice: no such file or directory Jan 26 11:37:46 crc kubenswrapper[4867]: W0126 11:37:46.304839 4867 watcher.go:93] Error while processing event ("/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod01f2f326_18ee_4ee2_823b_09ccf4cfefc1.slice/crio-conmon-76137802ba0024d949385afabbbef9d670cc6c9ee46b21078f76e6b2346e25fa.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod01f2f326_18ee_4ee2_823b_09ccf4cfefc1.slice/crio-conmon-76137802ba0024d949385afabbbef9d670cc6c9ee46b21078f76e6b2346e25fa.scope: no such file or directory Jan 26 11:37:46 crc kubenswrapper[4867]: W0126 11:37:46.304864 4867 watcher.go:93] Error while processing event ("/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod01f2f326_18ee_4ee2_823b_09ccf4cfefc1.slice/crio-76137802ba0024d949385afabbbef9d670cc6c9ee46b21078f76e6b2346e25fa.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod01f2f326_18ee_4ee2_823b_09ccf4cfefc1.slice/crio-76137802ba0024d949385afabbbef9d670cc6c9ee46b21078f76e6b2346e25fa.scope: no such file or directory Jan 26 11:37:46 crc kubenswrapper[4867]: W0126 11:37:46.304897 4867 watcher.go:93] Error while processing event ("/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod01f2f326_18ee_4ee2_823b_09ccf4cfefc1.slice/crio-conmon-ada1468fc474d33da06e1dadeba350e51cdb04042380f220a328f22f0f9a4cd1.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch 
/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod01f2f326_18ee_4ee2_823b_09ccf4cfefc1.slice/crio-conmon-ada1468fc474d33da06e1dadeba350e51cdb04042380f220a328f22f0f9a4cd1.scope: no such file or directory Jan 26 11:37:46 crc kubenswrapper[4867]: W0126 11:37:46.304913 4867 watcher.go:93] Error while processing event ("/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod01f2f326_18ee_4ee2_823b_09ccf4cfefc1.slice/crio-ada1468fc474d33da06e1dadeba350e51cdb04042380f220a328f22f0f9a4cd1.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod01f2f326_18ee_4ee2_823b_09ccf4cfefc1.slice/crio-ada1468fc474d33da06e1dadeba350e51cdb04042380f220a328f22f0f9a4cd1.scope: no such file or directory Jan 26 11:37:46 crc kubenswrapper[4867]: I0126 11:37:46.388469 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-db-sync-lh5xw"] Jan 26 11:37:46 crc kubenswrapper[4867]: E0126 11:37:46.389164 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0bca6d10-3712-4078-885f-ff14590bbbe8" containerName="mariadb-database-create" Jan 26 11:37:46 crc kubenswrapper[4867]: I0126 11:37:46.389175 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="0bca6d10-3712-4078-885f-ff14590bbbe8" containerName="mariadb-database-create" Jan 26 11:37:46 crc kubenswrapper[4867]: E0126 11:37:46.389191 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="01f2f326-18ee-4ee2-823b-09ccf4cfefc1" containerName="ironic-api" Jan 26 11:37:46 crc kubenswrapper[4867]: I0126 11:37:46.389197 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="01f2f326-18ee-4ee2-823b-09ccf4cfefc1" containerName="ironic-api" Jan 26 11:37:46 crc kubenswrapper[4867]: E0126 11:37:46.389206 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="db3da9ad-c4e2-4dc6-aec5-fefa3d9efa8a" 
containerName="mariadb-account-create-update" Jan 26 11:37:46 crc kubenswrapper[4867]: I0126 11:37:46.389263 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="db3da9ad-c4e2-4dc6-aec5-fefa3d9efa8a" containerName="mariadb-account-create-update" Jan 26 11:37:46 crc kubenswrapper[4867]: E0126 11:37:46.389272 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="01f2f326-18ee-4ee2-823b-09ccf4cfefc1" containerName="init" Jan 26 11:37:46 crc kubenswrapper[4867]: I0126 11:37:46.389278 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="01f2f326-18ee-4ee2-823b-09ccf4cfefc1" containerName="init" Jan 26 11:37:46 crc kubenswrapper[4867]: E0126 11:37:46.389288 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="10fb1d1b-a85c-4eb8-a5ae-04d49b5ef7af" containerName="mariadb-account-create-update" Jan 26 11:37:46 crc kubenswrapper[4867]: I0126 11:37:46.389294 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="10fb1d1b-a85c-4eb8-a5ae-04d49b5ef7af" containerName="mariadb-account-create-update" Jan 26 11:37:46 crc kubenswrapper[4867]: E0126 11:37:46.389323 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="91e79247-9d54-4108-a975-17c7603c3f96" containerName="mariadb-account-create-update" Jan 26 11:37:46 crc kubenswrapper[4867]: I0126 11:37:46.389332 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="91e79247-9d54-4108-a975-17c7603c3f96" containerName="mariadb-account-create-update" Jan 26 11:37:46 crc kubenswrapper[4867]: E0126 11:37:46.389343 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f576d352-22e9-427b-a2d1-81bff0a85eb1" containerName="mariadb-database-create" Jan 26 11:37:46 crc kubenswrapper[4867]: I0126 11:37:46.389351 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="f576d352-22e9-427b-a2d1-81bff0a85eb1" containerName="mariadb-database-create" Jan 26 11:37:46 crc kubenswrapper[4867]: E0126 11:37:46.389361 4867 cpu_manager.go:410] "RemoveStaleState: 
removing container" podUID="01f2f326-18ee-4ee2-823b-09ccf4cfefc1" containerName="ironic-api" Jan 26 11:37:46 crc kubenswrapper[4867]: I0126 11:37:46.389369 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="01f2f326-18ee-4ee2-823b-09ccf4cfefc1" containerName="ironic-api" Jan 26 11:37:46 crc kubenswrapper[4867]: E0126 11:37:46.389400 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="01f2f326-18ee-4ee2-823b-09ccf4cfefc1" containerName="ironic-api-log" Jan 26 11:37:46 crc kubenswrapper[4867]: I0126 11:37:46.389406 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="01f2f326-18ee-4ee2-823b-09ccf4cfefc1" containerName="ironic-api-log" Jan 26 11:37:46 crc kubenswrapper[4867]: E0126 11:37:46.389418 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7f4d3e01-1c2e-45ae-952f-c05b658b2aa4" containerName="mariadb-database-create" Jan 26 11:37:46 crc kubenswrapper[4867]: I0126 11:37:46.389424 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="7f4d3e01-1c2e-45ae-952f-c05b658b2aa4" containerName="mariadb-database-create" Jan 26 11:37:46 crc kubenswrapper[4867]: I0126 11:37:46.389605 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="db3da9ad-c4e2-4dc6-aec5-fefa3d9efa8a" containerName="mariadb-account-create-update" Jan 26 11:37:46 crc kubenswrapper[4867]: I0126 11:37:46.389625 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="91e79247-9d54-4108-a975-17c7603c3f96" containerName="mariadb-account-create-update" Jan 26 11:37:46 crc kubenswrapper[4867]: I0126 11:37:46.389634 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="7f4d3e01-1c2e-45ae-952f-c05b658b2aa4" containerName="mariadb-database-create" Jan 26 11:37:46 crc kubenswrapper[4867]: I0126 11:37:46.389644 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="01f2f326-18ee-4ee2-823b-09ccf4cfefc1" containerName="ironic-api" Jan 26 11:37:46 crc kubenswrapper[4867]: I0126 11:37:46.389662 4867 
memory_manager.go:354] "RemoveStaleState removing state" podUID="01f2f326-18ee-4ee2-823b-09ccf4cfefc1" containerName="ironic-api-log" Jan 26 11:37:46 crc kubenswrapper[4867]: I0126 11:37:46.389676 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="f576d352-22e9-427b-a2d1-81bff0a85eb1" containerName="mariadb-database-create" Jan 26 11:37:46 crc kubenswrapper[4867]: I0126 11:37:46.389682 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="01f2f326-18ee-4ee2-823b-09ccf4cfefc1" containerName="ironic-api" Jan 26 11:37:46 crc kubenswrapper[4867]: I0126 11:37:46.389692 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="0bca6d10-3712-4078-885f-ff14590bbbe8" containerName="mariadb-database-create" Jan 26 11:37:46 crc kubenswrapper[4867]: I0126 11:37:46.389702 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="10fb1d1b-a85c-4eb8-a5ae-04d49b5ef7af" containerName="mariadb-account-create-update" Jan 26 11:37:46 crc kubenswrapper[4867]: I0126 11:37:46.390456 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-lh5xw" Jan 26 11:37:46 crc kubenswrapper[4867]: I0126 11:37:46.409341 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-scripts" Jan 26 11:37:46 crc kubenswrapper[4867]: I0126 11:37:46.409642 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Jan 26 11:37:46 crc kubenswrapper[4867]: I0126 11:37:46.410643 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-sj95f" Jan 26 11:37:46 crc kubenswrapper[4867]: I0126 11:37:46.421740 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-lh5xw"] Jan 26 11:37:46 crc kubenswrapper[4867]: I0126 11:37:46.440543 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/39653949-816a-4237-91ab-e0a3cbdc1ff9-scripts\") pod \"nova-cell0-conductor-db-sync-lh5xw\" (UID: \"39653949-816a-4237-91ab-e0a3cbdc1ff9\") " pod="openstack/nova-cell0-conductor-db-sync-lh5xw" Jan 26 11:37:46 crc kubenswrapper[4867]: I0126 11:37:46.440632 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4whpv\" (UniqueName: \"kubernetes.io/projected/39653949-816a-4237-91ab-e0a3cbdc1ff9-kube-api-access-4whpv\") pod \"nova-cell0-conductor-db-sync-lh5xw\" (UID: \"39653949-816a-4237-91ab-e0a3cbdc1ff9\") " pod="openstack/nova-cell0-conductor-db-sync-lh5xw" Jan 26 11:37:46 crc kubenswrapper[4867]: I0126 11:37:46.440732 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/39653949-816a-4237-91ab-e0a3cbdc1ff9-config-data\") pod \"nova-cell0-conductor-db-sync-lh5xw\" (UID: \"39653949-816a-4237-91ab-e0a3cbdc1ff9\") " 
pod="openstack/nova-cell0-conductor-db-sync-lh5xw" Jan 26 11:37:46 crc kubenswrapper[4867]: I0126 11:37:46.440753 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/39653949-816a-4237-91ab-e0a3cbdc1ff9-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-lh5xw\" (UID: \"39653949-816a-4237-91ab-e0a3cbdc1ff9\") " pod="openstack/nova-cell0-conductor-db-sync-lh5xw" Jan 26 11:37:46 crc kubenswrapper[4867]: I0126 11:37:46.544132 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/39653949-816a-4237-91ab-e0a3cbdc1ff9-scripts\") pod \"nova-cell0-conductor-db-sync-lh5xw\" (UID: \"39653949-816a-4237-91ab-e0a3cbdc1ff9\") " pod="openstack/nova-cell0-conductor-db-sync-lh5xw" Jan 26 11:37:46 crc kubenswrapper[4867]: I0126 11:37:46.544202 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4whpv\" (UniqueName: \"kubernetes.io/projected/39653949-816a-4237-91ab-e0a3cbdc1ff9-kube-api-access-4whpv\") pod \"nova-cell0-conductor-db-sync-lh5xw\" (UID: \"39653949-816a-4237-91ab-e0a3cbdc1ff9\") " pod="openstack/nova-cell0-conductor-db-sync-lh5xw" Jan 26 11:37:46 crc kubenswrapper[4867]: I0126 11:37:46.544291 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/39653949-816a-4237-91ab-e0a3cbdc1ff9-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-lh5xw\" (UID: \"39653949-816a-4237-91ab-e0a3cbdc1ff9\") " pod="openstack/nova-cell0-conductor-db-sync-lh5xw" Jan 26 11:37:46 crc kubenswrapper[4867]: I0126 11:37:46.544313 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/39653949-816a-4237-91ab-e0a3cbdc1ff9-config-data\") pod \"nova-cell0-conductor-db-sync-lh5xw\" (UID: 
\"39653949-816a-4237-91ab-e0a3cbdc1ff9\") " pod="openstack/nova-cell0-conductor-db-sync-lh5xw" Jan 26 11:37:46 crc kubenswrapper[4867]: I0126 11:37:46.551939 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/39653949-816a-4237-91ab-e0a3cbdc1ff9-scripts\") pod \"nova-cell0-conductor-db-sync-lh5xw\" (UID: \"39653949-816a-4237-91ab-e0a3cbdc1ff9\") " pod="openstack/nova-cell0-conductor-db-sync-lh5xw" Jan 26 11:37:46 crc kubenswrapper[4867]: I0126 11:37:46.553212 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/39653949-816a-4237-91ab-e0a3cbdc1ff9-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-lh5xw\" (UID: \"39653949-816a-4237-91ab-e0a3cbdc1ff9\") " pod="openstack/nova-cell0-conductor-db-sync-lh5xw" Jan 26 11:37:46 crc kubenswrapper[4867]: I0126 11:37:46.555382 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/39653949-816a-4237-91ab-e0a3cbdc1ff9-config-data\") pod \"nova-cell0-conductor-db-sync-lh5xw\" (UID: \"39653949-816a-4237-91ab-e0a3cbdc1ff9\") " pod="openstack/nova-cell0-conductor-db-sync-lh5xw" Jan 26 11:37:46 crc kubenswrapper[4867]: I0126 11:37:46.567484 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4whpv\" (UniqueName: \"kubernetes.io/projected/39653949-816a-4237-91ab-e0a3cbdc1ff9-kube-api-access-4whpv\") pod \"nova-cell0-conductor-db-sync-lh5xw\" (UID: \"39653949-816a-4237-91ab-e0a3cbdc1ff9\") " pod="openstack/nova-cell0-conductor-db-sync-lh5xw" Jan 26 11:37:46 crc kubenswrapper[4867]: I0126 11:37:46.584091 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01f2f326-18ee-4ee2-823b-09ccf4cfefc1" path="/var/lib/kubelet/pods/01f2f326-18ee-4ee2-823b-09ccf4cfefc1/volumes" Jan 26 11:37:46 crc kubenswrapper[4867]: E0126 11:37:46.644245 4867 
cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda2167905_2856_4125_81fd_a2430fe558f9.slice/crio-conmon-1a4d115ab295e9eb8edfb2102fe14586cbb812f3d1c01ec1525e6027a548e3ec.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda2167905_2856_4125_81fd_a2430fe558f9.slice/crio-1a4d115ab295e9eb8edfb2102fe14586cbb812f3d1c01ec1525e6027a548e3ec.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod90d02b67_bed1_4363_b9a0_e89a8733149b.slice/crio-5d4242ed5c0a0c3c38c554ace36cf18dad78771d7faee45cf73da6fe854c94bd.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod42c3a4bd_76c9_4a0b_b252_775ce7ebc2aa.slice/crio-conmon-e845a45e251d9a54a725547beeb75eb2ecc8c510a9f1dd145012f4ff427bd21b.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod42c3a4bd_76c9_4a0b_b252_775ce7ebc2aa.slice/crio-conmon-4f6c92f19483e300995347185d9e8324fad1c1ce13368cce52ea9815196b3c31.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod42c3a4bd_76c9_4a0b_b252_775ce7ebc2aa.slice/crio-e845a45e251d9a54a725547beeb75eb2ecc8c510a9f1dd145012f4ff427bd21b.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod90d02b67_bed1_4363_b9a0_e89a8733149b.slice/crio-conmon-5d4242ed5c0a0c3c38c554ace36cf18dad78771d7faee45cf73da6fe854c94bd.scope\": RecentStats: unable to find data in memory cache], 
[\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod42c3a4bd_76c9_4a0b_b252_775ce7ebc2aa.slice/crio-4f6c92f19483e300995347185d9e8324fad1c1ce13368cce52ea9815196b3c31.scope\": RecentStats: unable to find data in memory cache]" Jan 26 11:37:46 crc kubenswrapper[4867]: I0126 11:37:46.756960 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-lh5xw" Jan 26 11:37:46 crc kubenswrapper[4867]: I0126 11:37:46.908071 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Jan 26 11:37:46 crc kubenswrapper[4867]: I0126 11:37:46.934379 4867 generic.go:334] "Generic (PLEG): container finished" podID="42c3a4bd-76c9-4a0b-b252-775ce7ebc2aa" containerID="4f6c92f19483e300995347185d9e8324fad1c1ce13368cce52ea9815196b3c31" exitCode=0 Jan 26 11:37:46 crc kubenswrapper[4867]: I0126 11:37:46.934454 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7bc467f664-6zfb4" event={"ID":"42c3a4bd-76c9-4a0b-b252-775ce7ebc2aa","Type":"ContainerDied","Data":"4f6c92f19483e300995347185d9e8324fad1c1ce13368cce52ea9815196b3c31"} Jan 26 11:37:46 crc kubenswrapper[4867]: I0126 11:37:46.938770 4867 generic.go:334] "Generic (PLEG): container finished" podID="90d02b67-bed1-4363-b9a0-e89a8733149b" containerID="5d4242ed5c0a0c3c38c554ace36cf18dad78771d7faee45cf73da6fe854c94bd" exitCode=137 Jan 26 11:37:46 crc kubenswrapper[4867]: I0126 11:37:46.938863 4867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Jan 26 11:37:46 crc kubenswrapper[4867]: I0126 11:37:46.938893 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"90d02b67-bed1-4363-b9a0-e89a8733149b","Type":"ContainerDied","Data":"5d4242ed5c0a0c3c38c554ace36cf18dad78771d7faee45cf73da6fe854c94bd"} Jan 26 11:37:46 crc kubenswrapper[4867]: I0126 11:37:46.938926 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"90d02b67-bed1-4363-b9a0-e89a8733149b","Type":"ContainerDied","Data":"ab462e1f969463bdefcbfc9781df2637c7bb65117875fa84254f3362fdd22c0c"} Jan 26 11:37:46 crc kubenswrapper[4867]: I0126 11:37:46.938948 4867 scope.go:117] "RemoveContainer" containerID="5d4242ed5c0a0c3c38c554ace36cf18dad78771d7faee45cf73da6fe854c94bd" Jan 26 11:37:46 crc kubenswrapper[4867]: I0126 11:37:46.948914 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2f4c7973-1227-4188-8be0-766b1fdcd108","Type":"ContainerStarted","Data":"9a215babac91b10ca8a37abe89cf9202617c9a3aa29c51b4eb2c5c6ab3de9118"} Jan 26 11:37:46 crc kubenswrapper[4867]: I0126 11:37:46.969022 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/90d02b67-bed1-4363-b9a0-e89a8733149b-scripts\") pod \"90d02b67-bed1-4363-b9a0-e89a8733149b\" (UID: \"90d02b67-bed1-4363-b9a0-e89a8733149b\") " Jan 26 11:37:46 crc kubenswrapper[4867]: I0126 11:37:46.969132 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/90d02b67-bed1-4363-b9a0-e89a8733149b-config-data-custom\") pod \"90d02b67-bed1-4363-b9a0-e89a8733149b\" (UID: \"90d02b67-bed1-4363-b9a0-e89a8733149b\") " Jan 26 11:37:46 crc kubenswrapper[4867]: I0126 11:37:46.969166 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: 
\"kubernetes.io/host-path/90d02b67-bed1-4363-b9a0-e89a8733149b-etc-machine-id\") pod \"90d02b67-bed1-4363-b9a0-e89a8733149b\" (UID: \"90d02b67-bed1-4363-b9a0-e89a8733149b\") " Jan 26 11:37:46 crc kubenswrapper[4867]: I0126 11:37:46.969266 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/90d02b67-bed1-4363-b9a0-e89a8733149b-combined-ca-bundle\") pod \"90d02b67-bed1-4363-b9a0-e89a8733149b\" (UID: \"90d02b67-bed1-4363-b9a0-e89a8733149b\") " Jan 26 11:37:46 crc kubenswrapper[4867]: I0126 11:37:46.969383 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/90d02b67-bed1-4363-b9a0-e89a8733149b-config-data\") pod \"90d02b67-bed1-4363-b9a0-e89a8733149b\" (UID: \"90d02b67-bed1-4363-b9a0-e89a8733149b\") " Jan 26 11:37:46 crc kubenswrapper[4867]: I0126 11:37:46.969420 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-825g9\" (UniqueName: \"kubernetes.io/projected/90d02b67-bed1-4363-b9a0-e89a8733149b-kube-api-access-825g9\") pod \"90d02b67-bed1-4363-b9a0-e89a8733149b\" (UID: \"90d02b67-bed1-4363-b9a0-e89a8733149b\") " Jan 26 11:37:46 crc kubenswrapper[4867]: I0126 11:37:46.969535 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/90d02b67-bed1-4363-b9a0-e89a8733149b-logs\") pod \"90d02b67-bed1-4363-b9a0-e89a8733149b\" (UID: \"90d02b67-bed1-4363-b9a0-e89a8733149b\") " Jan 26 11:37:46 crc kubenswrapper[4867]: I0126 11:37:46.969663 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/90d02b67-bed1-4363-b9a0-e89a8733149b-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "90d02b67-bed1-4363-b9a0-e89a8733149b" (UID: "90d02b67-bed1-4363-b9a0-e89a8733149b"). InnerVolumeSpecName "etc-machine-id". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 11:37:46 crc kubenswrapper[4867]: I0126 11:37:46.970388 4867 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/90d02b67-bed1-4363-b9a0-e89a8733149b-etc-machine-id\") on node \"crc\" DevicePath \"\"" Jan 26 11:37:46 crc kubenswrapper[4867]: I0126 11:37:46.970782 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/90d02b67-bed1-4363-b9a0-e89a8733149b-logs" (OuterVolumeSpecName: "logs") pod "90d02b67-bed1-4363-b9a0-e89a8733149b" (UID: "90d02b67-bed1-4363-b9a0-e89a8733149b"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 11:37:46 crc kubenswrapper[4867]: I0126 11:37:46.983833 4867 scope.go:117] "RemoveContainer" containerID="5de9491d2ebac6bd06a9f15cc99fcccf0dcad5710151d090cead5941c02c7a9b" Jan 26 11:37:47 crc kubenswrapper[4867]: I0126 11:37:47.003013 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/90d02b67-bed1-4363-b9a0-e89a8733149b-kube-api-access-825g9" (OuterVolumeSpecName: "kube-api-access-825g9") pod "90d02b67-bed1-4363-b9a0-e89a8733149b" (UID: "90d02b67-bed1-4363-b9a0-e89a8733149b"). InnerVolumeSpecName "kube-api-access-825g9". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 11:37:47 crc kubenswrapper[4867]: I0126 11:37:47.007326 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/90d02b67-bed1-4363-b9a0-e89a8733149b-scripts" (OuterVolumeSpecName: "scripts") pod "90d02b67-bed1-4363-b9a0-e89a8733149b" (UID: "90d02b67-bed1-4363-b9a0-e89a8733149b"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 11:37:47 crc kubenswrapper[4867]: I0126 11:37:47.021814 4867 scope.go:117] "RemoveContainer" containerID="5d4242ed5c0a0c3c38c554ace36cf18dad78771d7faee45cf73da6fe854c94bd" Jan 26 11:37:47 crc kubenswrapper[4867]: E0126 11:37:47.026325 4867 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5d4242ed5c0a0c3c38c554ace36cf18dad78771d7faee45cf73da6fe854c94bd\": container with ID starting with 5d4242ed5c0a0c3c38c554ace36cf18dad78771d7faee45cf73da6fe854c94bd not found: ID does not exist" containerID="5d4242ed5c0a0c3c38c554ace36cf18dad78771d7faee45cf73da6fe854c94bd" Jan 26 11:37:47 crc kubenswrapper[4867]: I0126 11:37:47.026442 4867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5d4242ed5c0a0c3c38c554ace36cf18dad78771d7faee45cf73da6fe854c94bd"} err="failed to get container status \"5d4242ed5c0a0c3c38c554ace36cf18dad78771d7faee45cf73da6fe854c94bd\": rpc error: code = NotFound desc = could not find container \"5d4242ed5c0a0c3c38c554ace36cf18dad78771d7faee45cf73da6fe854c94bd\": container with ID starting with 5d4242ed5c0a0c3c38c554ace36cf18dad78771d7faee45cf73da6fe854c94bd not found: ID does not exist" Jan 26 11:37:47 crc kubenswrapper[4867]: I0126 11:37:47.026496 4867 scope.go:117] "RemoveContainer" containerID="5de9491d2ebac6bd06a9f15cc99fcccf0dcad5710151d090cead5941c02c7a9b" Jan 26 11:37:47 crc kubenswrapper[4867]: E0126 11:37:47.026910 4867 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5de9491d2ebac6bd06a9f15cc99fcccf0dcad5710151d090cead5941c02c7a9b\": container with ID starting with 5de9491d2ebac6bd06a9f15cc99fcccf0dcad5710151d090cead5941c02c7a9b not found: ID does not exist" containerID="5de9491d2ebac6bd06a9f15cc99fcccf0dcad5710151d090cead5941c02c7a9b" Jan 26 11:37:47 crc kubenswrapper[4867]: I0126 11:37:47.026931 
4867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5de9491d2ebac6bd06a9f15cc99fcccf0dcad5710151d090cead5941c02c7a9b"} err="failed to get container status \"5de9491d2ebac6bd06a9f15cc99fcccf0dcad5710151d090cead5941c02c7a9b\": rpc error: code = NotFound desc = could not find container \"5de9491d2ebac6bd06a9f15cc99fcccf0dcad5710151d090cead5941c02c7a9b\": container with ID starting with 5de9491d2ebac6bd06a9f15cc99fcccf0dcad5710151d090cead5941c02c7a9b not found: ID does not exist" Jan 26 11:37:47 crc kubenswrapper[4867]: I0126 11:37:47.028114 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-7bc467f664-6zfb4" Jan 26 11:37:47 crc kubenswrapper[4867]: I0126 11:37:47.044403 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/90d02b67-bed1-4363-b9a0-e89a8733149b-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "90d02b67-bed1-4363-b9a0-e89a8733149b" (UID: "90d02b67-bed1-4363-b9a0-e89a8733149b"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 11:37:47 crc kubenswrapper[4867]: I0126 11:37:47.058559 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/90d02b67-bed1-4363-b9a0-e89a8733149b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "90d02b67-bed1-4363-b9a0-e89a8733149b" (UID: "90d02b67-bed1-4363-b9a0-e89a8733149b"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 11:37:47 crc kubenswrapper[4867]: I0126 11:37:47.082575 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/42c3a4bd-76c9-4a0b-b252-775ce7ebc2aa-combined-ca-bundle\") pod \"42c3a4bd-76c9-4a0b-b252-775ce7ebc2aa\" (UID: \"42c3a4bd-76c9-4a0b-b252-775ce7ebc2aa\") " Jan 26 11:37:47 crc kubenswrapper[4867]: I0126 11:37:47.082864 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m6zd8\" (UniqueName: \"kubernetes.io/projected/42c3a4bd-76c9-4a0b-b252-775ce7ebc2aa-kube-api-access-m6zd8\") pod \"42c3a4bd-76c9-4a0b-b252-775ce7ebc2aa\" (UID: \"42c3a4bd-76c9-4a0b-b252-775ce7ebc2aa\") " Jan 26 11:37:47 crc kubenswrapper[4867]: I0126 11:37:47.083065 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/42c3a4bd-76c9-4a0b-b252-775ce7ebc2aa-config\") pod \"42c3a4bd-76c9-4a0b-b252-775ce7ebc2aa\" (UID: \"42c3a4bd-76c9-4a0b-b252-775ce7ebc2aa\") " Jan 26 11:37:47 crc kubenswrapper[4867]: I0126 11:37:47.083254 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/42c3a4bd-76c9-4a0b-b252-775ce7ebc2aa-httpd-config\") pod \"42c3a4bd-76c9-4a0b-b252-775ce7ebc2aa\" (UID: \"42c3a4bd-76c9-4a0b-b252-775ce7ebc2aa\") " Jan 26 11:37:47 crc kubenswrapper[4867]: I0126 11:37:47.083441 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/42c3a4bd-76c9-4a0b-b252-775ce7ebc2aa-ovndb-tls-certs\") pod \"42c3a4bd-76c9-4a0b-b252-775ce7ebc2aa\" (UID: \"42c3a4bd-76c9-4a0b-b252-775ce7ebc2aa\") " Jan 26 11:37:47 crc kubenswrapper[4867]: I0126 11:37:47.084148 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-825g9\" (UniqueName: 
\"kubernetes.io/projected/90d02b67-bed1-4363-b9a0-e89a8733149b-kube-api-access-825g9\") on node \"crc\" DevicePath \"\"" Jan 26 11:37:47 crc kubenswrapper[4867]: I0126 11:37:47.084486 4867 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/90d02b67-bed1-4363-b9a0-e89a8733149b-logs\") on node \"crc\" DevicePath \"\"" Jan 26 11:37:47 crc kubenswrapper[4867]: I0126 11:37:47.084576 4867 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/90d02b67-bed1-4363-b9a0-e89a8733149b-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 11:37:47 crc kubenswrapper[4867]: I0126 11:37:47.084657 4867 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/90d02b67-bed1-4363-b9a0-e89a8733149b-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 26 11:37:47 crc kubenswrapper[4867]: I0126 11:37:47.084749 4867 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/90d02b67-bed1-4363-b9a0-e89a8733149b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 11:37:47 crc kubenswrapper[4867]: I0126 11:37:47.094105 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/42c3a4bd-76c9-4a0b-b252-775ce7ebc2aa-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "42c3a4bd-76c9-4a0b-b252-775ce7ebc2aa" (UID: "42c3a4bd-76c9-4a0b-b252-775ce7ebc2aa"). InnerVolumeSpecName "httpd-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 11:37:47 crc kubenswrapper[4867]: I0126 11:37:47.102963 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/42c3a4bd-76c9-4a0b-b252-775ce7ebc2aa-kube-api-access-m6zd8" (OuterVolumeSpecName: "kube-api-access-m6zd8") pod "42c3a4bd-76c9-4a0b-b252-775ce7ebc2aa" (UID: "42c3a4bd-76c9-4a0b-b252-775ce7ebc2aa"). 
InnerVolumeSpecName "kube-api-access-m6zd8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 11:37:47 crc kubenswrapper[4867]: I0126 11:37:47.112656 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Jan 26 11:37:47 crc kubenswrapper[4867]: I0126 11:37:47.168332 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/90d02b67-bed1-4363-b9a0-e89a8733149b-config-data" (OuterVolumeSpecName: "config-data") pod "90d02b67-bed1-4363-b9a0-e89a8733149b" (UID: "90d02b67-bed1-4363-b9a0-e89a8733149b"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 11:37:47 crc kubenswrapper[4867]: I0126 11:37:47.191939 4867 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/42c3a4bd-76c9-4a0b-b252-775ce7ebc2aa-httpd-config\") on node \"crc\" DevicePath \"\"" Jan 26 11:37:47 crc kubenswrapper[4867]: I0126 11:37:47.192289 4867 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/90d02b67-bed1-4363-b9a0-e89a8733149b-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 11:37:47 crc kubenswrapper[4867]: I0126 11:37:47.192390 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m6zd8\" (UniqueName: \"kubernetes.io/projected/42c3a4bd-76c9-4a0b-b252-775ce7ebc2aa-kube-api-access-m6zd8\") on node \"crc\" DevicePath \"\"" Jan 26 11:37:47 crc kubenswrapper[4867]: I0126 11:37:47.198393 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/42c3a4bd-76c9-4a0b-b252-775ce7ebc2aa-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "42c3a4bd-76c9-4a0b-b252-775ce7ebc2aa" (UID: "42c3a4bd-76c9-4a0b-b252-775ce7ebc2aa"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 11:37:47 crc kubenswrapper[4867]: I0126 11:37:47.215649 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/42c3a4bd-76c9-4a0b-b252-775ce7ebc2aa-config" (OuterVolumeSpecName: "config") pod "42c3a4bd-76c9-4a0b-b252-775ce7ebc2aa" (UID: "42c3a4bd-76c9-4a0b-b252-775ce7ebc2aa"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 11:37:47 crc kubenswrapper[4867]: I0126 11:37:47.223104 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/42c3a4bd-76c9-4a0b-b252-775ce7ebc2aa-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "42c3a4bd-76c9-4a0b-b252-775ce7ebc2aa" (UID: "42c3a4bd-76c9-4a0b-b252-775ce7ebc2aa"). InnerVolumeSpecName "ovndb-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 11:37:47 crc kubenswrapper[4867]: I0126 11:37:47.223822 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Jan 26 11:37:47 crc kubenswrapper[4867]: I0126 11:37:47.223861 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Jan 26 11:37:47 crc kubenswrapper[4867]: I0126 11:37:47.271800 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Jan 26 11:37:47 crc kubenswrapper[4867]: I0126 11:37:47.287304 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-lh5xw"] Jan 26 11:37:47 crc kubenswrapper[4867]: I0126 11:37:47.287443 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Jan 26 11:37:47 crc kubenswrapper[4867]: I0126 11:37:47.298395 4867 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: 
\"kubernetes.io/secret/42c3a4bd-76c9-4a0b-b252-775ce7ebc2aa-config\") on node \"crc\" DevicePath \"\"" Jan 26 11:37:47 crc kubenswrapper[4867]: I0126 11:37:47.298677 4867 reconciler_common.go:293] "Volume detached for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/42c3a4bd-76c9-4a0b-b252-775ce7ebc2aa-ovndb-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 26 11:37:47 crc kubenswrapper[4867]: I0126 11:37:47.298822 4867 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/42c3a4bd-76c9-4a0b-b252-775ce7ebc2aa-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 11:37:47 crc kubenswrapper[4867]: I0126 11:37:47.383592 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Jan 26 11:37:47 crc kubenswrapper[4867]: I0126 11:37:47.383960 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Jan 26 11:37:47 crc kubenswrapper[4867]: I0126 11:37:47.409271 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Jan 26 11:37:47 crc kubenswrapper[4867]: I0126 11:37:47.421453 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-api-0"] Jan 26 11:37:47 crc kubenswrapper[4867]: I0126 11:37:47.431131 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Jan 26 11:37:47 crc kubenswrapper[4867]: E0126 11:37:47.431729 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="42c3a4bd-76c9-4a0b-b252-775ce7ebc2aa" containerName="neutron-api" Jan 26 11:37:47 crc kubenswrapper[4867]: I0126 11:37:47.431822 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="42c3a4bd-76c9-4a0b-b252-775ce7ebc2aa" containerName="neutron-api" Jan 26 11:37:47 crc kubenswrapper[4867]: E0126 11:37:47.431896 4867 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="42c3a4bd-76c9-4a0b-b252-775ce7ebc2aa" containerName="neutron-httpd" Jan 26 11:37:47 crc kubenswrapper[4867]: I0126 11:37:47.431947 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="42c3a4bd-76c9-4a0b-b252-775ce7ebc2aa" containerName="neutron-httpd" Jan 26 11:37:47 crc kubenswrapper[4867]: E0126 11:37:47.432013 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="90d02b67-bed1-4363-b9a0-e89a8733149b" containerName="cinder-api-log" Jan 26 11:37:47 crc kubenswrapper[4867]: I0126 11:37:47.432062 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="90d02b67-bed1-4363-b9a0-e89a8733149b" containerName="cinder-api-log" Jan 26 11:37:47 crc kubenswrapper[4867]: E0126 11:37:47.432116 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="90d02b67-bed1-4363-b9a0-e89a8733149b" containerName="cinder-api" Jan 26 11:37:47 crc kubenswrapper[4867]: I0126 11:37:47.432166 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="90d02b67-bed1-4363-b9a0-e89a8733149b" containerName="cinder-api" Jan 26 11:37:47 crc kubenswrapper[4867]: I0126 11:37:47.432911 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="42c3a4bd-76c9-4a0b-b252-775ce7ebc2aa" containerName="neutron-api" Jan 26 11:37:47 crc kubenswrapper[4867]: I0126 11:37:47.433007 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="42c3a4bd-76c9-4a0b-b252-775ce7ebc2aa" containerName="neutron-httpd" Jan 26 11:37:47 crc kubenswrapper[4867]: I0126 11:37:47.433091 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="90d02b67-bed1-4363-b9a0-e89a8733149b" containerName="cinder-api" Jan 26 11:37:47 crc kubenswrapper[4867]: I0126 11:37:47.433166 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="90d02b67-bed1-4363-b9a0-e89a8733149b" containerName="cinder-api-log" Jan 26 11:37:47 crc kubenswrapper[4867]: I0126 11:37:47.434723 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Jan 26 11:37:47 crc kubenswrapper[4867]: I0126 11:37:47.442634 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-internal-svc" Jan 26 11:37:47 crc kubenswrapper[4867]: I0126 11:37:47.443559 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-public-svc" Jan 26 11:37:47 crc kubenswrapper[4867]: I0126 11:37:47.443711 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Jan 26 11:37:47 crc kubenswrapper[4867]: I0126 11:37:47.448888 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Jan 26 11:37:47 crc kubenswrapper[4867]: I0126 11:37:47.462644 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Jan 26 11:37:47 crc kubenswrapper[4867]: I0126 11:37:47.508658 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Jan 26 11:37:47 crc kubenswrapper[4867]: I0126 11:37:47.511741 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/4a9a8906-54d6-49c2-94c7-393167d8db56-etc-machine-id\") pod \"cinder-api-0\" (UID: \"4a9a8906-54d6-49c2-94c7-393167d8db56\") " pod="openstack/cinder-api-0" Jan 26 11:37:47 crc kubenswrapper[4867]: I0126 11:37:47.512993 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4a9a8906-54d6-49c2-94c7-393167d8db56-config-data\") pod \"cinder-api-0\" (UID: \"4a9a8906-54d6-49c2-94c7-393167d8db56\") " pod="openstack/cinder-api-0" Jan 26 11:37:47 crc kubenswrapper[4867]: I0126 11:37:47.513232 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" 
(UniqueName: \"kubernetes.io/secret/4a9a8906-54d6-49c2-94c7-393167d8db56-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"4a9a8906-54d6-49c2-94c7-393167d8db56\") " pod="openstack/cinder-api-0" Jan 26 11:37:47 crc kubenswrapper[4867]: I0126 11:37:47.513342 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/4a9a8906-54d6-49c2-94c7-393167d8db56-public-tls-certs\") pod \"cinder-api-0\" (UID: \"4a9a8906-54d6-49c2-94c7-393167d8db56\") " pod="openstack/cinder-api-0" Jan 26 11:37:47 crc kubenswrapper[4867]: I0126 11:37:47.513536 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4a9a8906-54d6-49c2-94c7-393167d8db56-scripts\") pod \"cinder-api-0\" (UID: \"4a9a8906-54d6-49c2-94c7-393167d8db56\") " pod="openstack/cinder-api-0" Jan 26 11:37:47 crc kubenswrapper[4867]: I0126 11:37:47.513597 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/4a9a8906-54d6-49c2-94c7-393167d8db56-config-data-custom\") pod \"cinder-api-0\" (UID: \"4a9a8906-54d6-49c2-94c7-393167d8db56\") " pod="openstack/cinder-api-0" Jan 26 11:37:47 crc kubenswrapper[4867]: I0126 11:37:47.513619 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4a9a8906-54d6-49c2-94c7-393167d8db56-logs\") pod \"cinder-api-0\" (UID: \"4a9a8906-54d6-49c2-94c7-393167d8db56\") " pod="openstack/cinder-api-0" Jan 26 11:37:47 crc kubenswrapper[4867]: I0126 11:37:47.513889 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/4a9a8906-54d6-49c2-94c7-393167d8db56-internal-tls-certs\") pod \"cinder-api-0\" (UID: 
\"4a9a8906-54d6-49c2-94c7-393167d8db56\") " pod="openstack/cinder-api-0" Jan 26 11:37:47 crc kubenswrapper[4867]: I0126 11:37:47.513955 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d4x98\" (UniqueName: \"kubernetes.io/projected/4a9a8906-54d6-49c2-94c7-393167d8db56-kube-api-access-d4x98\") pod \"cinder-api-0\" (UID: \"4a9a8906-54d6-49c2-94c7-393167d8db56\") " pod="openstack/cinder-api-0" Jan 26 11:37:47 crc kubenswrapper[4867]: I0126 11:37:47.619468 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d4x98\" (UniqueName: \"kubernetes.io/projected/4a9a8906-54d6-49c2-94c7-393167d8db56-kube-api-access-d4x98\") pod \"cinder-api-0\" (UID: \"4a9a8906-54d6-49c2-94c7-393167d8db56\") " pod="openstack/cinder-api-0" Jan 26 11:37:47 crc kubenswrapper[4867]: I0126 11:37:47.619637 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/4a9a8906-54d6-49c2-94c7-393167d8db56-etc-machine-id\") pod \"cinder-api-0\" (UID: \"4a9a8906-54d6-49c2-94c7-393167d8db56\") " pod="openstack/cinder-api-0" Jan 26 11:37:47 crc kubenswrapper[4867]: I0126 11:37:47.619677 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4a9a8906-54d6-49c2-94c7-393167d8db56-config-data\") pod \"cinder-api-0\" (UID: \"4a9a8906-54d6-49c2-94c7-393167d8db56\") " pod="openstack/cinder-api-0" Jan 26 11:37:47 crc kubenswrapper[4867]: I0126 11:37:47.619719 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4a9a8906-54d6-49c2-94c7-393167d8db56-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"4a9a8906-54d6-49c2-94c7-393167d8db56\") " pod="openstack/cinder-api-0" Jan 26 11:37:47 crc kubenswrapper[4867]: I0126 11:37:47.619758 4867 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/4a9a8906-54d6-49c2-94c7-393167d8db56-public-tls-certs\") pod \"cinder-api-0\" (UID: \"4a9a8906-54d6-49c2-94c7-393167d8db56\") " pod="openstack/cinder-api-0" Jan 26 11:37:47 crc kubenswrapper[4867]: I0126 11:37:47.619775 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/4a9a8906-54d6-49c2-94c7-393167d8db56-etc-machine-id\") pod \"cinder-api-0\" (UID: \"4a9a8906-54d6-49c2-94c7-393167d8db56\") " pod="openstack/cinder-api-0" Jan 26 11:37:47 crc kubenswrapper[4867]: I0126 11:37:47.619804 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4a9a8906-54d6-49c2-94c7-393167d8db56-scripts\") pod \"cinder-api-0\" (UID: \"4a9a8906-54d6-49c2-94c7-393167d8db56\") " pod="openstack/cinder-api-0" Jan 26 11:37:47 crc kubenswrapper[4867]: I0126 11:37:47.619830 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/4a9a8906-54d6-49c2-94c7-393167d8db56-config-data-custom\") pod \"cinder-api-0\" (UID: \"4a9a8906-54d6-49c2-94c7-393167d8db56\") " pod="openstack/cinder-api-0" Jan 26 11:37:47 crc kubenswrapper[4867]: I0126 11:37:47.619847 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4a9a8906-54d6-49c2-94c7-393167d8db56-logs\") pod \"cinder-api-0\" (UID: \"4a9a8906-54d6-49c2-94c7-393167d8db56\") " pod="openstack/cinder-api-0" Jan 26 11:37:47 crc kubenswrapper[4867]: I0126 11:37:47.619902 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/4a9a8906-54d6-49c2-94c7-393167d8db56-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"4a9a8906-54d6-49c2-94c7-393167d8db56\") " 
pod="openstack/cinder-api-0" Jan 26 11:37:47 crc kubenswrapper[4867]: I0126 11:37:47.622136 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4a9a8906-54d6-49c2-94c7-393167d8db56-logs\") pod \"cinder-api-0\" (UID: \"4a9a8906-54d6-49c2-94c7-393167d8db56\") " pod="openstack/cinder-api-0" Jan 26 11:37:47 crc kubenswrapper[4867]: I0126 11:37:47.628110 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4a9a8906-54d6-49c2-94c7-393167d8db56-scripts\") pod \"cinder-api-0\" (UID: \"4a9a8906-54d6-49c2-94c7-393167d8db56\") " pod="openstack/cinder-api-0" Jan 26 11:37:47 crc kubenswrapper[4867]: I0126 11:37:47.634832 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4a9a8906-54d6-49c2-94c7-393167d8db56-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"4a9a8906-54d6-49c2-94c7-393167d8db56\") " pod="openstack/cinder-api-0" Jan 26 11:37:47 crc kubenswrapper[4867]: I0126 11:37:47.637825 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/4a9a8906-54d6-49c2-94c7-393167d8db56-public-tls-certs\") pod \"cinder-api-0\" (UID: \"4a9a8906-54d6-49c2-94c7-393167d8db56\") " pod="openstack/cinder-api-0" Jan 26 11:37:47 crc kubenswrapper[4867]: I0126 11:37:47.638276 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4a9a8906-54d6-49c2-94c7-393167d8db56-config-data\") pod \"cinder-api-0\" (UID: \"4a9a8906-54d6-49c2-94c7-393167d8db56\") " pod="openstack/cinder-api-0" Jan 26 11:37:47 crc kubenswrapper[4867]: I0126 11:37:47.638485 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d4x98\" (UniqueName: \"kubernetes.io/projected/4a9a8906-54d6-49c2-94c7-393167d8db56-kube-api-access-d4x98\") 
pod \"cinder-api-0\" (UID: \"4a9a8906-54d6-49c2-94c7-393167d8db56\") " pod="openstack/cinder-api-0" Jan 26 11:37:47 crc kubenswrapper[4867]: I0126 11:37:47.639427 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/4a9a8906-54d6-49c2-94c7-393167d8db56-config-data-custom\") pod \"cinder-api-0\" (UID: \"4a9a8906-54d6-49c2-94c7-393167d8db56\") " pod="openstack/cinder-api-0" Jan 26 11:37:47 crc kubenswrapper[4867]: I0126 11:37:47.639706 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/4a9a8906-54d6-49c2-94c7-393167d8db56-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"4a9a8906-54d6-49c2-94c7-393167d8db56\") " pod="openstack/cinder-api-0" Jan 26 11:37:47 crc kubenswrapper[4867]: I0126 11:37:47.781497 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Jan 26 11:37:47 crc kubenswrapper[4867]: I0126 11:37:47.985443 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-lh5xw" event={"ID":"39653949-816a-4237-91ab-e0a3cbdc1ff9","Type":"ContainerStarted","Data":"b37fafae3f697c5624c4abfa439b26e94e2eceeebcff9124f4ecc3f976120e8b"} Jan 26 11:37:47 crc kubenswrapper[4867]: I0126 11:37:47.998872 4867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-7bc467f664-6zfb4" Jan 26 11:37:47 crc kubenswrapper[4867]: I0126 11:37:47.998853 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7bc467f664-6zfb4" event={"ID":"42c3a4bd-76c9-4a0b-b252-775ce7ebc2aa","Type":"ContainerDied","Data":"6d090696888377934a5a07ff59f0258e1e6a8dbff8e2207b576311041a6ad02c"} Jan 26 11:37:47 crc kubenswrapper[4867]: I0126 11:37:47.998937 4867 scope.go:117] "RemoveContainer" containerID="e845a45e251d9a54a725547beeb75eb2ecc8c510a9f1dd145012f4ff427bd21b" Jan 26 11:37:48 crc kubenswrapper[4867]: I0126 11:37:48.003393 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Jan 26 11:37:48 crc kubenswrapper[4867]: I0126 11:37:48.003441 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Jan 26 11:37:48 crc kubenswrapper[4867]: I0126 11:37:48.003455 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Jan 26 11:37:48 crc kubenswrapper[4867]: I0126 11:37:48.003465 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Jan 26 11:37:48 crc kubenswrapper[4867]: I0126 11:37:48.054000 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-7bc467f664-6zfb4"] Jan 26 11:37:48 crc kubenswrapper[4867]: I0126 11:37:48.064949 4867 scope.go:117] "RemoveContainer" containerID="4f6c92f19483e300995347185d9e8324fad1c1ce13368cce52ea9815196b3c31" Jan 26 11:37:48 crc kubenswrapper[4867]: I0126 11:37:48.072777 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-7bc467f664-6zfb4"] Jan 26 11:37:48 crc kubenswrapper[4867]: I0126 11:37:48.073830 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ironic-neutron-agent-795fb7c76b-9ndwh" Jan 26 11:37:48 crc kubenswrapper[4867]: 
I0126 11:37:48.349815 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Jan 26 11:37:48 crc kubenswrapper[4867]: I0126 11:37:48.578136 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="42c3a4bd-76c9-4a0b-b252-775ce7ebc2aa" path="/var/lib/kubelet/pods/42c3a4bd-76c9-4a0b-b252-775ce7ebc2aa/volumes" Jan 26 11:37:48 crc kubenswrapper[4867]: I0126 11:37:48.578768 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="90d02b67-bed1-4363-b9a0-e89a8733149b" path="/var/lib/kubelet/pods/90d02b67-bed1-4363-b9a0-e89a8733149b/volumes" Jan 26 11:37:49 crc kubenswrapper[4867]: I0126 11:37:49.054637 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"4a9a8906-54d6-49c2-94c7-393167d8db56","Type":"ContainerStarted","Data":"3566ffe3c51cc6b808548e3829b63ed6a32a448fa12f438eebd78d4170c8663c"} Jan 26 11:37:50 crc kubenswrapper[4867]: I0126 11:37:50.070108 4867 generic.go:334] "Generic (PLEG): container finished" podID="a2167905-2856-4125-81fd-a2430fe558f9" containerID="a2e9f8e363aa32e579cf2c88675ca9cdcb820dd35c57d4c32969e5dbd147d51c" exitCode=1 Jan 26 11:37:50 crc kubenswrapper[4867]: I0126 11:37:50.070191 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-neutron-agent-795fb7c76b-9ndwh" event={"ID":"a2167905-2856-4125-81fd-a2430fe558f9","Type":"ContainerDied","Data":"a2e9f8e363aa32e579cf2c88675ca9cdcb820dd35c57d4c32969e5dbd147d51c"} Jan 26 11:37:50 crc kubenswrapper[4867]: I0126 11:37:50.070548 4867 scope.go:117] "RemoveContainer" containerID="1a4d115ab295e9eb8edfb2102fe14586cbb812f3d1c01ec1525e6027a548e3ec" Jan 26 11:37:50 crc kubenswrapper[4867]: I0126 11:37:50.071203 4867 scope.go:117] "RemoveContainer" containerID="a2e9f8e363aa32e579cf2c88675ca9cdcb820dd35c57d4c32969e5dbd147d51c" Jan 26 11:37:50 crc kubenswrapper[4867]: E0126 11:37:50.071506 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"ironic-neutron-agent\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ironic-neutron-agent pod=ironic-neutron-agent-795fb7c76b-9ndwh_openstack(a2167905-2856-4125-81fd-a2430fe558f9)\"" pod="openstack/ironic-neutron-agent-795fb7c76b-9ndwh" podUID="a2167905-2856-4125-81fd-a2430fe558f9" Jan 26 11:37:50 crc kubenswrapper[4867]: I0126 11:37:50.073211 4867 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 26 11:37:50 crc kubenswrapper[4867]: I0126 11:37:50.074356 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"4a9a8906-54d6-49c2-94c7-393167d8db56","Type":"ContainerStarted","Data":"b9e6a5eed2c57521543179b1cb1dd881fcb87303d2923571085d8b343fa6faf1"} Jan 26 11:37:50 crc kubenswrapper[4867]: I0126 11:37:50.500729 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Jan 26 11:37:50 crc kubenswrapper[4867]: I0126 11:37:50.635697 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Jan 26 11:37:51 crc kubenswrapper[4867]: I0126 11:37:51.545799 4867 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/cinder-api-0" podUID="90d02b67-bed1-4363-b9a0-e89a8733149b" containerName="cinder-api" probeResult="failure" output="Get \"http://10.217.0.165:8776/healthcheck\": dial tcp 10.217.0.165:8776: i/o timeout (Client.Timeout exceeded while awaiting headers)" Jan 26 11:37:52 crc kubenswrapper[4867]: I0126 11:37:52.070085 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Jan 26 11:37:52 crc kubenswrapper[4867]: I0126 11:37:52.730745 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Jan 26 11:37:53 crc kubenswrapper[4867]: I0126 11:37:53.027390 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack/ironic-neutron-agent-795fb7c76b-9ndwh" Jan 26 11:37:53 crc kubenswrapper[4867]: I0126 11:37:53.027439 4867 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/ironic-neutron-agent-795fb7c76b-9ndwh" Jan 26 11:37:53 crc kubenswrapper[4867]: I0126 11:37:53.028078 4867 scope.go:117] "RemoveContainer" containerID="a2e9f8e363aa32e579cf2c88675ca9cdcb820dd35c57d4c32969e5dbd147d51c" Jan 26 11:37:53 crc kubenswrapper[4867]: E0126 11:37:53.028293 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ironic-neutron-agent\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ironic-neutron-agent pod=ironic-neutron-agent-795fb7c76b-9ndwh_openstack(a2167905-2856-4125-81fd-a2430fe558f9)\"" pod="openstack/ironic-neutron-agent-795fb7c76b-9ndwh" podUID="a2167905-2856-4125-81fd-a2430fe558f9" Jan 26 11:38:04 crc kubenswrapper[4867]: E0126 11:38:04.239899 4867 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/ironic-python-agent:current-podified" Jan 26 11:38:04 crc kubenswrapper[4867]: E0126 11:38:04.240780 4867 kuberuntime_manager.go:1274] "Unhandled Error" err="init container 
&Container{Name:ironic-python-agent-init,Image:quay.io/podified-antelope-centos9/ironic-python-agent:current-podified,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DEST_DIR,Value:/var/lib/ironic/httpboot,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/config-data/default,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data-merged,ReadOnly:false,MountPath:/var/lib/config-data/merged,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:etc-podinfo,ReadOnly:false,MountPath:/etc/podinfo,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:var-lib-ironic,ReadOnly:false,MountPath:/var/lib/ironic,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data-custom,ReadOnly:true,MountPath:/var/lib/config-data/custom,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-cmfjn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfil
e:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ironic-conductor-0_openstack(1a985fff-3d59-40fa-9cae-fd0f2cc9de70): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 26 11:38:04 crc kubenswrapper[4867]: E0126 11:38:04.241866 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ironic-python-agent-init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/ironic-conductor-0" podUID="1a985fff-3d59-40fa-9cae-fd0f2cc9de70" Jan 26 11:38:06 crc kubenswrapper[4867]: I0126 11:38:06.564480 4867 scope.go:117] "RemoveContainer" containerID="a2e9f8e363aa32e579cf2c88675ca9cdcb820dd35c57d4c32969e5dbd147d51c" Jan 26 11:38:07 crc kubenswrapper[4867]: E0126 11:38:07.102025 4867 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = reading blob sha256:9fd83ee511d8c43a61c2c78acf4f32b4149c0106c1613d702ef84f1de74201c4: Get \"https://quay.io/v2/podified-antelope-centos9/openstack-nova-conductor/blobs/sha256:9fd83ee511d8c43a61c2c78acf4f32b4149c0106c1613d702ef84f1de74201c4\": context canceled" image="quay.io/podified-antelope-centos9/openstack-nova-conductor:current-podified" Jan 26 11:38:07 crc kubenswrapper[4867]: E0126 11:38:07.102512 4867 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:nova-cell0-conductor-db-sync,Image:quay.io/podified-antelope-centos9/openstack-nova-conductor:current-podified,Command:[/bin/bash],Args:[-c 
/usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CELL_NAME,Value:cell0,ValueFrom:nil,},EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:false,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:scripts,ReadOnly:false,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:false,MountPath:/var/lib/kolla/config_files/config.json,SubPath:nova-conductor-dbsync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-4whpv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42436,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod nova-cell0-conductor-db-sync-lh5xw_openstack(39653949-816a-4237-91ab-e0a3cbdc1ff9): ErrImagePull: rpc error: code = 
Canceled desc = reading blob sha256:9fd83ee511d8c43a61c2c78acf4f32b4149c0106c1613d702ef84f1de74201c4: Get \"https://quay.io/v2/podified-antelope-centos9/openstack-nova-conductor/blobs/sha256:9fd83ee511d8c43a61c2c78acf4f32b4149c0106c1613d702ef84f1de74201c4\": context canceled" logger="UnhandledError" Jan 26 11:38:07 crc kubenswrapper[4867]: E0126 11:38:07.103599 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nova-cell0-conductor-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = reading blob sha256:9fd83ee511d8c43a61c2c78acf4f32b4149c0106c1613d702ef84f1de74201c4: Get \\\"https://quay.io/v2/podified-antelope-centos9/openstack-nova-conductor/blobs/sha256:9fd83ee511d8c43a61c2c78acf4f32b4149c0106c1613d702ef84f1de74201c4\\\": context canceled\"" pod="openstack/nova-cell0-conductor-db-sync-lh5xw" podUID="39653949-816a-4237-91ab-e0a3cbdc1ff9" Jan 26 11:38:07 crc kubenswrapper[4867]: I0126 11:38:07.263185 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-neutron-agent-795fb7c76b-9ndwh" event={"ID":"a2167905-2856-4125-81fd-a2430fe558f9","Type":"ContainerStarted","Data":"ef9186f8842dd4e425fbd0521e208fbd9f96c9e93772b40398c778d1e633ecb6"} Jan 26 11:38:07 crc kubenswrapper[4867]: I0126 11:38:07.263767 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ironic-neutron-agent-795fb7c76b-9ndwh" Jan 26 11:38:07 crc kubenswrapper[4867]: I0126 11:38:07.274858 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-inspector-db-sync-256sm" event={"ID":"586082ca-8462-421f-940d-25a9e1a9e945","Type":"ContainerStarted","Data":"ed359852da0c074cd41d462bd95988d2343c54a093dd140c08dae814d734a4ee"} Jan 26 11:38:07 crc kubenswrapper[4867]: I0126 11:38:07.289967 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="2f4c7973-1227-4188-8be0-766b1fdcd108" containerName="ceilometer-central-agent" 
containerID="cri-o://28b16594aebd74b931d1a3afbc9fe1493c2ecc892d0584d131a74abfb243380e" gracePeriod=30 Jan 26 11:38:07 crc kubenswrapper[4867]: I0126 11:38:07.290259 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2f4c7973-1227-4188-8be0-766b1fdcd108","Type":"ContainerStarted","Data":"26e9e451ae91f4253df1646fe4946e41b8df8c49b705fbe25320970913bf42f2"} Jan 26 11:38:07 crc kubenswrapper[4867]: I0126 11:38:07.290274 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="2f4c7973-1227-4188-8be0-766b1fdcd108" containerName="sg-core" containerID="cri-o://9a215babac91b10ca8a37abe89cf9202617c9a3aa29c51b4eb2c5c6ab3de9118" gracePeriod=30 Jan 26 11:38:07 crc kubenswrapper[4867]: I0126 11:38:07.290402 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="2f4c7973-1227-4188-8be0-766b1fdcd108" containerName="proxy-httpd" containerID="cri-o://26e9e451ae91f4253df1646fe4946e41b8df8c49b705fbe25320970913bf42f2" gracePeriod=30 Jan 26 11:38:07 crc kubenswrapper[4867]: I0126 11:38:07.290491 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 26 11:38:07 crc kubenswrapper[4867]: I0126 11:38:07.290443 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="2f4c7973-1227-4188-8be0-766b1fdcd108" containerName="ceilometer-notification-agent" containerID="cri-o://cde8955787e9969f60a51eeb221b5ee8c78568bd0a74b38e2138e607f21f48cd" gracePeriod=30 Jan 26 11:38:07 crc kubenswrapper[4867]: E0126 11:38:07.294940 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nova-cell0-conductor-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-nova-conductor:current-podified\\\"\"" pod="openstack/nova-cell0-conductor-db-sync-lh5xw" 
podUID="39653949-816a-4237-91ab-e0a3cbdc1ff9" Jan 26 11:38:07 crc kubenswrapper[4867]: I0126 11:38:07.308186 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ironic-inspector-db-sync-256sm" podStartSLOduration=3.403704507 podStartE2EDuration="25.308147667s" podCreationTimestamp="2026-01-26 11:37:42 +0000 UTC" firstStartedPulling="2026-01-26 11:37:44.947687409 +0000 UTC m=+1214.646262319" lastFinishedPulling="2026-01-26 11:38:06.852130569 +0000 UTC m=+1236.550705479" observedRunningTime="2026-01-26 11:38:07.302525707 +0000 UTC m=+1237.001100617" watchObservedRunningTime="2026-01-26 11:38:07.308147667 +0000 UTC m=+1237.006722577" Jan 26 11:38:07 crc kubenswrapper[4867]: I0126 11:38:07.339929 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=18.620361127 podStartE2EDuration="31.339911795s" podCreationTimestamp="2026-01-26 11:37:36 +0000 UTC" firstStartedPulling="2026-01-26 11:37:37.827974161 +0000 UTC m=+1207.526549081" lastFinishedPulling="2026-01-26 11:37:50.547524839 +0000 UTC m=+1220.246099749" observedRunningTime="2026-01-26 11:38:07.333300771 +0000 UTC m=+1237.031875681" watchObservedRunningTime="2026-01-26 11:38:07.339911795 +0000 UTC m=+1237.038486715" Jan 26 11:38:08 crc kubenswrapper[4867]: I0126 11:38:08.309506 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"4a9a8906-54d6-49c2-94c7-393167d8db56","Type":"ContainerStarted","Data":"10f6313fad05e08be0742350945ba8ecf0af78bd43cc97525259f75c71a3bc16"} Jan 26 11:38:08 crc kubenswrapper[4867]: I0126 11:38:08.310066 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Jan 26 11:38:08 crc kubenswrapper[4867]: I0126 11:38:08.328729 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=21.328707634 podStartE2EDuration="21.328707634s" 
podCreationTimestamp="2026-01-26 11:37:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 11:38:08.327644019 +0000 UTC m=+1238.026218929" watchObservedRunningTime="2026-01-26 11:38:08.328707634 +0000 UTC m=+1238.027282544" Jan 26 11:38:08 crc kubenswrapper[4867]: I0126 11:38:08.334559 4867 generic.go:334] "Generic (PLEG): container finished" podID="2f4c7973-1227-4188-8be0-766b1fdcd108" containerID="9a215babac91b10ca8a37abe89cf9202617c9a3aa29c51b4eb2c5c6ab3de9118" exitCode=2 Jan 26 11:38:08 crc kubenswrapper[4867]: I0126 11:38:08.334592 4867 generic.go:334] "Generic (PLEG): container finished" podID="2f4c7973-1227-4188-8be0-766b1fdcd108" containerID="28b16594aebd74b931d1a3afbc9fe1493c2ecc892d0584d131a74abfb243380e" exitCode=0 Jan 26 11:38:08 crc kubenswrapper[4867]: I0126 11:38:08.335290 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2f4c7973-1227-4188-8be0-766b1fdcd108","Type":"ContainerDied","Data":"9a215babac91b10ca8a37abe89cf9202617c9a3aa29c51b4eb2c5c6ab3de9118"} Jan 26 11:38:08 crc kubenswrapper[4867]: I0126 11:38:08.335346 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2f4c7973-1227-4188-8be0-766b1fdcd108","Type":"ContainerDied","Data":"28b16594aebd74b931d1a3afbc9fe1493c2ecc892d0584d131a74abfb243380e"} Jan 26 11:38:09 crc kubenswrapper[4867]: I0126 11:38:09.383522 4867 generic.go:334] "Generic (PLEG): container finished" podID="2f4c7973-1227-4188-8be0-766b1fdcd108" containerID="cde8955787e9969f60a51eeb221b5ee8c78568bd0a74b38e2138e607f21f48cd" exitCode=0 Jan 26 11:38:09 crc kubenswrapper[4867]: I0126 11:38:09.384901 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2f4c7973-1227-4188-8be0-766b1fdcd108","Type":"ContainerDied","Data":"cde8955787e9969f60a51eeb221b5ee8c78568bd0a74b38e2138e607f21f48cd"} Jan 26 11:38:11 crc 
kubenswrapper[4867]: I0126 11:38:11.403920 4867 generic.go:334] "Generic (PLEG): container finished" podID="a2167905-2856-4125-81fd-a2430fe558f9" containerID="ef9186f8842dd4e425fbd0521e208fbd9f96c9e93772b40398c778d1e633ecb6" exitCode=1 Jan 26 11:38:11 crc kubenswrapper[4867]: I0126 11:38:11.403961 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-neutron-agent-795fb7c76b-9ndwh" event={"ID":"a2167905-2856-4125-81fd-a2430fe558f9","Type":"ContainerDied","Data":"ef9186f8842dd4e425fbd0521e208fbd9f96c9e93772b40398c778d1e633ecb6"} Jan 26 11:38:11 crc kubenswrapper[4867]: I0126 11:38:11.405168 4867 scope.go:117] "RemoveContainer" containerID="a2e9f8e363aa32e579cf2c88675ca9cdcb820dd35c57d4c32969e5dbd147d51c" Jan 26 11:38:11 crc kubenswrapper[4867]: I0126 11:38:11.406099 4867 scope.go:117] "RemoveContainer" containerID="ef9186f8842dd4e425fbd0521e208fbd9f96c9e93772b40398c778d1e633ecb6" Jan 26 11:38:11 crc kubenswrapper[4867]: E0126 11:38:11.406567 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ironic-neutron-agent\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ironic-neutron-agent pod=ironic-neutron-agent-795fb7c76b-9ndwh_openstack(a2167905-2856-4125-81fd-a2430fe558f9)\"" pod="openstack/ironic-neutron-agent-795fb7c76b-9ndwh" podUID="a2167905-2856-4125-81fd-a2430fe558f9" Jan 26 11:38:13 crc kubenswrapper[4867]: I0126 11:38:13.028419 4867 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/ironic-neutron-agent-795fb7c76b-9ndwh" Jan 26 11:38:13 crc kubenswrapper[4867]: I0126 11:38:13.029278 4867 scope.go:117] "RemoveContainer" containerID="ef9186f8842dd4e425fbd0521e208fbd9f96c9e93772b40398c778d1e633ecb6" Jan 26 11:38:13 crc kubenswrapper[4867]: E0126 11:38:13.029529 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ironic-neutron-agent\" with CrashLoopBackOff: \"back-off 20s restarting failed 
container=ironic-neutron-agent pod=ironic-neutron-agent-795fb7c76b-9ndwh_openstack(a2167905-2856-4125-81fd-a2430fe558f9)\"" pod="openstack/ironic-neutron-agent-795fb7c76b-9ndwh" podUID="a2167905-2856-4125-81fd-a2430fe558f9" Jan 26 11:38:13 crc kubenswrapper[4867]: I0126 11:38:13.425710 4867 generic.go:334] "Generic (PLEG): container finished" podID="586082ca-8462-421f-940d-25a9e1a9e945" containerID="ed359852da0c074cd41d462bd95988d2343c54a093dd140c08dae814d734a4ee" exitCode=0 Jan 26 11:38:13 crc kubenswrapper[4867]: I0126 11:38:13.425773 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-inspector-db-sync-256sm" event={"ID":"586082ca-8462-421f-940d-25a9e1a9e945","Type":"ContainerDied","Data":"ed359852da0c074cd41d462bd95988d2343c54a093dd140c08dae814d734a4ee"} Jan 26 11:38:14 crc kubenswrapper[4867]: I0126 11:38:14.773994 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ironic-inspector-db-sync-256sm" Jan 26 11:38:14 crc kubenswrapper[4867]: I0126 11:38:14.844816 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cinder-api-0" Jan 26 11:38:14 crc kubenswrapper[4867]: I0126 11:38:14.906859 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qkv7x\" (UniqueName: \"kubernetes.io/projected/586082ca-8462-421f-940d-25a9e1a9e945-kube-api-access-qkv7x\") pod \"586082ca-8462-421f-940d-25a9e1a9e945\" (UID: \"586082ca-8462-421f-940d-25a9e1a9e945\") " Jan 26 11:38:14 crc kubenswrapper[4867]: I0126 11:38:14.907310 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-ironic-inspector-dhcp-hostsdir\" (UniqueName: \"kubernetes.io/empty-dir/586082ca-8462-421f-940d-25a9e1a9e945-var-lib-ironic-inspector-dhcp-hostsdir\") pod \"586082ca-8462-421f-940d-25a9e1a9e945\" (UID: \"586082ca-8462-421f-940d-25a9e1a9e945\") " Jan 26 11:38:14 crc kubenswrapper[4867]: I0126 11:38:14.907511 4867 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/586082ca-8462-421f-940d-25a9e1a9e945-combined-ca-bundle\") pod \"586082ca-8462-421f-940d-25a9e1a9e945\" (UID: \"586082ca-8462-421f-940d-25a9e1a9e945\") " Jan 26 11:38:14 crc kubenswrapper[4867]: I0126 11:38:14.907630 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/586082ca-8462-421f-940d-25a9e1a9e945-config\") pod \"586082ca-8462-421f-940d-25a9e1a9e945\" (UID: \"586082ca-8462-421f-940d-25a9e1a9e945\") " Jan 26 11:38:14 crc kubenswrapper[4867]: I0126 11:38:14.907760 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-podinfo\" (UniqueName: \"kubernetes.io/downward-api/586082ca-8462-421f-940d-25a9e1a9e945-etc-podinfo\") pod \"586082ca-8462-421f-940d-25a9e1a9e945\" (UID: \"586082ca-8462-421f-940d-25a9e1a9e945\") " Jan 26 11:38:14 crc kubenswrapper[4867]: I0126 11:38:14.907906 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/586082ca-8462-421f-940d-25a9e1a9e945-scripts\") pod \"586082ca-8462-421f-940d-25a9e1a9e945\" (UID: \"586082ca-8462-421f-940d-25a9e1a9e945\") " Jan 26 11:38:14 crc kubenswrapper[4867]: I0126 11:38:14.907989 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-ironic\" (UniqueName: \"kubernetes.io/empty-dir/586082ca-8462-421f-940d-25a9e1a9e945-var-lib-ironic\") pod \"586082ca-8462-421f-940d-25a9e1a9e945\" (UID: \"586082ca-8462-421f-940d-25a9e1a9e945\") " Jan 26 11:38:14 crc kubenswrapper[4867]: I0126 11:38:14.907644 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/586082ca-8462-421f-940d-25a9e1a9e945-var-lib-ironic-inspector-dhcp-hostsdir" (OuterVolumeSpecName: "var-lib-ironic-inspector-dhcp-hostsdir") pod 
"586082ca-8462-421f-940d-25a9e1a9e945" (UID: "586082ca-8462-421f-940d-25a9e1a9e945"). InnerVolumeSpecName "var-lib-ironic-inspector-dhcp-hostsdir". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 11:38:14 crc kubenswrapper[4867]: I0126 11:38:14.909722 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/586082ca-8462-421f-940d-25a9e1a9e945-var-lib-ironic" (OuterVolumeSpecName: "var-lib-ironic") pod "586082ca-8462-421f-940d-25a9e1a9e945" (UID: "586082ca-8462-421f-940d-25a9e1a9e945"). InnerVolumeSpecName "var-lib-ironic". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 11:38:14 crc kubenswrapper[4867]: I0126 11:38:14.927987 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/586082ca-8462-421f-940d-25a9e1a9e945-etc-podinfo" (OuterVolumeSpecName: "etc-podinfo") pod "586082ca-8462-421f-940d-25a9e1a9e945" (UID: "586082ca-8462-421f-940d-25a9e1a9e945"). InnerVolumeSpecName "etc-podinfo". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Jan 26 11:38:14 crc kubenswrapper[4867]: I0126 11:38:14.929421 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/586082ca-8462-421f-940d-25a9e1a9e945-kube-api-access-qkv7x" (OuterVolumeSpecName: "kube-api-access-qkv7x") pod "586082ca-8462-421f-940d-25a9e1a9e945" (UID: "586082ca-8462-421f-940d-25a9e1a9e945"). InnerVolumeSpecName "kube-api-access-qkv7x". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 11:38:14 crc kubenswrapper[4867]: I0126 11:38:14.933125 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/586082ca-8462-421f-940d-25a9e1a9e945-scripts" (OuterVolumeSpecName: "scripts") pod "586082ca-8462-421f-940d-25a9e1a9e945" (UID: "586082ca-8462-421f-940d-25a9e1a9e945"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 11:38:14 crc kubenswrapper[4867]: I0126 11:38:14.956874 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/586082ca-8462-421f-940d-25a9e1a9e945-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "586082ca-8462-421f-940d-25a9e1a9e945" (UID: "586082ca-8462-421f-940d-25a9e1a9e945"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 11:38:14 crc kubenswrapper[4867]: I0126 11:38:14.963188 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/586082ca-8462-421f-940d-25a9e1a9e945-config" (OuterVolumeSpecName: "config") pod "586082ca-8462-421f-940d-25a9e1a9e945" (UID: "586082ca-8462-421f-940d-25a9e1a9e945"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 11:38:15 crc kubenswrapper[4867]: I0126 11:38:15.014072 4867 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/586082ca-8462-421f-940d-25a9e1a9e945-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 11:38:15 crc kubenswrapper[4867]: I0126 11:38:15.014117 4867 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/586082ca-8462-421f-940d-25a9e1a9e945-config\") on node \"crc\" DevicePath \"\"" Jan 26 11:38:15 crc kubenswrapper[4867]: I0126 11:38:15.014131 4867 reconciler_common.go:293] "Volume detached for volume \"etc-podinfo\" (UniqueName: \"kubernetes.io/downward-api/586082ca-8462-421f-940d-25a9e1a9e945-etc-podinfo\") on node \"crc\" DevicePath \"\"" Jan 26 11:38:15 crc kubenswrapper[4867]: I0126 11:38:15.014142 4867 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/586082ca-8462-421f-940d-25a9e1a9e945-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 11:38:15 crc kubenswrapper[4867]: I0126 
11:38:15.014154 4867 reconciler_common.go:293] "Volume detached for volume \"var-lib-ironic\" (UniqueName: \"kubernetes.io/empty-dir/586082ca-8462-421f-940d-25a9e1a9e945-var-lib-ironic\") on node \"crc\" DevicePath \"\"" Jan 26 11:38:15 crc kubenswrapper[4867]: I0126 11:38:15.014164 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qkv7x\" (UniqueName: \"kubernetes.io/projected/586082ca-8462-421f-940d-25a9e1a9e945-kube-api-access-qkv7x\") on node \"crc\" DevicePath \"\"" Jan 26 11:38:15 crc kubenswrapper[4867]: I0126 11:38:15.014176 4867 reconciler_common.go:293] "Volume detached for volume \"var-lib-ironic-inspector-dhcp-hostsdir\" (UniqueName: \"kubernetes.io/empty-dir/586082ca-8462-421f-940d-25a9e1a9e945-var-lib-ironic-inspector-dhcp-hostsdir\") on node \"crc\" DevicePath \"\"" Jan 26 11:38:15 crc kubenswrapper[4867]: I0126 11:38:15.443085 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-inspector-db-sync-256sm" event={"ID":"586082ca-8462-421f-940d-25a9e1a9e945","Type":"ContainerDied","Data":"3c969ade96a4675f85e24eab913d88c7b597842034923e3f4bbd0a92867a8e5d"} Jan 26 11:38:15 crc kubenswrapper[4867]: I0126 11:38:15.443370 4867 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3c969ade96a4675f85e24eab913d88c7b597842034923e3f4bbd0a92867a8e5d" Jan 26 11:38:15 crc kubenswrapper[4867]: I0126 11:38:15.443453 4867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ironic-inspector-db-sync-256sm" Jan 26 11:38:16 crc kubenswrapper[4867]: I0126 11:38:16.453032 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-conductor-0" event={"ID":"1a985fff-3d59-40fa-9cae-fd0f2cc9de70","Type":"ContainerStarted","Data":"77f0ff78374f145b8fec1630a1a17ac1e1d6490d7d19bd36141aff80ef222a10"} Jan 26 11:38:17 crc kubenswrapper[4867]: I0126 11:38:17.461483 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ironic-inspector-0"] Jan 26 11:38:17 crc kubenswrapper[4867]: E0126 11:38:17.462487 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="586082ca-8462-421f-940d-25a9e1a9e945" containerName="ironic-inspector-db-sync" Jan 26 11:38:17 crc kubenswrapper[4867]: I0126 11:38:17.462503 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="586082ca-8462-421f-940d-25a9e1a9e945" containerName="ironic-inspector-db-sync" Jan 26 11:38:17 crc kubenswrapper[4867]: I0126 11:38:17.462895 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="586082ca-8462-421f-940d-25a9e1a9e945" containerName="ironic-inspector-db-sync" Jan 26 11:38:17 crc kubenswrapper[4867]: I0126 11:38:17.499901 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ironic-inspector-0"] Jan 26 11:38:17 crc kubenswrapper[4867]: I0126 11:38:17.500008 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ironic-inspector-0" Jan 26 11:38:17 crc kubenswrapper[4867]: I0126 11:38:17.513039 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ironic-inspector-config-data" Jan 26 11:38:17 crc kubenswrapper[4867]: I0126 11:38:17.513250 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ironic-inspector-scripts" Jan 26 11:38:17 crc kubenswrapper[4867]: I0126 11:38:17.660196 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-podinfo\" (UniqueName: \"kubernetes.io/downward-api/7ba03bb4-2087-43f7-a0ff-6ac2135c7fd1-etc-podinfo\") pod \"ironic-inspector-0\" (UID: \"7ba03bb4-2087-43f7-a0ff-6ac2135c7fd1\") " pod="openstack/ironic-inspector-0" Jan 26 11:38:17 crc kubenswrapper[4867]: I0126 11:38:17.660249 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-ironic\" (UniqueName: \"kubernetes.io/empty-dir/7ba03bb4-2087-43f7-a0ff-6ac2135c7fd1-var-lib-ironic\") pod \"ironic-inspector-0\" (UID: \"7ba03bb4-2087-43f7-a0ff-6ac2135c7fd1\") " pod="openstack/ironic-inspector-0" Jan 26 11:38:17 crc kubenswrapper[4867]: I0126 11:38:17.660279 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/7ba03bb4-2087-43f7-a0ff-6ac2135c7fd1-config\") pod \"ironic-inspector-0\" (UID: \"7ba03bb4-2087-43f7-a0ff-6ac2135c7fd1\") " pod="openstack/ironic-inspector-0" Jan 26 11:38:17 crc kubenswrapper[4867]: I0126 11:38:17.660328 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-ironic-inspector-dhcp-hostsdir\" (UniqueName: \"kubernetes.io/empty-dir/7ba03bb4-2087-43f7-a0ff-6ac2135c7fd1-var-lib-ironic-inspector-dhcp-hostsdir\") pod \"ironic-inspector-0\" (UID: \"7ba03bb4-2087-43f7-a0ff-6ac2135c7fd1\") " pod="openstack/ironic-inspector-0" Jan 26 11:38:17 crc 
kubenswrapper[4867]: I0126 11:38:17.660354 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7ba03bb4-2087-43f7-a0ff-6ac2135c7fd1-combined-ca-bundle\") pod \"ironic-inspector-0\" (UID: \"7ba03bb4-2087-43f7-a0ff-6ac2135c7fd1\") " pod="openstack/ironic-inspector-0" Jan 26 11:38:17 crc kubenswrapper[4867]: I0126 11:38:17.660391 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7ba03bb4-2087-43f7-a0ff-6ac2135c7fd1-scripts\") pod \"ironic-inspector-0\" (UID: \"7ba03bb4-2087-43f7-a0ff-6ac2135c7fd1\") " pod="openstack/ironic-inspector-0" Jan 26 11:38:17 crc kubenswrapper[4867]: I0126 11:38:17.660412 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cph9k\" (UniqueName: \"kubernetes.io/projected/7ba03bb4-2087-43f7-a0ff-6ac2135c7fd1-kube-api-access-cph9k\") pod \"ironic-inspector-0\" (UID: \"7ba03bb4-2087-43f7-a0ff-6ac2135c7fd1\") " pod="openstack/ironic-inspector-0" Jan 26 11:38:17 crc kubenswrapper[4867]: I0126 11:38:17.762402 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-podinfo\" (UniqueName: \"kubernetes.io/downward-api/7ba03bb4-2087-43f7-a0ff-6ac2135c7fd1-etc-podinfo\") pod \"ironic-inspector-0\" (UID: \"7ba03bb4-2087-43f7-a0ff-6ac2135c7fd1\") " pod="openstack/ironic-inspector-0" Jan 26 11:38:17 crc kubenswrapper[4867]: I0126 11:38:17.762456 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-ironic\" (UniqueName: \"kubernetes.io/empty-dir/7ba03bb4-2087-43f7-a0ff-6ac2135c7fd1-var-lib-ironic\") pod \"ironic-inspector-0\" (UID: \"7ba03bb4-2087-43f7-a0ff-6ac2135c7fd1\") " pod="openstack/ironic-inspector-0" Jan 26 11:38:17 crc kubenswrapper[4867]: I0126 11:38:17.762500 4867 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/7ba03bb4-2087-43f7-a0ff-6ac2135c7fd1-config\") pod \"ironic-inspector-0\" (UID: \"7ba03bb4-2087-43f7-a0ff-6ac2135c7fd1\") " pod="openstack/ironic-inspector-0" Jan 26 11:38:17 crc kubenswrapper[4867]: I0126 11:38:17.762560 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-ironic-inspector-dhcp-hostsdir\" (UniqueName: \"kubernetes.io/empty-dir/7ba03bb4-2087-43f7-a0ff-6ac2135c7fd1-var-lib-ironic-inspector-dhcp-hostsdir\") pod \"ironic-inspector-0\" (UID: \"7ba03bb4-2087-43f7-a0ff-6ac2135c7fd1\") " pod="openstack/ironic-inspector-0" Jan 26 11:38:17 crc kubenswrapper[4867]: I0126 11:38:17.762594 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7ba03bb4-2087-43f7-a0ff-6ac2135c7fd1-combined-ca-bundle\") pod \"ironic-inspector-0\" (UID: \"7ba03bb4-2087-43f7-a0ff-6ac2135c7fd1\") " pod="openstack/ironic-inspector-0" Jan 26 11:38:17 crc kubenswrapper[4867]: I0126 11:38:17.762648 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7ba03bb4-2087-43f7-a0ff-6ac2135c7fd1-scripts\") pod \"ironic-inspector-0\" (UID: \"7ba03bb4-2087-43f7-a0ff-6ac2135c7fd1\") " pod="openstack/ironic-inspector-0" Jan 26 11:38:17 crc kubenswrapper[4867]: I0126 11:38:17.762675 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cph9k\" (UniqueName: \"kubernetes.io/projected/7ba03bb4-2087-43f7-a0ff-6ac2135c7fd1-kube-api-access-cph9k\") pod \"ironic-inspector-0\" (UID: \"7ba03bb4-2087-43f7-a0ff-6ac2135c7fd1\") " pod="openstack/ironic-inspector-0" Jan 26 11:38:17 crc kubenswrapper[4867]: I0126 11:38:17.763071 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-ironic\" (UniqueName: 
\"kubernetes.io/empty-dir/7ba03bb4-2087-43f7-a0ff-6ac2135c7fd1-var-lib-ironic\") pod \"ironic-inspector-0\" (UID: \"7ba03bb4-2087-43f7-a0ff-6ac2135c7fd1\") " pod="openstack/ironic-inspector-0" Jan 26 11:38:17 crc kubenswrapper[4867]: I0126 11:38:17.763138 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-ironic-inspector-dhcp-hostsdir\" (UniqueName: \"kubernetes.io/empty-dir/7ba03bb4-2087-43f7-a0ff-6ac2135c7fd1-var-lib-ironic-inspector-dhcp-hostsdir\") pod \"ironic-inspector-0\" (UID: \"7ba03bb4-2087-43f7-a0ff-6ac2135c7fd1\") " pod="openstack/ironic-inspector-0" Jan 26 11:38:17 crc kubenswrapper[4867]: I0126 11:38:17.776981 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-podinfo\" (UniqueName: \"kubernetes.io/downward-api/7ba03bb4-2087-43f7-a0ff-6ac2135c7fd1-etc-podinfo\") pod \"ironic-inspector-0\" (UID: \"7ba03bb4-2087-43f7-a0ff-6ac2135c7fd1\") " pod="openstack/ironic-inspector-0" Jan 26 11:38:17 crc kubenswrapper[4867]: I0126 11:38:17.777165 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7ba03bb4-2087-43f7-a0ff-6ac2135c7fd1-combined-ca-bundle\") pod \"ironic-inspector-0\" (UID: \"7ba03bb4-2087-43f7-a0ff-6ac2135c7fd1\") " pod="openstack/ironic-inspector-0" Jan 26 11:38:17 crc kubenswrapper[4867]: I0126 11:38:17.779006 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7ba03bb4-2087-43f7-a0ff-6ac2135c7fd1-scripts\") pod \"ironic-inspector-0\" (UID: \"7ba03bb4-2087-43f7-a0ff-6ac2135c7fd1\") " pod="openstack/ironic-inspector-0" Jan 26 11:38:17 crc kubenswrapper[4867]: I0126 11:38:17.782079 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/7ba03bb4-2087-43f7-a0ff-6ac2135c7fd1-config\") pod \"ironic-inspector-0\" (UID: \"7ba03bb4-2087-43f7-a0ff-6ac2135c7fd1\") " 
pod="openstack/ironic-inspector-0" Jan 26 11:38:17 crc kubenswrapper[4867]: I0126 11:38:17.784442 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cph9k\" (UniqueName: \"kubernetes.io/projected/7ba03bb4-2087-43f7-a0ff-6ac2135c7fd1-kube-api-access-cph9k\") pod \"ironic-inspector-0\" (UID: \"7ba03bb4-2087-43f7-a0ff-6ac2135c7fd1\") " pod="openstack/ironic-inspector-0" Jan 26 11:38:17 crc kubenswrapper[4867]: I0126 11:38:17.828850 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ironic-inspector-0" Jan 26 11:38:18 crc kubenswrapper[4867]: I0126 11:38:18.369503 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ironic-inspector-0"] Jan 26 11:38:18 crc kubenswrapper[4867]: I0126 11:38:18.508386 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-inspector-0" event={"ID":"7ba03bb4-2087-43f7-a0ff-6ac2135c7fd1","Type":"ContainerStarted","Data":"a35847143a394875a33cf46e132697127791912733288b431a2c3bffe715608a"} Jan 26 11:38:19 crc kubenswrapper[4867]: I0126 11:38:19.934956 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ironic-inspector-0"] Jan 26 11:38:20 crc kubenswrapper[4867]: I0126 11:38:20.537381 4867 generic.go:334] "Generic (PLEG): container finished" podID="7ba03bb4-2087-43f7-a0ff-6ac2135c7fd1" containerID="c5770fbe3635675ff19e16d56e2741b924d0b9647f8bd9e2b5c3333295d469bf" exitCode=0 Jan 26 11:38:20 crc kubenswrapper[4867]: I0126 11:38:20.537428 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-inspector-0" event={"ID":"7ba03bb4-2087-43f7-a0ff-6ac2135c7fd1","Type":"ContainerDied","Data":"c5770fbe3635675ff19e16d56e2741b924d0b9647f8bd9e2b5c3333295d469bf"} Jan 26 11:38:22 crc kubenswrapper[4867]: I0126 11:38:22.827440 4867 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/cinder-api-0" podUID="4a9a8906-54d6-49c2-94c7-393167d8db56" containerName="cinder-api" probeResult="failure" 
output="Get \"https://10.217.0.187:8776/healthcheck\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 26 11:38:22 crc kubenswrapper[4867]: I0126 11:38:22.827487 4867 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/cinder-api-0" podUID="4a9a8906-54d6-49c2-94c7-393167d8db56" containerName="cinder-api" probeResult="failure" output="Get \"https://10.217.0.187:8776/healthcheck\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 26 11:38:24 crc kubenswrapper[4867]: I0126 11:38:24.508707 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ironic-inspector-0" Jan 26 11:38:24 crc kubenswrapper[4867]: I0126 11:38:24.570887 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ironic-inspector-0" Jan 26 11:38:24 crc kubenswrapper[4867]: I0126 11:38:24.576363 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-inspector-0" event={"ID":"7ba03bb4-2087-43f7-a0ff-6ac2135c7fd1","Type":"ContainerDied","Data":"a35847143a394875a33cf46e132697127791912733288b431a2c3bffe715608a"} Jan 26 11:38:24 crc kubenswrapper[4867]: I0126 11:38:24.576405 4867 scope.go:117] "RemoveContainer" containerID="c5770fbe3635675ff19e16d56e2741b924d0b9647f8bd9e2b5c3333295d469bf" Jan 26 11:38:24 crc kubenswrapper[4867]: I0126 11:38:24.600436 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-ironic\" (UniqueName: \"kubernetes.io/empty-dir/7ba03bb4-2087-43f7-a0ff-6ac2135c7fd1-var-lib-ironic\") pod \"7ba03bb4-2087-43f7-a0ff-6ac2135c7fd1\" (UID: \"7ba03bb4-2087-43f7-a0ff-6ac2135c7fd1\") " Jan 26 11:38:24 crc kubenswrapper[4867]: I0126 11:38:24.600492 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-ironic-inspector-dhcp-hostsdir\" (UniqueName: 
\"kubernetes.io/empty-dir/7ba03bb4-2087-43f7-a0ff-6ac2135c7fd1-var-lib-ironic-inspector-dhcp-hostsdir\") pod \"7ba03bb4-2087-43f7-a0ff-6ac2135c7fd1\" (UID: \"7ba03bb4-2087-43f7-a0ff-6ac2135c7fd1\") " Jan 26 11:38:24 crc kubenswrapper[4867]: I0126 11:38:24.600550 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-podinfo\" (UniqueName: \"kubernetes.io/downward-api/7ba03bb4-2087-43f7-a0ff-6ac2135c7fd1-etc-podinfo\") pod \"7ba03bb4-2087-43f7-a0ff-6ac2135c7fd1\" (UID: \"7ba03bb4-2087-43f7-a0ff-6ac2135c7fd1\") " Jan 26 11:38:24 crc kubenswrapper[4867]: I0126 11:38:24.600685 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cph9k\" (UniqueName: \"kubernetes.io/projected/7ba03bb4-2087-43f7-a0ff-6ac2135c7fd1-kube-api-access-cph9k\") pod \"7ba03bb4-2087-43f7-a0ff-6ac2135c7fd1\" (UID: \"7ba03bb4-2087-43f7-a0ff-6ac2135c7fd1\") " Jan 26 11:38:24 crc kubenswrapper[4867]: I0126 11:38:24.600737 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/7ba03bb4-2087-43f7-a0ff-6ac2135c7fd1-config\") pod \"7ba03bb4-2087-43f7-a0ff-6ac2135c7fd1\" (UID: \"7ba03bb4-2087-43f7-a0ff-6ac2135c7fd1\") " Jan 26 11:38:24 crc kubenswrapper[4867]: I0126 11:38:24.600818 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7ba03bb4-2087-43f7-a0ff-6ac2135c7fd1-scripts\") pod \"7ba03bb4-2087-43f7-a0ff-6ac2135c7fd1\" (UID: \"7ba03bb4-2087-43f7-a0ff-6ac2135c7fd1\") " Jan 26 11:38:24 crc kubenswrapper[4867]: I0126 11:38:24.600860 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7ba03bb4-2087-43f7-a0ff-6ac2135c7fd1-combined-ca-bundle\") pod \"7ba03bb4-2087-43f7-a0ff-6ac2135c7fd1\" (UID: \"7ba03bb4-2087-43f7-a0ff-6ac2135c7fd1\") " Jan 26 11:38:24 crc kubenswrapper[4867]: I0126 
11:38:24.604854 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7ba03bb4-2087-43f7-a0ff-6ac2135c7fd1-var-lib-ironic-inspector-dhcp-hostsdir" (OuterVolumeSpecName: "var-lib-ironic-inspector-dhcp-hostsdir") pod "7ba03bb4-2087-43f7-a0ff-6ac2135c7fd1" (UID: "7ba03bb4-2087-43f7-a0ff-6ac2135c7fd1"). InnerVolumeSpecName "var-lib-ironic-inspector-dhcp-hostsdir". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 11:38:24 crc kubenswrapper[4867]: I0126 11:38:24.606628 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7ba03bb4-2087-43f7-a0ff-6ac2135c7fd1-var-lib-ironic" (OuterVolumeSpecName: "var-lib-ironic") pod "7ba03bb4-2087-43f7-a0ff-6ac2135c7fd1" (UID: "7ba03bb4-2087-43f7-a0ff-6ac2135c7fd1"). InnerVolumeSpecName "var-lib-ironic". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 11:38:24 crc kubenswrapper[4867]: I0126 11:38:24.607289 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7ba03bb4-2087-43f7-a0ff-6ac2135c7fd1-scripts" (OuterVolumeSpecName: "scripts") pod "7ba03bb4-2087-43f7-a0ff-6ac2135c7fd1" (UID: "7ba03bb4-2087-43f7-a0ff-6ac2135c7fd1"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 11:38:24 crc kubenswrapper[4867]: I0126 11:38:24.607424 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/7ba03bb4-2087-43f7-a0ff-6ac2135c7fd1-etc-podinfo" (OuterVolumeSpecName: "etc-podinfo") pod "7ba03bb4-2087-43f7-a0ff-6ac2135c7fd1" (UID: "7ba03bb4-2087-43f7-a0ff-6ac2135c7fd1"). InnerVolumeSpecName "etc-podinfo". 
PluginName "kubernetes.io/downward-api", VolumeGidValue "" Jan 26 11:38:24 crc kubenswrapper[4867]: I0126 11:38:24.607968 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7ba03bb4-2087-43f7-a0ff-6ac2135c7fd1-kube-api-access-cph9k" (OuterVolumeSpecName: "kube-api-access-cph9k") pod "7ba03bb4-2087-43f7-a0ff-6ac2135c7fd1" (UID: "7ba03bb4-2087-43f7-a0ff-6ac2135c7fd1"). InnerVolumeSpecName "kube-api-access-cph9k". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 11:38:24 crc kubenswrapper[4867]: I0126 11:38:24.610445 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7ba03bb4-2087-43f7-a0ff-6ac2135c7fd1-config" (OuterVolumeSpecName: "config") pod "7ba03bb4-2087-43f7-a0ff-6ac2135c7fd1" (UID: "7ba03bb4-2087-43f7-a0ff-6ac2135c7fd1"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 11:38:24 crc kubenswrapper[4867]: I0126 11:38:24.633267 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7ba03bb4-2087-43f7-a0ff-6ac2135c7fd1-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7ba03bb4-2087-43f7-a0ff-6ac2135c7fd1" (UID: "7ba03bb4-2087-43f7-a0ff-6ac2135c7fd1"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 11:38:24 crc kubenswrapper[4867]: I0126 11:38:24.703640 4867 reconciler_common.go:293] "Volume detached for volume \"var-lib-ironic\" (UniqueName: \"kubernetes.io/empty-dir/7ba03bb4-2087-43f7-a0ff-6ac2135c7fd1-var-lib-ironic\") on node \"crc\" DevicePath \"\"" Jan 26 11:38:24 crc kubenswrapper[4867]: I0126 11:38:24.703690 4867 reconciler_common.go:293] "Volume detached for volume \"var-lib-ironic-inspector-dhcp-hostsdir\" (UniqueName: \"kubernetes.io/empty-dir/7ba03bb4-2087-43f7-a0ff-6ac2135c7fd1-var-lib-ironic-inspector-dhcp-hostsdir\") on node \"crc\" DevicePath \"\"" Jan 26 11:38:24 crc kubenswrapper[4867]: I0126 11:38:24.703707 4867 reconciler_common.go:293] "Volume detached for volume \"etc-podinfo\" (UniqueName: \"kubernetes.io/downward-api/7ba03bb4-2087-43f7-a0ff-6ac2135c7fd1-etc-podinfo\") on node \"crc\" DevicePath \"\"" Jan 26 11:38:24 crc kubenswrapper[4867]: I0126 11:38:24.703722 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cph9k\" (UniqueName: \"kubernetes.io/projected/7ba03bb4-2087-43f7-a0ff-6ac2135c7fd1-kube-api-access-cph9k\") on node \"crc\" DevicePath \"\"" Jan 26 11:38:24 crc kubenswrapper[4867]: I0126 11:38:24.703735 4867 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/7ba03bb4-2087-43f7-a0ff-6ac2135c7fd1-config\") on node \"crc\" DevicePath \"\"" Jan 26 11:38:24 crc kubenswrapper[4867]: I0126 11:38:24.703745 4867 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7ba03bb4-2087-43f7-a0ff-6ac2135c7fd1-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 11:38:24 crc kubenswrapper[4867]: I0126 11:38:24.703755 4867 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7ba03bb4-2087-43f7-a0ff-6ac2135c7fd1-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 11:38:24 crc kubenswrapper[4867]: 
I0126 11:38:24.940154 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ironic-inspector-0"] Jan 26 11:38:24 crc kubenswrapper[4867]: I0126 11:38:24.951510 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ironic-inspector-0"] Jan 26 11:38:24 crc kubenswrapper[4867]: I0126 11:38:24.959279 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ironic-inspector-0"] Jan 26 11:38:24 crc kubenswrapper[4867]: E0126 11:38:24.959952 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7ba03bb4-2087-43f7-a0ff-6ac2135c7fd1" containerName="ironic-python-agent-init" Jan 26 11:38:24 crc kubenswrapper[4867]: I0126 11:38:24.960032 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="7ba03bb4-2087-43f7-a0ff-6ac2135c7fd1" containerName="ironic-python-agent-init" Jan 26 11:38:24 crc kubenswrapper[4867]: I0126 11:38:24.960298 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="7ba03bb4-2087-43f7-a0ff-6ac2135c7fd1" containerName="ironic-python-agent-init" Jan 26 11:38:24 crc kubenswrapper[4867]: I0126 11:38:24.973061 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ironic-inspector-0" Jan 26 11:38:24 crc kubenswrapper[4867]: I0126 11:38:24.976586 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ironic-inspector-public-svc" Jan 26 11:38:24 crc kubenswrapper[4867]: I0126 11:38:24.976768 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ironic-inspector-scripts" Jan 26 11:38:24 crc kubenswrapper[4867]: I0126 11:38:24.980002 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ironic-inspector-config-data" Jan 26 11:38:24 crc kubenswrapper[4867]: I0126 11:38:24.995904 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ironic-inspector-internal-svc" Jan 26 11:38:25 crc kubenswrapper[4867]: I0126 11:38:25.009747 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ironic-inspector-0"] Jan 26 11:38:25 crc kubenswrapper[4867]: I0126 11:38:25.111810 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/6e49ec18-452c-47df-a0c9-ea52cdced830-internal-tls-certs\") pod \"ironic-inspector-0\" (UID: \"6e49ec18-452c-47df-a0c9-ea52cdced830\") " pod="openstack/ironic-inspector-0" Jan 26 11:38:25 crc kubenswrapper[4867]: I0126 11:38:25.111884 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4zg2x\" (UniqueName: \"kubernetes.io/projected/6e49ec18-452c-47df-a0c9-ea52cdced830-kube-api-access-4zg2x\") pod \"ironic-inspector-0\" (UID: \"6e49ec18-452c-47df-a0c9-ea52cdced830\") " pod="openstack/ironic-inspector-0" Jan 26 11:38:25 crc kubenswrapper[4867]: I0126 11:38:25.112042 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-podinfo\" (UniqueName: \"kubernetes.io/downward-api/6e49ec18-452c-47df-a0c9-ea52cdced830-etc-podinfo\") pod 
\"ironic-inspector-0\" (UID: \"6e49ec18-452c-47df-a0c9-ea52cdced830\") " pod="openstack/ironic-inspector-0" Jan 26 11:38:25 crc kubenswrapper[4867]: I0126 11:38:25.112104 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-ironic\" (UniqueName: \"kubernetes.io/empty-dir/6e49ec18-452c-47df-a0c9-ea52cdced830-var-lib-ironic\") pod \"ironic-inspector-0\" (UID: \"6e49ec18-452c-47df-a0c9-ea52cdced830\") " pod="openstack/ironic-inspector-0" Jan 26 11:38:25 crc kubenswrapper[4867]: I0126 11:38:25.112340 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6e49ec18-452c-47df-a0c9-ea52cdced830-combined-ca-bundle\") pod \"ironic-inspector-0\" (UID: \"6e49ec18-452c-47df-a0c9-ea52cdced830\") " pod="openstack/ironic-inspector-0" Jan 26 11:38:25 crc kubenswrapper[4867]: I0126 11:38:25.112388 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/6e49ec18-452c-47df-a0c9-ea52cdced830-config\") pod \"ironic-inspector-0\" (UID: \"6e49ec18-452c-47df-a0c9-ea52cdced830\") " pod="openstack/ironic-inspector-0" Jan 26 11:38:25 crc kubenswrapper[4867]: I0126 11:38:25.112509 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/6e49ec18-452c-47df-a0c9-ea52cdced830-public-tls-certs\") pod \"ironic-inspector-0\" (UID: \"6e49ec18-452c-47df-a0c9-ea52cdced830\") " pod="openstack/ironic-inspector-0" Jan 26 11:38:25 crc kubenswrapper[4867]: I0126 11:38:25.112555 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-ironic-inspector-dhcp-hostsdir\" (UniqueName: \"kubernetes.io/empty-dir/6e49ec18-452c-47df-a0c9-ea52cdced830-var-lib-ironic-inspector-dhcp-hostsdir\") pod \"ironic-inspector-0\" 
(UID: \"6e49ec18-452c-47df-a0c9-ea52cdced830\") " pod="openstack/ironic-inspector-0" Jan 26 11:38:25 crc kubenswrapper[4867]: I0126 11:38:25.112593 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6e49ec18-452c-47df-a0c9-ea52cdced830-scripts\") pod \"ironic-inspector-0\" (UID: \"6e49ec18-452c-47df-a0c9-ea52cdced830\") " pod="openstack/ironic-inspector-0" Jan 26 11:38:25 crc kubenswrapper[4867]: I0126 11:38:25.214102 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6e49ec18-452c-47df-a0c9-ea52cdced830-combined-ca-bundle\") pod \"ironic-inspector-0\" (UID: \"6e49ec18-452c-47df-a0c9-ea52cdced830\") " pod="openstack/ironic-inspector-0" Jan 26 11:38:25 crc kubenswrapper[4867]: I0126 11:38:25.214158 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/6e49ec18-452c-47df-a0c9-ea52cdced830-config\") pod \"ironic-inspector-0\" (UID: \"6e49ec18-452c-47df-a0c9-ea52cdced830\") " pod="openstack/ironic-inspector-0" Jan 26 11:38:25 crc kubenswrapper[4867]: I0126 11:38:25.214198 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/6e49ec18-452c-47df-a0c9-ea52cdced830-public-tls-certs\") pod \"ironic-inspector-0\" (UID: \"6e49ec18-452c-47df-a0c9-ea52cdced830\") " pod="openstack/ironic-inspector-0" Jan 26 11:38:25 crc kubenswrapper[4867]: I0126 11:38:25.214233 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-ironic-inspector-dhcp-hostsdir\" (UniqueName: \"kubernetes.io/empty-dir/6e49ec18-452c-47df-a0c9-ea52cdced830-var-lib-ironic-inspector-dhcp-hostsdir\") pod \"ironic-inspector-0\" (UID: \"6e49ec18-452c-47df-a0c9-ea52cdced830\") " pod="openstack/ironic-inspector-0" Jan 26 11:38:25 crc 
kubenswrapper[4867]: I0126 11:38:25.214256 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6e49ec18-452c-47df-a0c9-ea52cdced830-scripts\") pod \"ironic-inspector-0\" (UID: \"6e49ec18-452c-47df-a0c9-ea52cdced830\") " pod="openstack/ironic-inspector-0" Jan 26 11:38:25 crc kubenswrapper[4867]: I0126 11:38:25.214311 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/6e49ec18-452c-47df-a0c9-ea52cdced830-internal-tls-certs\") pod \"ironic-inspector-0\" (UID: \"6e49ec18-452c-47df-a0c9-ea52cdced830\") " pod="openstack/ironic-inspector-0" Jan 26 11:38:25 crc kubenswrapper[4867]: I0126 11:38:25.214337 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4zg2x\" (UniqueName: \"kubernetes.io/projected/6e49ec18-452c-47df-a0c9-ea52cdced830-kube-api-access-4zg2x\") pod \"ironic-inspector-0\" (UID: \"6e49ec18-452c-47df-a0c9-ea52cdced830\") " pod="openstack/ironic-inspector-0" Jan 26 11:38:25 crc kubenswrapper[4867]: I0126 11:38:25.214366 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-podinfo\" (UniqueName: \"kubernetes.io/downward-api/6e49ec18-452c-47df-a0c9-ea52cdced830-etc-podinfo\") pod \"ironic-inspector-0\" (UID: \"6e49ec18-452c-47df-a0c9-ea52cdced830\") " pod="openstack/ironic-inspector-0" Jan 26 11:38:25 crc kubenswrapper[4867]: I0126 11:38:25.214382 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-ironic\" (UniqueName: \"kubernetes.io/empty-dir/6e49ec18-452c-47df-a0c9-ea52cdced830-var-lib-ironic\") pod \"ironic-inspector-0\" (UID: \"6e49ec18-452c-47df-a0c9-ea52cdced830\") " pod="openstack/ironic-inspector-0" Jan 26 11:38:25 crc kubenswrapper[4867]: I0126 11:38:25.214898 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-ironic\" (UniqueName: 
\"kubernetes.io/empty-dir/6e49ec18-452c-47df-a0c9-ea52cdced830-var-lib-ironic\") pod \"ironic-inspector-0\" (UID: \"6e49ec18-452c-47df-a0c9-ea52cdced830\") " pod="openstack/ironic-inspector-0" Jan 26 11:38:25 crc kubenswrapper[4867]: I0126 11:38:25.214981 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-ironic-inspector-dhcp-hostsdir\" (UniqueName: \"kubernetes.io/empty-dir/6e49ec18-452c-47df-a0c9-ea52cdced830-var-lib-ironic-inspector-dhcp-hostsdir\") pod \"ironic-inspector-0\" (UID: \"6e49ec18-452c-47df-a0c9-ea52cdced830\") " pod="openstack/ironic-inspector-0" Jan 26 11:38:25 crc kubenswrapper[4867]: I0126 11:38:25.219108 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6e49ec18-452c-47df-a0c9-ea52cdced830-scripts\") pod \"ironic-inspector-0\" (UID: \"6e49ec18-452c-47df-a0c9-ea52cdced830\") " pod="openstack/ironic-inspector-0" Jan 26 11:38:25 crc kubenswrapper[4867]: I0126 11:38:25.239768 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-podinfo\" (UniqueName: \"kubernetes.io/downward-api/6e49ec18-452c-47df-a0c9-ea52cdced830-etc-podinfo\") pod \"ironic-inspector-0\" (UID: \"6e49ec18-452c-47df-a0c9-ea52cdced830\") " pod="openstack/ironic-inspector-0" Jan 26 11:38:25 crc kubenswrapper[4867]: I0126 11:38:25.241104 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/6e49ec18-452c-47df-a0c9-ea52cdced830-internal-tls-certs\") pod \"ironic-inspector-0\" (UID: \"6e49ec18-452c-47df-a0c9-ea52cdced830\") " pod="openstack/ironic-inspector-0" Jan 26 11:38:25 crc kubenswrapper[4867]: I0126 11:38:25.241122 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/6e49ec18-452c-47df-a0c9-ea52cdced830-public-tls-certs\") pod \"ironic-inspector-0\" (UID: \"6e49ec18-452c-47df-a0c9-ea52cdced830\") " 
pod="openstack/ironic-inspector-0" Jan 26 11:38:25 crc kubenswrapper[4867]: I0126 11:38:25.241793 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/6e49ec18-452c-47df-a0c9-ea52cdced830-config\") pod \"ironic-inspector-0\" (UID: \"6e49ec18-452c-47df-a0c9-ea52cdced830\") " pod="openstack/ironic-inspector-0" Jan 26 11:38:25 crc kubenswrapper[4867]: I0126 11:38:25.309709 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6e49ec18-452c-47df-a0c9-ea52cdced830-combined-ca-bundle\") pod \"ironic-inspector-0\" (UID: \"6e49ec18-452c-47df-a0c9-ea52cdced830\") " pod="openstack/ironic-inspector-0" Jan 26 11:38:25 crc kubenswrapper[4867]: I0126 11:38:25.309762 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4zg2x\" (UniqueName: \"kubernetes.io/projected/6e49ec18-452c-47df-a0c9-ea52cdced830-kube-api-access-4zg2x\") pod \"ironic-inspector-0\" (UID: \"6e49ec18-452c-47df-a0c9-ea52cdced830\") " pod="openstack/ironic-inspector-0" Jan 26 11:38:25 crc kubenswrapper[4867]: I0126 11:38:25.591897 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ironic-inspector-0" Jan 26 11:38:26 crc kubenswrapper[4867]: I0126 11:38:26.096751 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ironic-inspector-0"] Jan 26 11:38:26 crc kubenswrapper[4867]: I0126 11:38:26.564307 4867 scope.go:117] "RemoveContainer" containerID="ef9186f8842dd4e425fbd0521e208fbd9f96c9e93772b40398c778d1e633ecb6" Jan 26 11:38:26 crc kubenswrapper[4867]: E0126 11:38:26.564742 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ironic-neutron-agent\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ironic-neutron-agent pod=ironic-neutron-agent-795fb7c76b-9ndwh_openstack(a2167905-2856-4125-81fd-a2430fe558f9)\"" pod="openstack/ironic-neutron-agent-795fb7c76b-9ndwh" podUID="a2167905-2856-4125-81fd-a2430fe558f9" Jan 26 11:38:26 crc kubenswrapper[4867]: I0126 11:38:26.577838 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7ba03bb4-2087-43f7-a0ff-6ac2135c7fd1" path="/var/lib/kubelet/pods/7ba03bb4-2087-43f7-a0ff-6ac2135c7fd1/volumes" Jan 26 11:38:26 crc kubenswrapper[4867]: I0126 11:38:26.593238 4867 generic.go:334] "Generic (PLEG): container finished" podID="1a985fff-3d59-40fa-9cae-fd0f2cc9de70" containerID="77f0ff78374f145b8fec1630a1a17ac1e1d6490d7d19bd36141aff80ef222a10" exitCode=0 Jan 26 11:38:26 crc kubenswrapper[4867]: I0126 11:38:26.593262 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-conductor-0" event={"ID":"1a985fff-3d59-40fa-9cae-fd0f2cc9de70","Type":"ContainerDied","Data":"77f0ff78374f145b8fec1630a1a17ac1e1d6490d7d19bd36141aff80ef222a10"} Jan 26 11:38:26 crc kubenswrapper[4867]: I0126 11:38:26.594478 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-inspector-0" event={"ID":"6e49ec18-452c-47df-a0c9-ea52cdced830","Type":"ContainerStarted","Data":"4c2fe5e5f99e8bc12956c9917c138beec52da1a768229f42885231582925b97c"} Jan 26 11:38:26 crc 
kubenswrapper[4867]: I0126 11:38:26.693544 4867 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/cinder-scheduler-0" podUID="ed0b64b8-8e76-407e-8be8-c6b6cc5d4b75" containerName="cinder-scheduler" probeResult="failure" output="Get \"http://10.217.0.179:8080/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 26 11:38:27 crc kubenswrapper[4867]: I0126 11:38:27.604558 4867 generic.go:334] "Generic (PLEG): container finished" podID="6e49ec18-452c-47df-a0c9-ea52cdced830" containerID="c9a0466994bb283d88809ed631a5e6f66d79c86b8300339569a4d8e7bf0f83c0" exitCode=0 Jan 26 11:38:27 crc kubenswrapper[4867]: I0126 11:38:27.604628 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-inspector-0" event={"ID":"6e49ec18-452c-47df-a0c9-ea52cdced830","Type":"ContainerDied","Data":"c9a0466994bb283d88809ed631a5e6f66d79c86b8300339569a4d8e7bf0f83c0"} Jan 26 11:38:34 crc kubenswrapper[4867]: I0126 11:38:34.672702 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-inspector-0" event={"ID":"6e49ec18-452c-47df-a0c9-ea52cdced830","Type":"ContainerStarted","Data":"9b056130541da80d034f603664257252a9baa44cf2c0cdf2e2e82957f9e7f4c3"} Jan 26 11:38:34 crc kubenswrapper[4867]: I0126 11:38:34.675571 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-lh5xw" event={"ID":"39653949-816a-4237-91ab-e0a3cbdc1ff9","Type":"ContainerStarted","Data":"ce71285d4baf1e7a59b451bb335bb3a4518efd9973a1febff4f9e67e53860a51"} Jan 26 11:38:34 crc kubenswrapper[4867]: I0126 11:38:34.705798 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-db-sync-lh5xw" podStartSLOduration=2.048014513 podStartE2EDuration="48.705773489s" podCreationTimestamp="2026-01-26 11:37:46 +0000 UTC" firstStartedPulling="2026-01-26 11:37:47.344020557 +0000 UTC m=+1217.042595457" lastFinishedPulling="2026-01-26 11:38:34.001779523 +0000 UTC 
m=+1263.700354433" observedRunningTime="2026-01-26 11:38:34.697803903 +0000 UTC m=+1264.396378813" watchObservedRunningTime="2026-01-26 11:38:34.705773489 +0000 UTC m=+1264.404348399" Jan 26 11:38:35 crc kubenswrapper[4867]: I0126 11:38:35.688638 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-conductor-0" event={"ID":"1a985fff-3d59-40fa-9cae-fd0f2cc9de70","Type":"ContainerStarted","Data":"37bf71d4a517ad0db7d30f38d0886922114a28c3628b12b2538206039f6ba59b"} Jan 26 11:38:35 crc kubenswrapper[4867]: I0126 11:38:35.691120 4867 generic.go:334] "Generic (PLEG): container finished" podID="6e49ec18-452c-47df-a0c9-ea52cdced830" containerID="9b056130541da80d034f603664257252a9baa44cf2c0cdf2e2e82957f9e7f4c3" exitCode=0 Jan 26 11:38:35 crc kubenswrapper[4867]: I0126 11:38:35.691168 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-inspector-0" event={"ID":"6e49ec18-452c-47df-a0c9-ea52cdced830","Type":"ContainerDied","Data":"9b056130541da80d034f603664257252a9baa44cf2c0cdf2e2e82957f9e7f4c3"} Jan 26 11:38:36 crc kubenswrapper[4867]: I0126 11:38:36.706262 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-inspector-0" event={"ID":"6e49ec18-452c-47df-a0c9-ea52cdced830","Type":"ContainerStarted","Data":"cfeea3057bae114fd13506c33a165d60eec01943572fe2a84ddd58eacd66462b"} Jan 26 11:38:36 crc kubenswrapper[4867]: I0126 11:38:36.771551 4867 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="2f4c7973-1227-4188-8be0-766b1fdcd108" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Jan 26 11:38:37 crc kubenswrapper[4867]: I0126 11:38:37.765900 4867 generic.go:334] "Generic (PLEG): container finished" podID="2f4c7973-1227-4188-8be0-766b1fdcd108" containerID="26e9e451ae91f4253df1646fe4946e41b8df8c49b705fbe25320970913bf42f2" exitCode=137 Jan 26 11:38:37 crc kubenswrapper[4867]: I0126 11:38:37.765935 4867 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2f4c7973-1227-4188-8be0-766b1fdcd108","Type":"ContainerDied","Data":"26e9e451ae91f4253df1646fe4946e41b8df8c49b705fbe25320970913bf42f2"} Jan 26 11:38:37 crc kubenswrapper[4867]: I0126 11:38:37.774449 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-inspector-0" event={"ID":"6e49ec18-452c-47df-a0c9-ea52cdced830","Type":"ContainerStarted","Data":"c700aec4349e8d29fdff26e046d199307f56e30ca4ffdfec33909ac3ae1f9f54"} Jan 26 11:38:37 crc kubenswrapper[4867]: I0126 11:38:37.774490 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-inspector-0" event={"ID":"6e49ec18-452c-47df-a0c9-ea52cdced830","Type":"ContainerStarted","Data":"7588da4b61c4d8e49c5984ad02a0cfabc66fde2275c7f3311aa13d97f8d362dd"} Jan 26 11:38:37 crc kubenswrapper[4867]: I0126 11:38:37.887249 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 26 11:38:38 crc kubenswrapper[4867]: I0126 11:38:38.018689 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2f4c7973-1227-4188-8be0-766b1fdcd108-scripts\") pod \"2f4c7973-1227-4188-8be0-766b1fdcd108\" (UID: \"2f4c7973-1227-4188-8be0-766b1fdcd108\") " Jan 26 11:38:38 crc kubenswrapper[4867]: I0126 11:38:38.018970 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-97j7x\" (UniqueName: \"kubernetes.io/projected/2f4c7973-1227-4188-8be0-766b1fdcd108-kube-api-access-97j7x\") pod \"2f4c7973-1227-4188-8be0-766b1fdcd108\" (UID: \"2f4c7973-1227-4188-8be0-766b1fdcd108\") " Jan 26 11:38:38 crc kubenswrapper[4867]: I0126 11:38:38.019062 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2f4c7973-1227-4188-8be0-766b1fdcd108-combined-ca-bundle\") pod \"2f4c7973-1227-4188-8be0-766b1fdcd108\" 
(UID: \"2f4c7973-1227-4188-8be0-766b1fdcd108\") " Jan 26 11:38:38 crc kubenswrapper[4867]: I0126 11:38:38.019192 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2f4c7973-1227-4188-8be0-766b1fdcd108-config-data\") pod \"2f4c7973-1227-4188-8be0-766b1fdcd108\" (UID: \"2f4c7973-1227-4188-8be0-766b1fdcd108\") " Jan 26 11:38:38 crc kubenswrapper[4867]: I0126 11:38:38.019322 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/2f4c7973-1227-4188-8be0-766b1fdcd108-sg-core-conf-yaml\") pod \"2f4c7973-1227-4188-8be0-766b1fdcd108\" (UID: \"2f4c7973-1227-4188-8be0-766b1fdcd108\") " Jan 26 11:38:38 crc kubenswrapper[4867]: I0126 11:38:38.019653 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2f4c7973-1227-4188-8be0-766b1fdcd108-log-httpd\") pod \"2f4c7973-1227-4188-8be0-766b1fdcd108\" (UID: \"2f4c7973-1227-4188-8be0-766b1fdcd108\") " Jan 26 11:38:38 crc kubenswrapper[4867]: I0126 11:38:38.020631 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2f4c7973-1227-4188-8be0-766b1fdcd108-run-httpd\") pod \"2f4c7973-1227-4188-8be0-766b1fdcd108\" (UID: \"2f4c7973-1227-4188-8be0-766b1fdcd108\") " Jan 26 11:38:38 crc kubenswrapper[4867]: I0126 11:38:38.021447 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2f4c7973-1227-4188-8be0-766b1fdcd108-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "2f4c7973-1227-4188-8be0-766b1fdcd108" (UID: "2f4c7973-1227-4188-8be0-766b1fdcd108"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 11:38:38 crc kubenswrapper[4867]: I0126 11:38:38.021950 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2f4c7973-1227-4188-8be0-766b1fdcd108-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "2f4c7973-1227-4188-8be0-766b1fdcd108" (UID: "2f4c7973-1227-4188-8be0-766b1fdcd108"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 11:38:38 crc kubenswrapper[4867]: I0126 11:38:38.028010 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2f4c7973-1227-4188-8be0-766b1fdcd108-scripts" (OuterVolumeSpecName: "scripts") pod "2f4c7973-1227-4188-8be0-766b1fdcd108" (UID: "2f4c7973-1227-4188-8be0-766b1fdcd108"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 11:38:38 crc kubenswrapper[4867]: I0126 11:38:38.033091 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2f4c7973-1227-4188-8be0-766b1fdcd108-kube-api-access-97j7x" (OuterVolumeSpecName: "kube-api-access-97j7x") pod "2f4c7973-1227-4188-8be0-766b1fdcd108" (UID: "2f4c7973-1227-4188-8be0-766b1fdcd108"). InnerVolumeSpecName "kube-api-access-97j7x". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 11:38:38 crc kubenswrapper[4867]: I0126 11:38:38.056776 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2f4c7973-1227-4188-8be0-766b1fdcd108-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "2f4c7973-1227-4188-8be0-766b1fdcd108" (UID: "2f4c7973-1227-4188-8be0-766b1fdcd108"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 11:38:38 crc kubenswrapper[4867]: I0126 11:38:38.118777 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2f4c7973-1227-4188-8be0-766b1fdcd108-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2f4c7973-1227-4188-8be0-766b1fdcd108" (UID: "2f4c7973-1227-4188-8be0-766b1fdcd108"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 11:38:38 crc kubenswrapper[4867]: I0126 11:38:38.123972 4867 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2f4c7973-1227-4188-8be0-766b1fdcd108-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 11:38:38 crc kubenswrapper[4867]: I0126 11:38:38.124001 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-97j7x\" (UniqueName: \"kubernetes.io/projected/2f4c7973-1227-4188-8be0-766b1fdcd108-kube-api-access-97j7x\") on node \"crc\" DevicePath \"\"" Jan 26 11:38:38 crc kubenswrapper[4867]: I0126 11:38:38.124013 4867 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2f4c7973-1227-4188-8be0-766b1fdcd108-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 11:38:38 crc kubenswrapper[4867]: I0126 11:38:38.124021 4867 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/2f4c7973-1227-4188-8be0-766b1fdcd108-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 26 11:38:38 crc kubenswrapper[4867]: I0126 11:38:38.124031 4867 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2f4c7973-1227-4188-8be0-766b1fdcd108-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 26 11:38:38 crc kubenswrapper[4867]: I0126 11:38:38.124040 4867 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/2f4c7973-1227-4188-8be0-766b1fdcd108-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 26 11:38:38 crc kubenswrapper[4867]: I0126 11:38:38.152098 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2f4c7973-1227-4188-8be0-766b1fdcd108-config-data" (OuterVolumeSpecName: "config-data") pod "2f4c7973-1227-4188-8be0-766b1fdcd108" (UID: "2f4c7973-1227-4188-8be0-766b1fdcd108"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 11:38:38 crc kubenswrapper[4867]: I0126 11:38:38.225867 4867 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2f4c7973-1227-4188-8be0-766b1fdcd108-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 11:38:38 crc kubenswrapper[4867]: I0126 11:38:38.790104 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-inspector-0" event={"ID":"6e49ec18-452c-47df-a0c9-ea52cdced830","Type":"ContainerStarted","Data":"ce31e9d4f88daca353bcd669e357d558e837731bc606de67f7fdc783191177f4"} Jan 26 11:38:38 crc kubenswrapper[4867]: I0126 11:38:38.790498 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ironic-inspector-0" Jan 26 11:38:38 crc kubenswrapper[4867]: I0126 11:38:38.795661 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2f4c7973-1227-4188-8be0-766b1fdcd108","Type":"ContainerDied","Data":"49d991eef2f1bfaa9f99d6aed12b341f83259bed4d70c34a436b9bc80bd19200"} Jan 26 11:38:38 crc kubenswrapper[4867]: I0126 11:38:38.795720 4867 scope.go:117] "RemoveContainer" containerID="26e9e451ae91f4253df1646fe4946e41b8df8c49b705fbe25320970913bf42f2" Jan 26 11:38:38 crc kubenswrapper[4867]: I0126 11:38:38.795841 4867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 26 11:38:38 crc kubenswrapper[4867]: I0126 11:38:38.834359 4867 scope.go:117] "RemoveContainer" containerID="9a215babac91b10ca8a37abe89cf9202617c9a3aa29c51b4eb2c5c6ab3de9118" Jan 26 11:38:38 crc kubenswrapper[4867]: I0126 11:38:38.840214 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ironic-inspector-0" podStartSLOduration=8.447080704 podStartE2EDuration="14.840201197s" podCreationTimestamp="2026-01-26 11:38:24 +0000 UTC" firstStartedPulling="2026-01-26 11:38:27.606720014 +0000 UTC m=+1257.305294924" lastFinishedPulling="2026-01-26 11:38:33.999840507 +0000 UTC m=+1263.698415417" observedRunningTime="2026-01-26 11:38:38.825895395 +0000 UTC m=+1268.524470305" watchObservedRunningTime="2026-01-26 11:38:38.840201197 +0000 UTC m=+1268.538776117" Jan 26 11:38:38 crc kubenswrapper[4867]: I0126 11:38:38.883518 4867 scope.go:117] "RemoveContainer" containerID="cde8955787e9969f60a51eeb221b5ee8c78568bd0a74b38e2138e607f21f48cd" Jan 26 11:38:38 crc kubenswrapper[4867]: I0126 11:38:38.896378 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 26 11:38:38 crc kubenswrapper[4867]: I0126 11:38:38.927292 4867 scope.go:117] "RemoveContainer" containerID="28b16594aebd74b931d1a3afbc9fe1493c2ecc892d0584d131a74abfb243380e" Jan 26 11:38:38 crc kubenswrapper[4867]: I0126 11:38:38.936998 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 26 11:38:38 crc kubenswrapper[4867]: I0126 11:38:38.950046 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 26 11:38:38 crc kubenswrapper[4867]: E0126 11:38:38.950642 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2f4c7973-1227-4188-8be0-766b1fdcd108" containerName="sg-core" Jan 26 11:38:38 crc kubenswrapper[4867]: I0126 11:38:38.950736 4867 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="2f4c7973-1227-4188-8be0-766b1fdcd108" containerName="sg-core" Jan 26 11:38:38 crc kubenswrapper[4867]: E0126 11:38:38.950798 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2f4c7973-1227-4188-8be0-766b1fdcd108" containerName="proxy-httpd" Jan 26 11:38:38 crc kubenswrapper[4867]: I0126 11:38:38.950895 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="2f4c7973-1227-4188-8be0-766b1fdcd108" containerName="proxy-httpd" Jan 26 11:38:38 crc kubenswrapper[4867]: E0126 11:38:38.951023 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2f4c7973-1227-4188-8be0-766b1fdcd108" containerName="ceilometer-notification-agent" Jan 26 11:38:38 crc kubenswrapper[4867]: I0126 11:38:38.951095 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="2f4c7973-1227-4188-8be0-766b1fdcd108" containerName="ceilometer-notification-agent" Jan 26 11:38:38 crc kubenswrapper[4867]: E0126 11:38:38.951167 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2f4c7973-1227-4188-8be0-766b1fdcd108" containerName="ceilometer-central-agent" Jan 26 11:38:38 crc kubenswrapper[4867]: I0126 11:38:38.951273 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="2f4c7973-1227-4188-8be0-766b1fdcd108" containerName="ceilometer-central-agent" Jan 26 11:38:38 crc kubenswrapper[4867]: I0126 11:38:38.951536 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="2f4c7973-1227-4188-8be0-766b1fdcd108" containerName="proxy-httpd" Jan 26 11:38:38 crc kubenswrapper[4867]: I0126 11:38:38.951618 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="2f4c7973-1227-4188-8be0-766b1fdcd108" containerName="sg-core" Jan 26 11:38:38 crc kubenswrapper[4867]: I0126 11:38:38.951675 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="2f4c7973-1227-4188-8be0-766b1fdcd108" containerName="ceilometer-notification-agent" Jan 26 11:38:38 crc kubenswrapper[4867]: I0126 11:38:38.951742 4867 
memory_manager.go:354] "RemoveStaleState removing state" podUID="2f4c7973-1227-4188-8be0-766b1fdcd108" containerName="ceilometer-central-agent" Jan 26 11:38:38 crc kubenswrapper[4867]: I0126 11:38:38.960823 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 26 11:38:38 crc kubenswrapper[4867]: I0126 11:38:38.963719 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 26 11:38:38 crc kubenswrapper[4867]: I0126 11:38:38.965312 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 26 11:38:38 crc kubenswrapper[4867]: I0126 11:38:38.966162 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 26 11:38:39 crc kubenswrapper[4867]: I0126 11:38:39.148947 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ecf24fde-403d-454c-800d-cf015a7fd122-scripts\") pod \"ceilometer-0\" (UID: \"ecf24fde-403d-454c-800d-cf015a7fd122\") " pod="openstack/ceilometer-0" Jan 26 11:38:39 crc kubenswrapper[4867]: I0126 11:38:39.149041 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ecf24fde-403d-454c-800d-cf015a7fd122-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"ecf24fde-403d-454c-800d-cf015a7fd122\") " pod="openstack/ceilometer-0" Jan 26 11:38:39 crc kubenswrapper[4867]: I0126 11:38:39.149113 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ecf24fde-403d-454c-800d-cf015a7fd122-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"ecf24fde-403d-454c-800d-cf015a7fd122\") " pod="openstack/ceilometer-0" Jan 26 11:38:39 crc kubenswrapper[4867]: I0126 11:38:39.149158 4867 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ecf24fde-403d-454c-800d-cf015a7fd122-config-data\") pod \"ceilometer-0\" (UID: \"ecf24fde-403d-454c-800d-cf015a7fd122\") " pod="openstack/ceilometer-0" Jan 26 11:38:39 crc kubenswrapper[4867]: I0126 11:38:39.149194 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ecf24fde-403d-454c-800d-cf015a7fd122-run-httpd\") pod \"ceilometer-0\" (UID: \"ecf24fde-403d-454c-800d-cf015a7fd122\") " pod="openstack/ceilometer-0" Jan 26 11:38:39 crc kubenswrapper[4867]: I0126 11:38:39.149263 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ecf24fde-403d-454c-800d-cf015a7fd122-log-httpd\") pod \"ceilometer-0\" (UID: \"ecf24fde-403d-454c-800d-cf015a7fd122\") " pod="openstack/ceilometer-0" Jan 26 11:38:39 crc kubenswrapper[4867]: I0126 11:38:39.149304 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-spb44\" (UniqueName: \"kubernetes.io/projected/ecf24fde-403d-454c-800d-cf015a7fd122-kube-api-access-spb44\") pod \"ceilometer-0\" (UID: \"ecf24fde-403d-454c-800d-cf015a7fd122\") " pod="openstack/ceilometer-0" Jan 26 11:38:39 crc kubenswrapper[4867]: I0126 11:38:39.250553 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ecf24fde-403d-454c-800d-cf015a7fd122-scripts\") pod \"ceilometer-0\" (UID: \"ecf24fde-403d-454c-800d-cf015a7fd122\") " pod="openstack/ceilometer-0" Jan 26 11:38:39 crc kubenswrapper[4867]: I0126 11:38:39.250889 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/ecf24fde-403d-454c-800d-cf015a7fd122-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"ecf24fde-403d-454c-800d-cf015a7fd122\") " pod="openstack/ceilometer-0" Jan 26 11:38:39 crc kubenswrapper[4867]: I0126 11:38:39.252004 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ecf24fde-403d-454c-800d-cf015a7fd122-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"ecf24fde-403d-454c-800d-cf015a7fd122\") " pod="openstack/ceilometer-0" Jan 26 11:38:39 crc kubenswrapper[4867]: I0126 11:38:39.252188 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ecf24fde-403d-454c-800d-cf015a7fd122-config-data\") pod \"ceilometer-0\" (UID: \"ecf24fde-403d-454c-800d-cf015a7fd122\") " pod="openstack/ceilometer-0" Jan 26 11:38:39 crc kubenswrapper[4867]: I0126 11:38:39.252356 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ecf24fde-403d-454c-800d-cf015a7fd122-run-httpd\") pod \"ceilometer-0\" (UID: \"ecf24fde-403d-454c-800d-cf015a7fd122\") " pod="openstack/ceilometer-0" Jan 26 11:38:39 crc kubenswrapper[4867]: I0126 11:38:39.252495 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ecf24fde-403d-454c-800d-cf015a7fd122-log-httpd\") pod \"ceilometer-0\" (UID: \"ecf24fde-403d-454c-800d-cf015a7fd122\") " pod="openstack/ceilometer-0" Jan 26 11:38:39 crc kubenswrapper[4867]: I0126 11:38:39.252632 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-spb44\" (UniqueName: \"kubernetes.io/projected/ecf24fde-403d-454c-800d-cf015a7fd122-kube-api-access-spb44\") pod \"ceilometer-0\" (UID: \"ecf24fde-403d-454c-800d-cf015a7fd122\") " pod="openstack/ceilometer-0" Jan 26 11:38:39 crc kubenswrapper[4867]: 
I0126 11:38:39.252940 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ecf24fde-403d-454c-800d-cf015a7fd122-run-httpd\") pod \"ceilometer-0\" (UID: \"ecf24fde-403d-454c-800d-cf015a7fd122\") " pod="openstack/ceilometer-0" Jan 26 11:38:39 crc kubenswrapper[4867]: I0126 11:38:39.253261 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ecf24fde-403d-454c-800d-cf015a7fd122-log-httpd\") pod \"ceilometer-0\" (UID: \"ecf24fde-403d-454c-800d-cf015a7fd122\") " pod="openstack/ceilometer-0" Jan 26 11:38:39 crc kubenswrapper[4867]: I0126 11:38:39.264933 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ecf24fde-403d-454c-800d-cf015a7fd122-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"ecf24fde-403d-454c-800d-cf015a7fd122\") " pod="openstack/ceilometer-0" Jan 26 11:38:39 crc kubenswrapper[4867]: I0126 11:38:39.265620 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ecf24fde-403d-454c-800d-cf015a7fd122-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"ecf24fde-403d-454c-800d-cf015a7fd122\") " pod="openstack/ceilometer-0" Jan 26 11:38:39 crc kubenswrapper[4867]: I0126 11:38:39.268321 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ecf24fde-403d-454c-800d-cf015a7fd122-config-data\") pod \"ceilometer-0\" (UID: \"ecf24fde-403d-454c-800d-cf015a7fd122\") " pod="openstack/ceilometer-0" Jan 26 11:38:39 crc kubenswrapper[4867]: I0126 11:38:39.286349 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-spb44\" (UniqueName: \"kubernetes.io/projected/ecf24fde-403d-454c-800d-cf015a7fd122-kube-api-access-spb44\") pod \"ceilometer-0\" (UID: 
\"ecf24fde-403d-454c-800d-cf015a7fd122\") " pod="openstack/ceilometer-0" Jan 26 11:38:39 crc kubenswrapper[4867]: I0126 11:38:39.286499 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ecf24fde-403d-454c-800d-cf015a7fd122-scripts\") pod \"ceilometer-0\" (UID: \"ecf24fde-403d-454c-800d-cf015a7fd122\") " pod="openstack/ceilometer-0" Jan 26 11:38:39 crc kubenswrapper[4867]: I0126 11:38:39.585456 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 26 11:38:40 crc kubenswrapper[4867]: I0126 11:38:40.062108 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 26 11:38:40 crc kubenswrapper[4867]: I0126 11:38:40.573540 4867 scope.go:117] "RemoveContainer" containerID="ef9186f8842dd4e425fbd0521e208fbd9f96c9e93772b40398c778d1e633ecb6" Jan 26 11:38:40 crc kubenswrapper[4867]: I0126 11:38:40.577530 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2f4c7973-1227-4188-8be0-766b1fdcd108" path="/var/lib/kubelet/pods/2f4c7973-1227-4188-8be0-766b1fdcd108/volumes" Jan 26 11:38:40 crc kubenswrapper[4867]: I0126 11:38:40.595844 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ironic-inspector-0" Jan 26 11:38:40 crc kubenswrapper[4867]: I0126 11:38:40.595964 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ironic-inspector-0" Jan 26 11:38:40 crc kubenswrapper[4867]: I0126 11:38:40.811710 4867 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 26 11:38:40 crc kubenswrapper[4867]: I0126 11:38:40.819288 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ecf24fde-403d-454c-800d-cf015a7fd122","Type":"ContainerStarted","Data":"f7620acc2b3dd06a489178258f2b085c4c2c21cc0a27d47649a8dc3eb693e040"} Jan 26 11:38:40 crc kubenswrapper[4867]: I0126 
11:38:40.819334 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ecf24fde-403d-454c-800d-cf015a7fd122","Type":"ContainerStarted","Data":"363da85ea79f54ba7467621cbc2730380dd6dbbbab68408b13c7037093f1c449"} Jan 26 11:38:40 crc kubenswrapper[4867]: I0126 11:38:40.822040 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-neutron-agent-795fb7c76b-9ndwh" event={"ID":"a2167905-2856-4125-81fd-a2430fe558f9","Type":"ContainerStarted","Data":"f6256bd71627a09be606483dad246bfdbe5d419a8e586a59a182396bb6d1f10d"} Jan 26 11:38:40 crc kubenswrapper[4867]: I0126 11:38:40.822236 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ironic-neutron-agent-795fb7c76b-9ndwh" Jan 26 11:38:40 crc kubenswrapper[4867]: I0126 11:38:40.825572 4867 generic.go:334] "Generic (PLEG): container finished" podID="6e49ec18-452c-47df-a0c9-ea52cdced830" containerID="7588da4b61c4d8e49c5984ad02a0cfabc66fde2275c7f3311aa13d97f8d362dd" exitCode=0 Jan 26 11:38:40 crc kubenswrapper[4867]: I0126 11:38:40.825611 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-inspector-0" event={"ID":"6e49ec18-452c-47df-a0c9-ea52cdced830","Type":"ContainerDied","Data":"7588da4b61c4d8e49c5984ad02a0cfabc66fde2275c7f3311aa13d97f8d362dd"} Jan 26 11:38:40 crc kubenswrapper[4867]: I0126 11:38:40.826325 4867 scope.go:117] "RemoveContainer" containerID="7588da4b61c4d8e49c5984ad02a0cfabc66fde2275c7f3311aa13d97f8d362dd" Jan 26 11:38:42 crc kubenswrapper[4867]: I0126 11:38:42.056421 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-inspector-0" event={"ID":"6e49ec18-452c-47df-a0c9-ea52cdced830","Type":"ContainerStarted","Data":"60604b0dd7ced7fa024492299a58875d07e91cd790cb7fd60c8db4ffa39ab4b9"} Jan 26 11:38:42 crc kubenswrapper[4867]: I0126 11:38:42.061554 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ironic-inspector-0" Jan 26 11:38:43 crc 
kubenswrapper[4867]: I0126 11:38:43.056350 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ironic-neutron-agent-795fb7c76b-9ndwh" Jan 26 11:38:43 crc kubenswrapper[4867]: I0126 11:38:43.060782 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ecf24fde-403d-454c-800d-cf015a7fd122","Type":"ContainerStarted","Data":"8d53d5687ef68f158817816ce6d3955665c375887ed35ef562a0202a1e0ca6c5"} Jan 26 11:38:43 crc kubenswrapper[4867]: I0126 11:38:43.060835 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ecf24fde-403d-454c-800d-cf015a7fd122","Type":"ContainerStarted","Data":"15d6eb9e8485c259d28d20eb3eac1dc35fb587971692a7ece1e63e912fe957b6"} Jan 26 11:38:45 crc kubenswrapper[4867]: I0126 11:38:45.592364 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ironic-inspector-0" Jan 26 11:38:45 crc kubenswrapper[4867]: I0126 11:38:45.593995 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ironic-inspector-0" Jan 26 11:38:45 crc kubenswrapper[4867]: I0126 11:38:45.594099 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ironic-inspector-0" Jan 26 11:38:45 crc kubenswrapper[4867]: I0126 11:38:45.618194 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ironic-inspector-0" Jan 26 11:38:45 crc kubenswrapper[4867]: I0126 11:38:45.620457 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ironic-inspector-0" Jan 26 11:38:46 crc kubenswrapper[4867]: I0126 11:38:46.103702 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ironic-inspector-0" Jan 26 11:38:46 crc kubenswrapper[4867]: I0126 11:38:46.115372 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ironic-inspector-0" Jan 26 11:38:47 crc 
kubenswrapper[4867]: I0126 11:38:47.099780 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ecf24fde-403d-454c-800d-cf015a7fd122","Type":"ContainerStarted","Data":"d19a1664eab6c70199138ba78e6a6763bf2887bdb8b12ed3ae159dfc7691b6f6"} Jan 26 11:38:47 crc kubenswrapper[4867]: I0126 11:38:47.100105 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 26 11:38:47 crc kubenswrapper[4867]: I0126 11:38:47.101925 4867 generic.go:334] "Generic (PLEG): container finished" podID="a2167905-2856-4125-81fd-a2430fe558f9" containerID="f6256bd71627a09be606483dad246bfdbe5d419a8e586a59a182396bb6d1f10d" exitCode=1 Jan 26 11:38:47 crc kubenswrapper[4867]: I0126 11:38:47.101997 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-neutron-agent-795fb7c76b-9ndwh" event={"ID":"a2167905-2856-4125-81fd-a2430fe558f9","Type":"ContainerDied","Data":"f6256bd71627a09be606483dad246bfdbe5d419a8e586a59a182396bb6d1f10d"} Jan 26 11:38:47 crc kubenswrapper[4867]: I0126 11:38:47.102034 4867 scope.go:117] "RemoveContainer" containerID="ef9186f8842dd4e425fbd0521e208fbd9f96c9e93772b40398c778d1e633ecb6" Jan 26 11:38:47 crc kubenswrapper[4867]: I0126 11:38:47.102371 4867 scope.go:117] "RemoveContainer" containerID="f6256bd71627a09be606483dad246bfdbe5d419a8e586a59a182396bb6d1f10d" Jan 26 11:38:47 crc kubenswrapper[4867]: E0126 11:38:47.102684 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ironic-neutron-agent\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ironic-neutron-agent pod=ironic-neutron-agent-795fb7c76b-9ndwh_openstack(a2167905-2856-4125-81fd-a2430fe558f9)\"" pod="openstack/ironic-neutron-agent-795fb7c76b-9ndwh" podUID="a2167905-2856-4125-81fd-a2430fe558f9" Jan 26 11:38:47 crc kubenswrapper[4867]: I0126 11:38:47.118735 4867 generic.go:334] "Generic (PLEG): container finished" 
podID="6e49ec18-452c-47df-a0c9-ea52cdced830" containerID="60604b0dd7ced7fa024492299a58875d07e91cd790cb7fd60c8db4ffa39ab4b9" exitCode=0 Jan 26 11:38:47 crc kubenswrapper[4867]: I0126 11:38:47.118769 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-inspector-0" event={"ID":"6e49ec18-452c-47df-a0c9-ea52cdced830","Type":"ContainerDied","Data":"60604b0dd7ced7fa024492299a58875d07e91cd790cb7fd60c8db4ffa39ab4b9"} Jan 26 11:38:47 crc kubenswrapper[4867]: I0126 11:38:47.119682 4867 scope.go:117] "RemoveContainer" containerID="60604b0dd7ced7fa024492299a58875d07e91cd790cb7fd60c8db4ffa39ab4b9" Jan 26 11:38:47 crc kubenswrapper[4867]: E0126 11:38:47.120009 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ironic-inspector\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ironic-inspector pod=ironic-inspector-0_openstack(6e49ec18-452c-47df-a0c9-ea52cdced830)\"" pod="openstack/ironic-inspector-0" podUID="6e49ec18-452c-47df-a0c9-ea52cdced830" Jan 26 11:38:47 crc kubenswrapper[4867]: I0126 11:38:47.142575 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=3.321650384 podStartE2EDuration="9.142552501s" podCreationTimestamp="2026-01-26 11:38:38 +0000 UTC" firstStartedPulling="2026-01-26 11:38:40.070143845 +0000 UTC m=+1269.768718755" lastFinishedPulling="2026-01-26 11:38:45.891045952 +0000 UTC m=+1275.589620872" observedRunningTime="2026-01-26 11:38:47.128538726 +0000 UTC m=+1276.827113636" watchObservedRunningTime="2026-01-26 11:38:47.142552501 +0000 UTC m=+1276.841127411" Jan 26 11:38:47 crc kubenswrapper[4867]: I0126 11:38:47.179761 4867 scope.go:117] "RemoveContainer" containerID="7588da4b61c4d8e49c5984ad02a0cfabc66fde2275c7f3311aa13d97f8d362dd" Jan 26 11:38:48 crc kubenswrapper[4867]: I0126 11:38:48.027406 4867 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" 
pod="openstack/ironic-neutron-agent-795fb7c76b-9ndwh" Jan 26 11:38:48 crc kubenswrapper[4867]: I0126 11:38:48.027463 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ironic-neutron-agent-795fb7c76b-9ndwh" Jan 26 11:38:48 crc kubenswrapper[4867]: I0126 11:38:48.137000 4867 scope.go:117] "RemoveContainer" containerID="f6256bd71627a09be606483dad246bfdbe5d419a8e586a59a182396bb6d1f10d" Jan 26 11:38:48 crc kubenswrapper[4867]: E0126 11:38:48.137486 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ironic-neutron-agent\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ironic-neutron-agent pod=ironic-neutron-agent-795fb7c76b-9ndwh_openstack(a2167905-2856-4125-81fd-a2430fe558f9)\"" pod="openstack/ironic-neutron-agent-795fb7c76b-9ndwh" podUID="a2167905-2856-4125-81fd-a2430fe558f9" Jan 26 11:38:48 crc kubenswrapper[4867]: I0126 11:38:48.145630 4867 scope.go:117] "RemoveContainer" containerID="60604b0dd7ced7fa024492299a58875d07e91cd790cb7fd60c8db4ffa39ab4b9" Jan 26 11:38:48 crc kubenswrapper[4867]: E0126 11:38:48.145884 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ironic-inspector\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ironic-inspector pod=ironic-inspector-0_openstack(6e49ec18-452c-47df-a0c9-ea52cdced830)\"" pod="openstack/ironic-inspector-0" podUID="6e49ec18-452c-47df-a0c9-ea52cdced830" Jan 26 11:38:50 crc kubenswrapper[4867]: I0126 11:38:50.591959 4867 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/ironic-inspector-0" Jan 26 11:38:50 crc kubenswrapper[4867]: I0126 11:38:50.592516 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ironic-inspector-0" Jan 26 11:38:50 crc kubenswrapper[4867]: I0126 11:38:50.593637 4867 scope.go:117] "RemoveContainer" containerID="60604b0dd7ced7fa024492299a58875d07e91cd790cb7fd60c8db4ffa39ab4b9" 
Jan 26 11:38:50 crc kubenswrapper[4867]: E0126 11:38:50.594102 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ironic-inspector\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ironic-inspector pod=ironic-inspector-0_openstack(6e49ec18-452c-47df-a0c9-ea52cdced830)\"" pod="openstack/ironic-inspector-0" podUID="6e49ec18-452c-47df-a0c9-ea52cdced830" Jan 26 11:38:50 crc kubenswrapper[4867]: I0126 11:38:50.597018 4867 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/ironic-inspector-0" podUID="6e49ec18-452c-47df-a0c9-ea52cdced830" containerName="ironic-inspector-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Jan 26 11:38:50 crc kubenswrapper[4867]: I0126 11:38:50.597545 4867 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ironic-inspector-0" podUID="6e49ec18-452c-47df-a0c9-ea52cdced830" containerName="ironic-inspector-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Jan 26 11:38:51 crc kubenswrapper[4867]: I0126 11:38:51.800461 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 26 11:38:51 crc kubenswrapper[4867]: I0126 11:38:51.801054 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="ecf24fde-403d-454c-800d-cf015a7fd122" containerName="ceilometer-central-agent" containerID="cri-o://f7620acc2b3dd06a489178258f2b085c4c2c21cc0a27d47649a8dc3eb693e040" gracePeriod=30 Jan 26 11:38:51 crc kubenswrapper[4867]: I0126 11:38:51.801135 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="ecf24fde-403d-454c-800d-cf015a7fd122" containerName="proxy-httpd" containerID="cri-o://d19a1664eab6c70199138ba78e6a6763bf2887bdb8b12ed3ae159dfc7691b6f6" gracePeriod=30 Jan 26 11:38:51 crc kubenswrapper[4867]: I0126 11:38:51.801134 4867 kuberuntime_container.go:808] "Killing container 
with a grace period" pod="openstack/ceilometer-0" podUID="ecf24fde-403d-454c-800d-cf015a7fd122" containerName="sg-core" containerID="cri-o://8d53d5687ef68f158817816ce6d3955665c375887ed35ef562a0202a1e0ca6c5" gracePeriod=30 Jan 26 11:38:51 crc kubenswrapper[4867]: I0126 11:38:51.801121 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="ecf24fde-403d-454c-800d-cf015a7fd122" containerName="ceilometer-notification-agent" containerID="cri-o://15d6eb9e8485c259d28d20eb3eac1dc35fb587971692a7ece1e63e912fe957b6" gracePeriod=30 Jan 26 11:38:52 crc kubenswrapper[4867]: I0126 11:38:52.193142 4867 generic.go:334] "Generic (PLEG): container finished" podID="ecf24fde-403d-454c-800d-cf015a7fd122" containerID="d19a1664eab6c70199138ba78e6a6763bf2887bdb8b12ed3ae159dfc7691b6f6" exitCode=0 Jan 26 11:38:52 crc kubenswrapper[4867]: I0126 11:38:52.193172 4867 generic.go:334] "Generic (PLEG): container finished" podID="ecf24fde-403d-454c-800d-cf015a7fd122" containerID="8d53d5687ef68f158817816ce6d3955665c375887ed35ef562a0202a1e0ca6c5" exitCode=2 Jan 26 11:38:52 crc kubenswrapper[4867]: I0126 11:38:52.193192 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ecf24fde-403d-454c-800d-cf015a7fd122","Type":"ContainerDied","Data":"d19a1664eab6c70199138ba78e6a6763bf2887bdb8b12ed3ae159dfc7691b6f6"} Jan 26 11:38:52 crc kubenswrapper[4867]: I0126 11:38:52.193232 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ecf24fde-403d-454c-800d-cf015a7fd122","Type":"ContainerDied","Data":"8d53d5687ef68f158817816ce6d3955665c375887ed35ef562a0202a1e0ca6c5"} Jan 26 11:38:53 crc kubenswrapper[4867]: I0126 11:38:53.203588 4867 generic.go:334] "Generic (PLEG): container finished" podID="ecf24fde-403d-454c-800d-cf015a7fd122" containerID="15d6eb9e8485c259d28d20eb3eac1dc35fb587971692a7ece1e63e912fe957b6" exitCode=0 Jan 26 11:38:53 crc kubenswrapper[4867]: I0126 
11:38:53.203644 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ecf24fde-403d-454c-800d-cf015a7fd122","Type":"ContainerDied","Data":"15d6eb9e8485c259d28d20eb3eac1dc35fb587971692a7ece1e63e912fe957b6"} Jan 26 11:38:53 crc kubenswrapper[4867]: I0126 11:38:53.206691 4867 generic.go:334] "Generic (PLEG): container finished" podID="39653949-816a-4237-91ab-e0a3cbdc1ff9" containerID="ce71285d4baf1e7a59b451bb335bb3a4518efd9973a1febff4f9e67e53860a51" exitCode=0 Jan 26 11:38:53 crc kubenswrapper[4867]: I0126 11:38:53.206730 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-lh5xw" event={"ID":"39653949-816a-4237-91ab-e0a3cbdc1ff9","Type":"ContainerDied","Data":"ce71285d4baf1e7a59b451bb335bb3a4518efd9973a1febff4f9e67e53860a51"} Jan 26 11:38:54 crc kubenswrapper[4867]: I0126 11:38:54.606334 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-lh5xw" Jan 26 11:38:54 crc kubenswrapper[4867]: I0126 11:38:54.734976 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/39653949-816a-4237-91ab-e0a3cbdc1ff9-config-data\") pod \"39653949-816a-4237-91ab-e0a3cbdc1ff9\" (UID: \"39653949-816a-4237-91ab-e0a3cbdc1ff9\") " Jan 26 11:38:54 crc kubenswrapper[4867]: I0126 11:38:54.735067 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/39653949-816a-4237-91ab-e0a3cbdc1ff9-scripts\") pod \"39653949-816a-4237-91ab-e0a3cbdc1ff9\" (UID: \"39653949-816a-4237-91ab-e0a3cbdc1ff9\") " Jan 26 11:38:54 crc kubenswrapper[4867]: I0126 11:38:54.735127 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4whpv\" (UniqueName: \"kubernetes.io/projected/39653949-816a-4237-91ab-e0a3cbdc1ff9-kube-api-access-4whpv\") pod 
\"39653949-816a-4237-91ab-e0a3cbdc1ff9\" (UID: \"39653949-816a-4237-91ab-e0a3cbdc1ff9\") " Jan 26 11:38:54 crc kubenswrapper[4867]: I0126 11:38:54.735160 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/39653949-816a-4237-91ab-e0a3cbdc1ff9-combined-ca-bundle\") pod \"39653949-816a-4237-91ab-e0a3cbdc1ff9\" (UID: \"39653949-816a-4237-91ab-e0a3cbdc1ff9\") " Jan 26 11:38:54 crc kubenswrapper[4867]: I0126 11:38:54.740745 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/39653949-816a-4237-91ab-e0a3cbdc1ff9-scripts" (OuterVolumeSpecName: "scripts") pod "39653949-816a-4237-91ab-e0a3cbdc1ff9" (UID: "39653949-816a-4237-91ab-e0a3cbdc1ff9"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 11:38:54 crc kubenswrapper[4867]: I0126 11:38:54.741344 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/39653949-816a-4237-91ab-e0a3cbdc1ff9-kube-api-access-4whpv" (OuterVolumeSpecName: "kube-api-access-4whpv") pod "39653949-816a-4237-91ab-e0a3cbdc1ff9" (UID: "39653949-816a-4237-91ab-e0a3cbdc1ff9"). InnerVolumeSpecName "kube-api-access-4whpv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 11:38:54 crc kubenswrapper[4867]: I0126 11:38:54.761990 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/39653949-816a-4237-91ab-e0a3cbdc1ff9-config-data" (OuterVolumeSpecName: "config-data") pod "39653949-816a-4237-91ab-e0a3cbdc1ff9" (UID: "39653949-816a-4237-91ab-e0a3cbdc1ff9"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 11:38:54 crc kubenswrapper[4867]: I0126 11:38:54.767566 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/39653949-816a-4237-91ab-e0a3cbdc1ff9-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "39653949-816a-4237-91ab-e0a3cbdc1ff9" (UID: "39653949-816a-4237-91ab-e0a3cbdc1ff9"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 11:38:54 crc kubenswrapper[4867]: I0126 11:38:54.837081 4867 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/39653949-816a-4237-91ab-e0a3cbdc1ff9-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 11:38:54 crc kubenswrapper[4867]: I0126 11:38:54.837123 4867 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/39653949-816a-4237-91ab-e0a3cbdc1ff9-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 11:38:54 crc kubenswrapper[4867]: I0126 11:38:54.837133 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4whpv\" (UniqueName: \"kubernetes.io/projected/39653949-816a-4237-91ab-e0a3cbdc1ff9-kube-api-access-4whpv\") on node \"crc\" DevicePath \"\"" Jan 26 11:38:54 crc kubenswrapper[4867]: I0126 11:38:54.837143 4867 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/39653949-816a-4237-91ab-e0a3cbdc1ff9-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 11:38:55 crc kubenswrapper[4867]: I0126 11:38:55.223313 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-lh5xw" event={"ID":"39653949-816a-4237-91ab-e0a3cbdc1ff9","Type":"ContainerDied","Data":"b37fafae3f697c5624c4abfa439b26e94e2eceeebcff9124f4ecc3f976120e8b"} Jan 26 11:38:55 crc kubenswrapper[4867]: I0126 11:38:55.223360 4867 pod_container_deletor.go:80] "Container not 
found in pod's containers" containerID="b37fafae3f697c5624c4abfa439b26e94e2eceeebcff9124f4ecc3f976120e8b" Jan 26 11:38:55 crc kubenswrapper[4867]: I0126 11:38:55.223375 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-lh5xw" Jan 26 11:38:55 crc kubenswrapper[4867]: I0126 11:38:55.324261 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-0"] Jan 26 11:38:55 crc kubenswrapper[4867]: E0126 11:38:55.324658 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="39653949-816a-4237-91ab-e0a3cbdc1ff9" containerName="nova-cell0-conductor-db-sync" Jan 26 11:38:55 crc kubenswrapper[4867]: I0126 11:38:55.324675 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="39653949-816a-4237-91ab-e0a3cbdc1ff9" containerName="nova-cell0-conductor-db-sync" Jan 26 11:38:55 crc kubenswrapper[4867]: I0126 11:38:55.324837 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="39653949-816a-4237-91ab-e0a3cbdc1ff9" containerName="nova-cell0-conductor-db-sync" Jan 26 11:38:55 crc kubenswrapper[4867]: I0126 11:38:55.325432 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-0" Jan 26 11:38:55 crc kubenswrapper[4867]: I0126 11:38:55.328123 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Jan 26 11:38:55 crc kubenswrapper[4867]: I0126 11:38:55.336259 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-sj95f" Jan 26 11:38:55 crc kubenswrapper[4867]: I0126 11:38:55.340275 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Jan 26 11:38:55 crc kubenswrapper[4867]: I0126 11:38:55.450664 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kmcrn\" (UniqueName: \"kubernetes.io/projected/8ad13a23-f9ee-40f2-aa88-3940ced23279-kube-api-access-kmcrn\") pod \"nova-cell0-conductor-0\" (UID: \"8ad13a23-f9ee-40f2-aa88-3940ced23279\") " pod="openstack/nova-cell0-conductor-0" Jan 26 11:38:55 crc kubenswrapper[4867]: I0126 11:38:55.450782 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8ad13a23-f9ee-40f2-aa88-3940ced23279-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"8ad13a23-f9ee-40f2-aa88-3940ced23279\") " pod="openstack/nova-cell0-conductor-0" Jan 26 11:38:55 crc kubenswrapper[4867]: I0126 11:38:55.450848 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8ad13a23-f9ee-40f2-aa88-3940ced23279-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"8ad13a23-f9ee-40f2-aa88-3940ced23279\") " pod="openstack/nova-cell0-conductor-0" Jan 26 11:38:55 crc kubenswrapper[4867]: I0126 11:38:55.552711 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/8ad13a23-f9ee-40f2-aa88-3940ced23279-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"8ad13a23-f9ee-40f2-aa88-3940ced23279\") " pod="openstack/nova-cell0-conductor-0" Jan 26 11:38:55 crc kubenswrapper[4867]: I0126 11:38:55.552801 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8ad13a23-f9ee-40f2-aa88-3940ced23279-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"8ad13a23-f9ee-40f2-aa88-3940ced23279\") " pod="openstack/nova-cell0-conductor-0" Jan 26 11:38:55 crc kubenswrapper[4867]: I0126 11:38:55.552901 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kmcrn\" (UniqueName: \"kubernetes.io/projected/8ad13a23-f9ee-40f2-aa88-3940ced23279-kube-api-access-kmcrn\") pod \"nova-cell0-conductor-0\" (UID: \"8ad13a23-f9ee-40f2-aa88-3940ced23279\") " pod="openstack/nova-cell0-conductor-0" Jan 26 11:38:55 crc kubenswrapper[4867]: I0126 11:38:55.562475 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8ad13a23-f9ee-40f2-aa88-3940ced23279-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"8ad13a23-f9ee-40f2-aa88-3940ced23279\") " pod="openstack/nova-cell0-conductor-0" Jan 26 11:38:55 crc kubenswrapper[4867]: I0126 11:38:55.567168 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8ad13a23-f9ee-40f2-aa88-3940ced23279-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"8ad13a23-f9ee-40f2-aa88-3940ced23279\") " pod="openstack/nova-cell0-conductor-0" Jan 26 11:38:55 crc kubenswrapper[4867]: I0126 11:38:55.576939 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kmcrn\" (UniqueName: \"kubernetes.io/projected/8ad13a23-f9ee-40f2-aa88-3940ced23279-kube-api-access-kmcrn\") pod \"nova-cell0-conductor-0\" 
(UID: \"8ad13a23-f9ee-40f2-aa88-3940ced23279\") " pod="openstack/nova-cell0-conductor-0" Jan 26 11:38:55 crc kubenswrapper[4867]: I0126 11:38:55.592331 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ironic-inspector-0" Jan 26 11:38:55 crc kubenswrapper[4867]: I0126 11:38:55.593116 4867 scope.go:117] "RemoveContainer" containerID="60604b0dd7ced7fa024492299a58875d07e91cd790cb7fd60c8db4ffa39ab4b9" Jan 26 11:38:55 crc kubenswrapper[4867]: E0126 11:38:55.593368 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ironic-inspector\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ironic-inspector pod=ironic-inspector-0_openstack(6e49ec18-452c-47df-a0c9-ea52cdced830)\"" pod="openstack/ironic-inspector-0" podUID="6e49ec18-452c-47df-a0c9-ea52cdced830" Jan 26 11:38:55 crc kubenswrapper[4867]: I0126 11:38:55.597516 4867 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ironic-inspector-0" podUID="6e49ec18-452c-47df-a0c9-ea52cdced830" containerName="ironic-inspector-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Jan 26 11:38:55 crc kubenswrapper[4867]: I0126 11:38:55.599351 4867 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/ironic-inspector-0" podUID="6e49ec18-452c-47df-a0c9-ea52cdced830" containerName="ironic-inspector-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Jan 26 11:38:55 crc kubenswrapper[4867]: I0126 11:38:55.643246 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-0" Jan 26 11:38:56 crc kubenswrapper[4867]: W0126 11:38:56.120179 4867 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8ad13a23_f9ee_40f2_aa88_3940ced23279.slice/crio-908da91e8a80e8217c29ecfde9c28865759461f2c39c73c27d86e6cc0b1da12d WatchSource:0}: Error finding container 908da91e8a80e8217c29ecfde9c28865759461f2c39c73c27d86e6cc0b1da12d: Status 404 returned error can't find the container with id 908da91e8a80e8217c29ecfde9c28865759461f2c39c73c27d86e6cc0b1da12d Jan 26 11:38:56 crc kubenswrapper[4867]: I0126 11:38:56.122668 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Jan 26 11:38:56 crc kubenswrapper[4867]: I0126 11:38:56.232512 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"8ad13a23-f9ee-40f2-aa88-3940ced23279","Type":"ContainerStarted","Data":"908da91e8a80e8217c29ecfde9c28865759461f2c39c73c27d86e6cc0b1da12d"} Jan 26 11:38:57 crc kubenswrapper[4867]: I0126 11:38:57.243654 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"8ad13a23-f9ee-40f2-aa88-3940ced23279","Type":"ContainerStarted","Data":"0d552389fadb30b375c3b884a044743fa98c8aaa2f10df108851ce9e66538246"} Jan 26 11:38:57 crc kubenswrapper[4867]: I0126 11:38:57.244175 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell0-conductor-0" Jan 26 11:38:57 crc kubenswrapper[4867]: I0126 11:38:57.270860 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-0" podStartSLOduration=2.270833182 podStartE2EDuration="2.270833182s" podCreationTimestamp="2026-01-26 11:38:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 
11:38:57.258382883 +0000 UTC m=+1286.956957793" watchObservedRunningTime="2026-01-26 11:38:57.270833182 +0000 UTC m=+1286.969408112" Jan 26 11:38:58 crc kubenswrapper[4867]: I0126 11:38:58.229347 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 26 11:38:58 crc kubenswrapper[4867]: I0126 11:38:58.259656 4867 generic.go:334] "Generic (PLEG): container finished" podID="ecf24fde-403d-454c-800d-cf015a7fd122" containerID="f7620acc2b3dd06a489178258f2b085c4c2c21cc0a27d47649a8dc3eb693e040" exitCode=0 Jan 26 11:38:58 crc kubenswrapper[4867]: I0126 11:38:58.259715 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 26 11:38:58 crc kubenswrapper[4867]: I0126 11:38:58.259758 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ecf24fde-403d-454c-800d-cf015a7fd122","Type":"ContainerDied","Data":"f7620acc2b3dd06a489178258f2b085c4c2c21cc0a27d47649a8dc3eb693e040"} Jan 26 11:38:58 crc kubenswrapper[4867]: I0126 11:38:58.259809 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ecf24fde-403d-454c-800d-cf015a7fd122","Type":"ContainerDied","Data":"363da85ea79f54ba7467621cbc2730380dd6dbbbab68408b13c7037093f1c449"} Jan 26 11:38:58 crc kubenswrapper[4867]: I0126 11:38:58.259828 4867 scope.go:117] "RemoveContainer" containerID="d19a1664eab6c70199138ba78e6a6763bf2887bdb8b12ed3ae159dfc7691b6f6" Jan 26 11:38:58 crc kubenswrapper[4867]: I0126 11:38:58.299152 4867 scope.go:117] "RemoveContainer" containerID="8d53d5687ef68f158817816ce6d3955665c375887ed35ef562a0202a1e0ca6c5" Jan 26 11:38:58 crc kubenswrapper[4867]: I0126 11:38:58.300988 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ecf24fde-403d-454c-800d-cf015a7fd122-config-data\") pod \"ecf24fde-403d-454c-800d-cf015a7fd122\" (UID: 
\"ecf24fde-403d-454c-800d-cf015a7fd122\") " Jan 26 11:38:58 crc kubenswrapper[4867]: I0126 11:38:58.301119 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ecf24fde-403d-454c-800d-cf015a7fd122-log-httpd\") pod \"ecf24fde-403d-454c-800d-cf015a7fd122\" (UID: \"ecf24fde-403d-454c-800d-cf015a7fd122\") " Jan 26 11:38:58 crc kubenswrapper[4867]: I0126 11:38:58.301187 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ecf24fde-403d-454c-800d-cf015a7fd122-scripts\") pod \"ecf24fde-403d-454c-800d-cf015a7fd122\" (UID: \"ecf24fde-403d-454c-800d-cf015a7fd122\") " Jan 26 11:38:58 crc kubenswrapper[4867]: I0126 11:38:58.301416 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-spb44\" (UniqueName: \"kubernetes.io/projected/ecf24fde-403d-454c-800d-cf015a7fd122-kube-api-access-spb44\") pod \"ecf24fde-403d-454c-800d-cf015a7fd122\" (UID: \"ecf24fde-403d-454c-800d-cf015a7fd122\") " Jan 26 11:38:58 crc kubenswrapper[4867]: I0126 11:38:58.301774 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ecf24fde-403d-454c-800d-cf015a7fd122-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "ecf24fde-403d-454c-800d-cf015a7fd122" (UID: "ecf24fde-403d-454c-800d-cf015a7fd122"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 11:38:58 crc kubenswrapper[4867]: I0126 11:38:58.302134 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ecf24fde-403d-454c-800d-cf015a7fd122-combined-ca-bundle\") pod \"ecf24fde-403d-454c-800d-cf015a7fd122\" (UID: \"ecf24fde-403d-454c-800d-cf015a7fd122\") " Jan 26 11:38:58 crc kubenswrapper[4867]: I0126 11:38:58.302209 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ecf24fde-403d-454c-800d-cf015a7fd122-sg-core-conf-yaml\") pod \"ecf24fde-403d-454c-800d-cf015a7fd122\" (UID: \"ecf24fde-403d-454c-800d-cf015a7fd122\") " Jan 26 11:38:58 crc kubenswrapper[4867]: I0126 11:38:58.302321 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ecf24fde-403d-454c-800d-cf015a7fd122-run-httpd\") pod \"ecf24fde-403d-454c-800d-cf015a7fd122\" (UID: \"ecf24fde-403d-454c-800d-cf015a7fd122\") " Jan 26 11:38:58 crc kubenswrapper[4867]: I0126 11:38:58.303433 4867 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ecf24fde-403d-454c-800d-cf015a7fd122-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 26 11:38:58 crc kubenswrapper[4867]: I0126 11:38:58.303845 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ecf24fde-403d-454c-800d-cf015a7fd122-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "ecf24fde-403d-454c-800d-cf015a7fd122" (UID: "ecf24fde-403d-454c-800d-cf015a7fd122"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 11:38:58 crc kubenswrapper[4867]: I0126 11:38:58.309390 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ecf24fde-403d-454c-800d-cf015a7fd122-kube-api-access-spb44" (OuterVolumeSpecName: "kube-api-access-spb44") pod "ecf24fde-403d-454c-800d-cf015a7fd122" (UID: "ecf24fde-403d-454c-800d-cf015a7fd122"). InnerVolumeSpecName "kube-api-access-spb44". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 11:38:58 crc kubenswrapper[4867]: I0126 11:38:58.315966 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ecf24fde-403d-454c-800d-cf015a7fd122-scripts" (OuterVolumeSpecName: "scripts") pod "ecf24fde-403d-454c-800d-cf015a7fd122" (UID: "ecf24fde-403d-454c-800d-cf015a7fd122"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 11:38:58 crc kubenswrapper[4867]: I0126 11:38:58.332765 4867 scope.go:117] "RemoveContainer" containerID="15d6eb9e8485c259d28d20eb3eac1dc35fb587971692a7ece1e63e912fe957b6" Jan 26 11:38:58 crc kubenswrapper[4867]: I0126 11:38:58.353547 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ecf24fde-403d-454c-800d-cf015a7fd122-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "ecf24fde-403d-454c-800d-cf015a7fd122" (UID: "ecf24fde-403d-454c-800d-cf015a7fd122"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 11:38:58 crc kubenswrapper[4867]: I0126 11:38:58.378829 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ecf24fde-403d-454c-800d-cf015a7fd122-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ecf24fde-403d-454c-800d-cf015a7fd122" (UID: "ecf24fde-403d-454c-800d-cf015a7fd122"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 11:38:58 crc kubenswrapper[4867]: I0126 11:38:58.405847 4867 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ecf24fde-403d-454c-800d-cf015a7fd122-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 11:38:58 crc kubenswrapper[4867]: I0126 11:38:58.405881 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-spb44\" (UniqueName: \"kubernetes.io/projected/ecf24fde-403d-454c-800d-cf015a7fd122-kube-api-access-spb44\") on node \"crc\" DevicePath \"\"" Jan 26 11:38:58 crc kubenswrapper[4867]: I0126 11:38:58.405894 4867 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ecf24fde-403d-454c-800d-cf015a7fd122-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 11:38:58 crc kubenswrapper[4867]: I0126 11:38:58.406486 4867 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ecf24fde-403d-454c-800d-cf015a7fd122-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 26 11:38:58 crc kubenswrapper[4867]: I0126 11:38:58.406506 4867 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ecf24fde-403d-454c-800d-cf015a7fd122-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 26 11:38:58 crc kubenswrapper[4867]: I0126 11:38:58.443416 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ecf24fde-403d-454c-800d-cf015a7fd122-config-data" (OuterVolumeSpecName: "config-data") pod "ecf24fde-403d-454c-800d-cf015a7fd122" (UID: "ecf24fde-403d-454c-800d-cf015a7fd122"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 11:38:58 crc kubenswrapper[4867]: I0126 11:38:58.473950 4867 scope.go:117] "RemoveContainer" containerID="f7620acc2b3dd06a489178258f2b085c4c2c21cc0a27d47649a8dc3eb693e040" Jan 26 11:38:58 crc kubenswrapper[4867]: I0126 11:38:58.492117 4867 scope.go:117] "RemoveContainer" containerID="d19a1664eab6c70199138ba78e6a6763bf2887bdb8b12ed3ae159dfc7691b6f6" Jan 26 11:38:58 crc kubenswrapper[4867]: E0126 11:38:58.492559 4867 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d19a1664eab6c70199138ba78e6a6763bf2887bdb8b12ed3ae159dfc7691b6f6\": container with ID starting with d19a1664eab6c70199138ba78e6a6763bf2887bdb8b12ed3ae159dfc7691b6f6 not found: ID does not exist" containerID="d19a1664eab6c70199138ba78e6a6763bf2887bdb8b12ed3ae159dfc7691b6f6" Jan 26 11:38:58 crc kubenswrapper[4867]: I0126 11:38:58.492675 4867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d19a1664eab6c70199138ba78e6a6763bf2887bdb8b12ed3ae159dfc7691b6f6"} err="failed to get container status \"d19a1664eab6c70199138ba78e6a6763bf2887bdb8b12ed3ae159dfc7691b6f6\": rpc error: code = NotFound desc = could not find container \"d19a1664eab6c70199138ba78e6a6763bf2887bdb8b12ed3ae159dfc7691b6f6\": container with ID starting with d19a1664eab6c70199138ba78e6a6763bf2887bdb8b12ed3ae159dfc7691b6f6 not found: ID does not exist" Jan 26 11:38:58 crc kubenswrapper[4867]: I0126 11:38:58.492765 4867 scope.go:117] "RemoveContainer" containerID="8d53d5687ef68f158817816ce6d3955665c375887ed35ef562a0202a1e0ca6c5" Jan 26 11:38:58 crc kubenswrapper[4867]: E0126 11:38:58.493181 4867 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8d53d5687ef68f158817816ce6d3955665c375887ed35ef562a0202a1e0ca6c5\": container with ID starting with 
8d53d5687ef68f158817816ce6d3955665c375887ed35ef562a0202a1e0ca6c5 not found: ID does not exist" containerID="8d53d5687ef68f158817816ce6d3955665c375887ed35ef562a0202a1e0ca6c5" Jan 26 11:38:58 crc kubenswrapper[4867]: I0126 11:38:58.493263 4867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8d53d5687ef68f158817816ce6d3955665c375887ed35ef562a0202a1e0ca6c5"} err="failed to get container status \"8d53d5687ef68f158817816ce6d3955665c375887ed35ef562a0202a1e0ca6c5\": rpc error: code = NotFound desc = could not find container \"8d53d5687ef68f158817816ce6d3955665c375887ed35ef562a0202a1e0ca6c5\": container with ID starting with 8d53d5687ef68f158817816ce6d3955665c375887ed35ef562a0202a1e0ca6c5 not found: ID does not exist" Jan 26 11:38:58 crc kubenswrapper[4867]: I0126 11:38:58.493290 4867 scope.go:117] "RemoveContainer" containerID="15d6eb9e8485c259d28d20eb3eac1dc35fb587971692a7ece1e63e912fe957b6" Jan 26 11:38:58 crc kubenswrapper[4867]: E0126 11:38:58.493561 4867 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"15d6eb9e8485c259d28d20eb3eac1dc35fb587971692a7ece1e63e912fe957b6\": container with ID starting with 15d6eb9e8485c259d28d20eb3eac1dc35fb587971692a7ece1e63e912fe957b6 not found: ID does not exist" containerID="15d6eb9e8485c259d28d20eb3eac1dc35fb587971692a7ece1e63e912fe957b6" Jan 26 11:38:58 crc kubenswrapper[4867]: I0126 11:38:58.493676 4867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"15d6eb9e8485c259d28d20eb3eac1dc35fb587971692a7ece1e63e912fe957b6"} err="failed to get container status \"15d6eb9e8485c259d28d20eb3eac1dc35fb587971692a7ece1e63e912fe957b6\": rpc error: code = NotFound desc = could not find container \"15d6eb9e8485c259d28d20eb3eac1dc35fb587971692a7ece1e63e912fe957b6\": container with ID starting with 15d6eb9e8485c259d28d20eb3eac1dc35fb587971692a7ece1e63e912fe957b6 not found: ID does not 
exist" Jan 26 11:38:58 crc kubenswrapper[4867]: I0126 11:38:58.493777 4867 scope.go:117] "RemoveContainer" containerID="f7620acc2b3dd06a489178258f2b085c4c2c21cc0a27d47649a8dc3eb693e040" Jan 26 11:38:58 crc kubenswrapper[4867]: E0126 11:38:58.494108 4867 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f7620acc2b3dd06a489178258f2b085c4c2c21cc0a27d47649a8dc3eb693e040\": container with ID starting with f7620acc2b3dd06a489178258f2b085c4c2c21cc0a27d47649a8dc3eb693e040 not found: ID does not exist" containerID="f7620acc2b3dd06a489178258f2b085c4c2c21cc0a27d47649a8dc3eb693e040" Jan 26 11:38:58 crc kubenswrapper[4867]: I0126 11:38:58.494132 4867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f7620acc2b3dd06a489178258f2b085c4c2c21cc0a27d47649a8dc3eb693e040"} err="failed to get container status \"f7620acc2b3dd06a489178258f2b085c4c2c21cc0a27d47649a8dc3eb693e040\": rpc error: code = NotFound desc = could not find container \"f7620acc2b3dd06a489178258f2b085c4c2c21cc0a27d47649a8dc3eb693e040\": container with ID starting with f7620acc2b3dd06a489178258f2b085c4c2c21cc0a27d47649a8dc3eb693e040 not found: ID does not exist" Jan 26 11:38:58 crc kubenswrapper[4867]: I0126 11:38:58.507922 4867 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ecf24fde-403d-454c-800d-cf015a7fd122-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 11:38:58 crc kubenswrapper[4867]: I0126 11:38:58.602039 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 26 11:38:58 crc kubenswrapper[4867]: I0126 11:38:58.616005 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 26 11:38:58 crc kubenswrapper[4867]: I0126 11:38:58.628313 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 26 11:38:58 crc kubenswrapper[4867]: E0126 
11:38:58.628816 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ecf24fde-403d-454c-800d-cf015a7fd122" containerName="ceilometer-central-agent" Jan 26 11:38:58 crc kubenswrapper[4867]: I0126 11:38:58.628832 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="ecf24fde-403d-454c-800d-cf015a7fd122" containerName="ceilometer-central-agent" Jan 26 11:38:58 crc kubenswrapper[4867]: E0126 11:38:58.628850 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ecf24fde-403d-454c-800d-cf015a7fd122" containerName="ceilometer-notification-agent" Jan 26 11:38:58 crc kubenswrapper[4867]: I0126 11:38:58.628859 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="ecf24fde-403d-454c-800d-cf015a7fd122" containerName="ceilometer-notification-agent" Jan 26 11:38:58 crc kubenswrapper[4867]: E0126 11:38:58.628874 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ecf24fde-403d-454c-800d-cf015a7fd122" containerName="proxy-httpd" Jan 26 11:38:58 crc kubenswrapper[4867]: I0126 11:38:58.628882 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="ecf24fde-403d-454c-800d-cf015a7fd122" containerName="proxy-httpd" Jan 26 11:38:58 crc kubenswrapper[4867]: E0126 11:38:58.628898 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ecf24fde-403d-454c-800d-cf015a7fd122" containerName="sg-core" Jan 26 11:38:58 crc kubenswrapper[4867]: I0126 11:38:58.628904 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="ecf24fde-403d-454c-800d-cf015a7fd122" containerName="sg-core" Jan 26 11:38:58 crc kubenswrapper[4867]: I0126 11:38:58.629101 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="ecf24fde-403d-454c-800d-cf015a7fd122" containerName="ceilometer-notification-agent" Jan 26 11:38:58 crc kubenswrapper[4867]: I0126 11:38:58.629116 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="ecf24fde-403d-454c-800d-cf015a7fd122" containerName="proxy-httpd" Jan 26 11:38:58 crc 
kubenswrapper[4867]: I0126 11:38:58.629134 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="ecf24fde-403d-454c-800d-cf015a7fd122" containerName="ceilometer-central-agent" Jan 26 11:38:58 crc kubenswrapper[4867]: I0126 11:38:58.629149 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="ecf24fde-403d-454c-800d-cf015a7fd122" containerName="sg-core" Jan 26 11:38:58 crc kubenswrapper[4867]: I0126 11:38:58.631286 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 26 11:38:58 crc kubenswrapper[4867]: I0126 11:38:58.633853 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 26 11:38:58 crc kubenswrapper[4867]: I0126 11:38:58.635841 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 26 11:38:58 crc kubenswrapper[4867]: I0126 11:38:58.637507 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 26 11:38:58 crc kubenswrapper[4867]: I0126 11:38:58.710447 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f2527494-e18a-4e87-ab80-0b922ad79c65-config-data\") pod \"ceilometer-0\" (UID: \"f2527494-e18a-4e87-ab80-0b922ad79c65\") " pod="openstack/ceilometer-0" Jan 26 11:38:58 crc kubenswrapper[4867]: I0126 11:38:58.710729 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f2527494-e18a-4e87-ab80-0b922ad79c65-scripts\") pod \"ceilometer-0\" (UID: \"f2527494-e18a-4e87-ab80-0b922ad79c65\") " pod="openstack/ceilometer-0" Jan 26 11:38:58 crc kubenswrapper[4867]: I0126 11:38:58.710833 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/f2527494-e18a-4e87-ab80-0b922ad79c65-run-httpd\") pod \"ceilometer-0\" (UID: \"f2527494-e18a-4e87-ab80-0b922ad79c65\") " pod="openstack/ceilometer-0" Jan 26 11:38:58 crc kubenswrapper[4867]: I0126 11:38:58.710856 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f2527494-e18a-4e87-ab80-0b922ad79c65-log-httpd\") pod \"ceilometer-0\" (UID: \"f2527494-e18a-4e87-ab80-0b922ad79c65\") " pod="openstack/ceilometer-0" Jan 26 11:38:58 crc kubenswrapper[4867]: I0126 11:38:58.710873 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f2527494-e18a-4e87-ab80-0b922ad79c65-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"f2527494-e18a-4e87-ab80-0b922ad79c65\") " pod="openstack/ceilometer-0" Jan 26 11:38:58 crc kubenswrapper[4867]: I0126 11:38:58.710951 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w8kq7\" (UniqueName: \"kubernetes.io/projected/f2527494-e18a-4e87-ab80-0b922ad79c65-kube-api-access-w8kq7\") pod \"ceilometer-0\" (UID: \"f2527494-e18a-4e87-ab80-0b922ad79c65\") " pod="openstack/ceilometer-0" Jan 26 11:38:58 crc kubenswrapper[4867]: I0126 11:38:58.710995 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f2527494-e18a-4e87-ab80-0b922ad79c65-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"f2527494-e18a-4e87-ab80-0b922ad79c65\") " pod="openstack/ceilometer-0" Jan 26 11:38:58 crc kubenswrapper[4867]: I0126 11:38:58.812130 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f2527494-e18a-4e87-ab80-0b922ad79c65-run-httpd\") pod \"ceilometer-0\" (UID: 
\"f2527494-e18a-4e87-ab80-0b922ad79c65\") " pod="openstack/ceilometer-0" Jan 26 11:38:58 crc kubenswrapper[4867]: I0126 11:38:58.812189 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f2527494-e18a-4e87-ab80-0b922ad79c65-log-httpd\") pod \"ceilometer-0\" (UID: \"f2527494-e18a-4e87-ab80-0b922ad79c65\") " pod="openstack/ceilometer-0" Jan 26 11:38:58 crc kubenswrapper[4867]: I0126 11:38:58.812214 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f2527494-e18a-4e87-ab80-0b922ad79c65-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"f2527494-e18a-4e87-ab80-0b922ad79c65\") " pod="openstack/ceilometer-0" Jan 26 11:38:58 crc kubenswrapper[4867]: I0126 11:38:58.812307 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w8kq7\" (UniqueName: \"kubernetes.io/projected/f2527494-e18a-4e87-ab80-0b922ad79c65-kube-api-access-w8kq7\") pod \"ceilometer-0\" (UID: \"f2527494-e18a-4e87-ab80-0b922ad79c65\") " pod="openstack/ceilometer-0" Jan 26 11:38:58 crc kubenswrapper[4867]: I0126 11:38:58.812346 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f2527494-e18a-4e87-ab80-0b922ad79c65-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"f2527494-e18a-4e87-ab80-0b922ad79c65\") " pod="openstack/ceilometer-0" Jan 26 11:38:58 crc kubenswrapper[4867]: I0126 11:38:58.812432 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f2527494-e18a-4e87-ab80-0b922ad79c65-config-data\") pod \"ceilometer-0\" (UID: \"f2527494-e18a-4e87-ab80-0b922ad79c65\") " pod="openstack/ceilometer-0" Jan 26 11:38:58 crc kubenswrapper[4867]: I0126 11:38:58.812464 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f2527494-e18a-4e87-ab80-0b922ad79c65-scripts\") pod \"ceilometer-0\" (UID: \"f2527494-e18a-4e87-ab80-0b922ad79c65\") " pod="openstack/ceilometer-0" Jan 26 11:38:58 crc kubenswrapper[4867]: I0126 11:38:58.812658 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f2527494-e18a-4e87-ab80-0b922ad79c65-run-httpd\") pod \"ceilometer-0\" (UID: \"f2527494-e18a-4e87-ab80-0b922ad79c65\") " pod="openstack/ceilometer-0" Jan 26 11:38:58 crc kubenswrapper[4867]: I0126 11:38:58.812842 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f2527494-e18a-4e87-ab80-0b922ad79c65-log-httpd\") pod \"ceilometer-0\" (UID: \"f2527494-e18a-4e87-ab80-0b922ad79c65\") " pod="openstack/ceilometer-0" Jan 26 11:38:58 crc kubenswrapper[4867]: I0126 11:38:58.817315 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f2527494-e18a-4e87-ab80-0b922ad79c65-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"f2527494-e18a-4e87-ab80-0b922ad79c65\") " pod="openstack/ceilometer-0" Jan 26 11:38:58 crc kubenswrapper[4867]: I0126 11:38:58.818031 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f2527494-e18a-4e87-ab80-0b922ad79c65-scripts\") pod \"ceilometer-0\" (UID: \"f2527494-e18a-4e87-ab80-0b922ad79c65\") " pod="openstack/ceilometer-0" Jan 26 11:38:58 crc kubenswrapper[4867]: I0126 11:38:58.818151 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f2527494-e18a-4e87-ab80-0b922ad79c65-config-data\") pod \"ceilometer-0\" (UID: \"f2527494-e18a-4e87-ab80-0b922ad79c65\") " pod="openstack/ceilometer-0" Jan 26 11:38:58 crc kubenswrapper[4867]: I0126 11:38:58.821145 4867 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f2527494-e18a-4e87-ab80-0b922ad79c65-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"f2527494-e18a-4e87-ab80-0b922ad79c65\") " pod="openstack/ceilometer-0" Jan 26 11:38:58 crc kubenswrapper[4867]: I0126 11:38:58.833718 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w8kq7\" (UniqueName: \"kubernetes.io/projected/f2527494-e18a-4e87-ab80-0b922ad79c65-kube-api-access-w8kq7\") pod \"ceilometer-0\" (UID: \"f2527494-e18a-4e87-ab80-0b922ad79c65\") " pod="openstack/ceilometer-0" Jan 26 11:38:58 crc kubenswrapper[4867]: I0126 11:38:58.950362 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 26 11:38:59 crc kubenswrapper[4867]: I0126 11:38:59.410754 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 26 11:38:59 crc kubenswrapper[4867]: W0126 11:38:59.417591 4867 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf2527494_e18a_4e87_ab80_0b922ad79c65.slice/crio-1f4375d6861706b6e8448de6c735ff7c7e631b260c3ba4019af63ce5a87244a2 WatchSource:0}: Error finding container 1f4375d6861706b6e8448de6c735ff7c7e631b260c3ba4019af63ce5a87244a2: Status 404 returned error can't find the container with id 1f4375d6861706b6e8448de6c735ff7c7e631b260c3ba4019af63ce5a87244a2 Jan 26 11:38:59 crc kubenswrapper[4867]: I0126 11:38:59.563723 4867 scope.go:117] "RemoveContainer" containerID="f6256bd71627a09be606483dad246bfdbe5d419a8e586a59a182396bb6d1f10d" Jan 26 11:38:59 crc kubenswrapper[4867]: E0126 11:38:59.564156 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ironic-neutron-agent\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ironic-neutron-agent 
pod=ironic-neutron-agent-795fb7c76b-9ndwh_openstack(a2167905-2856-4125-81fd-a2430fe558f9)\"" pod="openstack/ironic-neutron-agent-795fb7c76b-9ndwh" podUID="a2167905-2856-4125-81fd-a2430fe558f9" Jan 26 11:39:00 crc kubenswrapper[4867]: I0126 11:39:00.281261 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f2527494-e18a-4e87-ab80-0b922ad79c65","Type":"ContainerStarted","Data":"74d12bdd7eb3891c66ec91f027028bac0569805ab48cb79d601e2abc6c1009f0"} Jan 26 11:39:00 crc kubenswrapper[4867]: I0126 11:39:00.281778 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f2527494-e18a-4e87-ab80-0b922ad79c65","Type":"ContainerStarted","Data":"1f4375d6861706b6e8448de6c735ff7c7e631b260c3ba4019af63ce5a87244a2"} Jan 26 11:39:00 crc kubenswrapper[4867]: I0126 11:39:00.577472 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ecf24fde-403d-454c-800d-cf015a7fd122" path="/var/lib/kubelet/pods/ecf24fde-403d-454c-800d-cf015a7fd122/volumes" Jan 26 11:39:00 crc kubenswrapper[4867]: I0126 11:39:00.598283 4867 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/ironic-inspector-0" podUID="6e49ec18-452c-47df-a0c9-ea52cdced830" containerName="ironic-inspector-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Jan 26 11:39:00 crc kubenswrapper[4867]: I0126 11:39:00.598517 4867 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/ironic-inspector-0" Jan 26 11:39:00 crc kubenswrapper[4867]: I0126 11:39:00.599475 4867 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="ironic-inspector-httpd" containerStatusID={"Type":"cri-o","ID":"cfeea3057bae114fd13506c33a165d60eec01943572fe2a84ddd58eacd66462b"} pod="openstack/ironic-inspector-0" containerMessage="Container ironic-inspector-httpd failed liveness probe, will be restarted" Jan 26 11:39:00 crc kubenswrapper[4867]: I0126 11:39:00.599586 4867 
scope.go:117] "RemoveContainer" containerID="60604b0dd7ced7fa024492299a58875d07e91cd790cb7fd60c8db4ffa39ab4b9" Jan 26 11:39:00 crc kubenswrapper[4867]: I0126 11:39:00.599710 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ironic-inspector-0" podUID="6e49ec18-452c-47df-a0c9-ea52cdced830" containerName="ironic-inspector-httpd" containerID="cri-o://cfeea3057bae114fd13506c33a165d60eec01943572fe2a84ddd58eacd66462b" gracePeriod=60 Jan 26 11:39:00 crc kubenswrapper[4867]: I0126 11:39:00.601518 4867 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ironic-inspector-0" podUID="6e49ec18-452c-47df-a0c9-ea52cdced830" containerName="ironic-inspector-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Jan 26 11:39:00 crc kubenswrapper[4867]: I0126 11:39:00.601664 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ironic-inspector-0" Jan 26 11:39:01 crc kubenswrapper[4867]: I0126 11:39:01.290686 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f2527494-e18a-4e87-ab80-0b922ad79c65","Type":"ContainerStarted","Data":"a4138ea0704d05d9bc6272966f2fdc4fd212a5c49b6d40cba3d616660c6a1826"} Jan 26 11:39:01 crc kubenswrapper[4867]: I0126 11:39:01.294811 4867 generic.go:334] "Generic (PLEG): container finished" podID="6e49ec18-452c-47df-a0c9-ea52cdced830" containerID="cfeea3057bae114fd13506c33a165d60eec01943572fe2a84ddd58eacd66462b" exitCode=0 Jan 26 11:39:01 crc kubenswrapper[4867]: I0126 11:39:01.294921 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-inspector-0" event={"ID":"6e49ec18-452c-47df-a0c9-ea52cdced830","Type":"ContainerDied","Data":"cfeea3057bae114fd13506c33a165d60eec01943572fe2a84ddd58eacd66462b"} Jan 26 11:39:02 crc kubenswrapper[4867]: I0126 11:39:02.308000 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-inspector-0" 
event={"ID":"6e49ec18-452c-47df-a0c9-ea52cdced830","Type":"ContainerStarted","Data":"fda4311ceeaf6671df61d6de503c3432e9dac668270b755f6386e4f67e4eb6a4"} Jan 26 11:39:03 crc kubenswrapper[4867]: I0126 11:39:03.319490 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-inspector-0" event={"ID":"6e49ec18-452c-47df-a0c9-ea52cdced830","Type":"ContainerStarted","Data":"288e6525ec9865e05c902461da9124ca60a9ce92eefc81ae1fd83da0cc5e0ae2"} Jan 26 11:39:05 crc kubenswrapper[4867]: I0126 11:39:05.592250 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ironic-inspector-0" Jan 26 11:39:05 crc kubenswrapper[4867]: I0126 11:39:05.592596 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ironic-inspector-0" Jan 26 11:39:05 crc kubenswrapper[4867]: I0126 11:39:05.592610 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ironic-inspector-0" Jan 26 11:39:05 crc kubenswrapper[4867]: I0126 11:39:05.592622 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ironic-inspector-0" Jan 26 11:39:05 crc kubenswrapper[4867]: I0126 11:39:05.623363 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ironic-inspector-0" Jan 26 11:39:05 crc kubenswrapper[4867]: I0126 11:39:05.627058 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ironic-inspector-0" Jan 26 11:39:05 crc kubenswrapper[4867]: I0126 11:39:05.703202 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell0-conductor-0" Jan 26 11:39:06 crc kubenswrapper[4867]: I0126 11:39:06.203589 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-cell-mapping-26gmr"] Jan 26 11:39:06 crc kubenswrapper[4867]: I0126 11:39:06.204968 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-26gmr" Jan 26 11:39:06 crc kubenswrapper[4867]: I0126 11:39:06.208362 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-scripts" Jan 26 11:39:06 crc kubenswrapper[4867]: I0126 11:39:06.208836 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-config-data" Jan 26 11:39:06 crc kubenswrapper[4867]: I0126 11:39:06.222565 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-26gmr"] Jan 26 11:39:06 crc kubenswrapper[4867]: I0126 11:39:06.273834 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cc4ef2e8-1b6a-42a2-a887-7a14de7fa7a2-scripts\") pod \"nova-cell0-cell-mapping-26gmr\" (UID: \"cc4ef2e8-1b6a-42a2-a887-7a14de7fa7a2\") " pod="openstack/nova-cell0-cell-mapping-26gmr" Jan 26 11:39:06 crc kubenswrapper[4867]: I0126 11:39:06.273957 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cc4ef2e8-1b6a-42a2-a887-7a14de7fa7a2-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-26gmr\" (UID: \"cc4ef2e8-1b6a-42a2-a887-7a14de7fa7a2\") " pod="openstack/nova-cell0-cell-mapping-26gmr" Jan 26 11:39:06 crc kubenswrapper[4867]: I0126 11:39:06.274068 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-59pbk\" (UniqueName: \"kubernetes.io/projected/cc4ef2e8-1b6a-42a2-a887-7a14de7fa7a2-kube-api-access-59pbk\") pod \"nova-cell0-cell-mapping-26gmr\" (UID: \"cc4ef2e8-1b6a-42a2-a887-7a14de7fa7a2\") " pod="openstack/nova-cell0-cell-mapping-26gmr" Jan 26 11:39:06 crc kubenswrapper[4867]: I0126 11:39:06.274320 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" 
(UniqueName: \"kubernetes.io/secret/cc4ef2e8-1b6a-42a2-a887-7a14de7fa7a2-config-data\") pod \"nova-cell0-cell-mapping-26gmr\" (UID: \"cc4ef2e8-1b6a-42a2-a887-7a14de7fa7a2\") " pod="openstack/nova-cell0-cell-mapping-26gmr" Jan 26 11:39:06 crc kubenswrapper[4867]: I0126 11:39:06.344618 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f2527494-e18a-4e87-ab80-0b922ad79c65","Type":"ContainerStarted","Data":"89e8f134853375e7d1532482938d4932b3ce767bd9f294d31f3634cca3374076"} Jan 26 11:39:06 crc kubenswrapper[4867]: I0126 11:39:06.374351 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ironic-inspector-0" Jan 26 11:39:06 crc kubenswrapper[4867]: I0126 11:39:06.375785 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-59pbk\" (UniqueName: \"kubernetes.io/projected/cc4ef2e8-1b6a-42a2-a887-7a14de7fa7a2-kube-api-access-59pbk\") pod \"nova-cell0-cell-mapping-26gmr\" (UID: \"cc4ef2e8-1b6a-42a2-a887-7a14de7fa7a2\") " pod="openstack/nova-cell0-cell-mapping-26gmr" Jan 26 11:39:06 crc kubenswrapper[4867]: I0126 11:39:06.375864 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cc4ef2e8-1b6a-42a2-a887-7a14de7fa7a2-config-data\") pod \"nova-cell0-cell-mapping-26gmr\" (UID: \"cc4ef2e8-1b6a-42a2-a887-7a14de7fa7a2\") " pod="openstack/nova-cell0-cell-mapping-26gmr" Jan 26 11:39:06 crc kubenswrapper[4867]: I0126 11:39:06.375973 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cc4ef2e8-1b6a-42a2-a887-7a14de7fa7a2-scripts\") pod \"nova-cell0-cell-mapping-26gmr\" (UID: \"cc4ef2e8-1b6a-42a2-a887-7a14de7fa7a2\") " pod="openstack/nova-cell0-cell-mapping-26gmr" Jan 26 11:39:06 crc kubenswrapper[4867]: I0126 11:39:06.376053 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cc4ef2e8-1b6a-42a2-a887-7a14de7fa7a2-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-26gmr\" (UID: \"cc4ef2e8-1b6a-42a2-a887-7a14de7fa7a2\") " pod="openstack/nova-cell0-cell-mapping-26gmr" Jan 26 11:39:06 crc kubenswrapper[4867]: I0126 11:39:06.378533 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 26 11:39:06 crc kubenswrapper[4867]: I0126 11:39:06.386485 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cc4ef2e8-1b6a-42a2-a887-7a14de7fa7a2-scripts\") pod \"nova-cell0-cell-mapping-26gmr\" (UID: \"cc4ef2e8-1b6a-42a2-a887-7a14de7fa7a2\") " pod="openstack/nova-cell0-cell-mapping-26gmr" Jan 26 11:39:06 crc kubenswrapper[4867]: I0126 11:39:06.392507 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 26 11:39:06 crc kubenswrapper[4867]: I0126 11:39:06.396712 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Jan 26 11:39:06 crc kubenswrapper[4867]: I0126 11:39:06.397110 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ironic-inspector-0" Jan 26 11:39:06 crc kubenswrapper[4867]: I0126 11:39:06.405268 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 26 11:39:06 crc kubenswrapper[4867]: I0126 11:39:06.408330 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cc4ef2e8-1b6a-42a2-a887-7a14de7fa7a2-config-data\") pod \"nova-cell0-cell-mapping-26gmr\" (UID: \"cc4ef2e8-1b6a-42a2-a887-7a14de7fa7a2\") " pod="openstack/nova-cell0-cell-mapping-26gmr" Jan 26 11:39:06 crc kubenswrapper[4867]: I0126 11:39:06.417931 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" 
(UniqueName: \"kubernetes.io/secret/cc4ef2e8-1b6a-42a2-a887-7a14de7fa7a2-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-26gmr\" (UID: \"cc4ef2e8-1b6a-42a2-a887-7a14de7fa7a2\") " pod="openstack/nova-cell0-cell-mapping-26gmr" Jan 26 11:39:06 crc kubenswrapper[4867]: I0126 11:39:06.426295 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-59pbk\" (UniqueName: \"kubernetes.io/projected/cc4ef2e8-1b6a-42a2-a887-7a14de7fa7a2-kube-api-access-59pbk\") pod \"nova-cell0-cell-mapping-26gmr\" (UID: \"cc4ef2e8-1b6a-42a2-a887-7a14de7fa7a2\") " pod="openstack/nova-cell0-cell-mapping-26gmr" Jan 26 11:39:06 crc kubenswrapper[4867]: I0126 11:39:06.479967 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/64ab41b4-2174-46ff-bb95-6c661105a1ec-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"64ab41b4-2174-46ff-bb95-6c661105a1ec\") " pod="openstack/nova-cell1-novncproxy-0" Jan 26 11:39:06 crc kubenswrapper[4867]: I0126 11:39:06.480084 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/64ab41b4-2174-46ff-bb95-6c661105a1ec-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"64ab41b4-2174-46ff-bb95-6c661105a1ec\") " pod="openstack/nova-cell1-novncproxy-0" Jan 26 11:39:06 crc kubenswrapper[4867]: I0126 11:39:06.482385 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z9zkg\" (UniqueName: \"kubernetes.io/projected/64ab41b4-2174-46ff-bb95-6c661105a1ec-kube-api-access-z9zkg\") pod \"nova-cell1-novncproxy-0\" (UID: \"64ab41b4-2174-46ff-bb95-6c661105a1ec\") " pod="openstack/nova-cell1-novncproxy-0" Jan 26 11:39:06 crc kubenswrapper[4867]: I0126 11:39:06.542932 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Jan 26 11:39:06 crc 
kubenswrapper[4867]: I0126 11:39:06.544513 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 26 11:39:06 crc kubenswrapper[4867]: I0126 11:39:06.558578 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Jan 26 11:39:06 crc kubenswrapper[4867]: I0126 11:39:06.571347 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-26gmr" Jan 26 11:39:06 crc kubenswrapper[4867]: I0126 11:39:06.615960 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z9zkg\" (UniqueName: \"kubernetes.io/projected/64ab41b4-2174-46ff-bb95-6c661105a1ec-kube-api-access-z9zkg\") pod \"nova-cell1-novncproxy-0\" (UID: \"64ab41b4-2174-46ff-bb95-6c661105a1ec\") " pod="openstack/nova-cell1-novncproxy-0" Jan 26 11:39:06 crc kubenswrapper[4867]: I0126 11:39:06.664806 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/64ab41b4-2174-46ff-bb95-6c661105a1ec-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"64ab41b4-2174-46ff-bb95-6c661105a1ec\") " pod="openstack/nova-cell1-novncproxy-0" Jan 26 11:39:06 crc kubenswrapper[4867]: I0126 11:39:06.720775 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/64ab41b4-2174-46ff-bb95-6c661105a1ec-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"64ab41b4-2174-46ff-bb95-6c661105a1ec\") " pod="openstack/nova-cell1-novncproxy-0" Jan 26 11:39:06 crc kubenswrapper[4867]: I0126 11:39:06.804419 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/64ab41b4-2174-46ff-bb95-6c661105a1ec-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"64ab41b4-2174-46ff-bb95-6c661105a1ec\") " 
pod="openstack/nova-cell1-novncproxy-0" Jan 26 11:39:06 crc kubenswrapper[4867]: I0126 11:39:06.804456 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z9zkg\" (UniqueName: \"kubernetes.io/projected/64ab41b4-2174-46ff-bb95-6c661105a1ec-kube-api-access-z9zkg\") pod \"nova-cell1-novncproxy-0\" (UID: \"64ab41b4-2174-46ff-bb95-6c661105a1ec\") " pod="openstack/nova-cell1-novncproxy-0" Jan 26 11:39:06 crc kubenswrapper[4867]: I0126 11:39:06.806799 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/64ab41b4-2174-46ff-bb95-6c661105a1ec-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"64ab41b4-2174-46ff-bb95-6c661105a1ec\") " pod="openstack/nova-cell1-novncproxy-0" Jan 26 11:39:06 crc kubenswrapper[4867]: I0126 11:39:06.824024 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 26 11:39:06 crc kubenswrapper[4867]: I0126 11:39:06.870296 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 26 11:39:06 crc kubenswrapper[4867]: I0126 11:39:06.871522 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c62cd\" (UniqueName: \"kubernetes.io/projected/cdf442b0-2477-426b-8b20-63e07a2c8251-kube-api-access-c62cd\") pod \"nova-metadata-0\" (UID: \"cdf442b0-2477-426b-8b20-63e07a2c8251\") " pod="openstack/nova-metadata-0" Jan 26 11:39:06 crc kubenswrapper[4867]: I0126 11:39:06.871740 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cdf442b0-2477-426b-8b20-63e07a2c8251-logs\") pod \"nova-metadata-0\" (UID: \"cdf442b0-2477-426b-8b20-63e07a2c8251\") " pod="openstack/nova-metadata-0" Jan 26 11:39:06 crc kubenswrapper[4867]: I0126 11:39:06.871781 4867 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cdf442b0-2477-426b-8b20-63e07a2c8251-config-data\") pod \"nova-metadata-0\" (UID: \"cdf442b0-2477-426b-8b20-63e07a2c8251\") " pod="openstack/nova-metadata-0" Jan 26 11:39:06 crc kubenswrapper[4867]: I0126 11:39:06.871830 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cdf442b0-2477-426b-8b20-63e07a2c8251-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"cdf442b0-2477-426b-8b20-63e07a2c8251\") " pod="openstack/nova-metadata-0" Jan 26 11:39:06 crc kubenswrapper[4867]: I0126 11:39:06.959282 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Jan 26 11:39:06 crc kubenswrapper[4867]: I0126 11:39:06.960658 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 26 11:39:06 crc kubenswrapper[4867]: I0126 11:39:06.974642 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Jan 26 11:39:06 crc kubenswrapper[4867]: I0126 11:39:06.975524 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cdf442b0-2477-426b-8b20-63e07a2c8251-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"cdf442b0-2477-426b-8b20-63e07a2c8251\") " pod="openstack/nova-metadata-0" Jan 26 11:39:06 crc kubenswrapper[4867]: I0126 11:39:06.975591 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c62cd\" (UniqueName: \"kubernetes.io/projected/cdf442b0-2477-426b-8b20-63e07a2c8251-kube-api-access-c62cd\") pod \"nova-metadata-0\" (UID: \"cdf442b0-2477-426b-8b20-63e07a2c8251\") " pod="openstack/nova-metadata-0" Jan 26 11:39:06 crc kubenswrapper[4867]: I0126 11:39:06.975747 4867 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cdf442b0-2477-426b-8b20-63e07a2c8251-logs\") pod \"nova-metadata-0\" (UID: \"cdf442b0-2477-426b-8b20-63e07a2c8251\") " pod="openstack/nova-metadata-0" Jan 26 11:39:06 crc kubenswrapper[4867]: I0126 11:39:06.975773 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cdf442b0-2477-426b-8b20-63e07a2c8251-config-data\") pod \"nova-metadata-0\" (UID: \"cdf442b0-2477-426b-8b20-63e07a2c8251\") " pod="openstack/nova-metadata-0" Jan 26 11:39:06 crc kubenswrapper[4867]: I0126 11:39:06.976631 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cdf442b0-2477-426b-8b20-63e07a2c8251-logs\") pod \"nova-metadata-0\" (UID: \"cdf442b0-2477-426b-8b20-63e07a2c8251\") " pod="openstack/nova-metadata-0" Jan 26 11:39:06 crc kubenswrapper[4867]: I0126 11:39:06.981422 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cdf442b0-2477-426b-8b20-63e07a2c8251-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"cdf442b0-2477-426b-8b20-63e07a2c8251\") " pod="openstack/nova-metadata-0" Jan 26 11:39:06 crc kubenswrapper[4867]: I0126 11:39:06.990378 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cdf442b0-2477-426b-8b20-63e07a2c8251-config-data\") pod \"nova-metadata-0\" (UID: \"cdf442b0-2477-426b-8b20-63e07a2c8251\") " pod="openstack/nova-metadata-0" Jan 26 11:39:06 crc kubenswrapper[4867]: I0126 11:39:06.992988 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c62cd\" (UniqueName: \"kubernetes.io/projected/cdf442b0-2477-426b-8b20-63e07a2c8251-kube-api-access-c62cd\") pod \"nova-metadata-0\" (UID: \"cdf442b0-2477-426b-8b20-63e07a2c8251\") " 
pod="openstack/nova-metadata-0" Jan 26 11:39:06 crc kubenswrapper[4867]: I0126 11:39:06.997328 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Jan 26 11:39:06 crc kubenswrapper[4867]: I0126 11:39:06.998964 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 26 11:39:07 crc kubenswrapper[4867]: I0126 11:39:07.007307 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Jan 26 11:39:07 crc kubenswrapper[4867]: I0126 11:39:07.032377 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 26 11:39:07 crc kubenswrapper[4867]: I0126 11:39:07.043840 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-bccf8f775-9s9w5"] Jan 26 11:39:07 crc kubenswrapper[4867]: I0126 11:39:07.047172 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-bccf8f775-9s9w5" Jan 26 11:39:07 crc kubenswrapper[4867]: I0126 11:39:07.077017 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a67e72b7-3136-4e60-8192-fa54044ef257-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"a67e72b7-3136-4e60-8192-fa54044ef257\") " pod="openstack/nova-api-0" Jan 26 11:39:07 crc kubenswrapper[4867]: I0126 11:39:07.077059 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a67e72b7-3136-4e60-8192-fa54044ef257-logs\") pod \"nova-api-0\" (UID: \"a67e72b7-3136-4e60-8192-fa54044ef257\") " pod="openstack/nova-api-0" Jan 26 11:39:07 crc kubenswrapper[4867]: I0126 11:39:07.077083 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e7f5d75c-d7d5-44d9-b571-1ab86e4cf156-config-data\") 
pod \"nova-scheduler-0\" (UID: \"e7f5d75c-d7d5-44d9-b571-1ab86e4cf156\") " pod="openstack/nova-scheduler-0" Jan 26 11:39:07 crc kubenswrapper[4867]: I0126 11:39:07.077117 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e7f5d75c-d7d5-44d9-b571-1ab86e4cf156-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"e7f5d75c-d7d5-44d9-b571-1ab86e4cf156\") " pod="openstack/nova-scheduler-0" Jan 26 11:39:07 crc kubenswrapper[4867]: I0126 11:39:07.077141 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4n2wx\" (UniqueName: \"kubernetes.io/projected/a67e72b7-3136-4e60-8192-fa54044ef257-kube-api-access-4n2wx\") pod \"nova-api-0\" (UID: \"a67e72b7-3136-4e60-8192-fa54044ef257\") " pod="openstack/nova-api-0" Jan 26 11:39:07 crc kubenswrapper[4867]: I0126 11:39:07.077158 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a67e72b7-3136-4e60-8192-fa54044ef257-config-data\") pod \"nova-api-0\" (UID: \"a67e72b7-3136-4e60-8192-fa54044ef257\") " pod="openstack/nova-api-0" Jan 26 11:39:07 crc kubenswrapper[4867]: I0126 11:39:07.077182 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8jqkx\" (UniqueName: \"kubernetes.io/projected/e7f5d75c-d7d5-44d9-b571-1ab86e4cf156-kube-api-access-8jqkx\") pod \"nova-scheduler-0\" (UID: \"e7f5d75c-d7d5-44d9-b571-1ab86e4cf156\") " pod="openstack/nova-scheduler-0" Jan 26 11:39:07 crc kubenswrapper[4867]: I0126 11:39:07.087936 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 26 11:39:07 crc kubenswrapper[4867]: I0126 11:39:07.106107 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-bccf8f775-9s9w5"] Jan 26 11:39:07 crc kubenswrapper[4867]: I0126 
11:39:07.189420 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/c33aec01-bab9-4160-9f96-a290d8c67e54-dns-swift-storage-0\") pod \"dnsmasq-dns-bccf8f775-9s9w5\" (UID: \"c33aec01-bab9-4160-9f96-a290d8c67e54\") " pod="openstack/dnsmasq-dns-bccf8f775-9s9w5" Jan 26 11:39:07 crc kubenswrapper[4867]: I0126 11:39:07.189507 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c33aec01-bab9-4160-9f96-a290d8c67e54-ovsdbserver-sb\") pod \"dnsmasq-dns-bccf8f775-9s9w5\" (UID: \"c33aec01-bab9-4160-9f96-a290d8c67e54\") " pod="openstack/dnsmasq-dns-bccf8f775-9s9w5" Jan 26 11:39:07 crc kubenswrapper[4867]: I0126 11:39:07.189532 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c33aec01-bab9-4160-9f96-a290d8c67e54-dns-svc\") pod \"dnsmasq-dns-bccf8f775-9s9w5\" (UID: \"c33aec01-bab9-4160-9f96-a290d8c67e54\") " pod="openstack/dnsmasq-dns-bccf8f775-9s9w5" Jan 26 11:39:07 crc kubenswrapper[4867]: I0126 11:39:07.189600 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a67e72b7-3136-4e60-8192-fa54044ef257-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"a67e72b7-3136-4e60-8192-fa54044ef257\") " pod="openstack/nova-api-0" Jan 26 11:39:07 crc kubenswrapper[4867]: I0126 11:39:07.189630 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a67e72b7-3136-4e60-8192-fa54044ef257-logs\") pod \"nova-api-0\" (UID: \"a67e72b7-3136-4e60-8192-fa54044ef257\") " pod="openstack/nova-api-0" Jan 26 11:39:07 crc kubenswrapper[4867]: I0126 11:39:07.189658 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e7f5d75c-d7d5-44d9-b571-1ab86e4cf156-config-data\") pod \"nova-scheduler-0\" (UID: \"e7f5d75c-d7d5-44d9-b571-1ab86e4cf156\") " pod="openstack/nova-scheduler-0" Jan 26 11:39:07 crc kubenswrapper[4867]: I0126 11:39:07.189681 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c33aec01-bab9-4160-9f96-a290d8c67e54-ovsdbserver-nb\") pod \"dnsmasq-dns-bccf8f775-9s9w5\" (UID: \"c33aec01-bab9-4160-9f96-a290d8c67e54\") " pod="openstack/dnsmasq-dns-bccf8f775-9s9w5" Jan 26 11:39:07 crc kubenswrapper[4867]: I0126 11:39:07.189713 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c33aec01-bab9-4160-9f96-a290d8c67e54-config\") pod \"dnsmasq-dns-bccf8f775-9s9w5\" (UID: \"c33aec01-bab9-4160-9f96-a290d8c67e54\") " pod="openstack/dnsmasq-dns-bccf8f775-9s9w5" Jan 26 11:39:07 crc kubenswrapper[4867]: I0126 11:39:07.189739 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4n8f9\" (UniqueName: \"kubernetes.io/projected/c33aec01-bab9-4160-9f96-a290d8c67e54-kube-api-access-4n8f9\") pod \"dnsmasq-dns-bccf8f775-9s9w5\" (UID: \"c33aec01-bab9-4160-9f96-a290d8c67e54\") " pod="openstack/dnsmasq-dns-bccf8f775-9s9w5" Jan 26 11:39:07 crc kubenswrapper[4867]: I0126 11:39:07.189764 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e7f5d75c-d7d5-44d9-b571-1ab86e4cf156-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"e7f5d75c-d7d5-44d9-b571-1ab86e4cf156\") " pod="openstack/nova-scheduler-0" Jan 26 11:39:07 crc kubenswrapper[4867]: I0126 11:39:07.189801 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4n2wx\" (UniqueName: 
\"kubernetes.io/projected/a67e72b7-3136-4e60-8192-fa54044ef257-kube-api-access-4n2wx\") pod \"nova-api-0\" (UID: \"a67e72b7-3136-4e60-8192-fa54044ef257\") " pod="openstack/nova-api-0" Jan 26 11:39:07 crc kubenswrapper[4867]: I0126 11:39:07.189823 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a67e72b7-3136-4e60-8192-fa54044ef257-config-data\") pod \"nova-api-0\" (UID: \"a67e72b7-3136-4e60-8192-fa54044ef257\") " pod="openstack/nova-api-0" Jan 26 11:39:07 crc kubenswrapper[4867]: I0126 11:39:07.189855 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8jqkx\" (UniqueName: \"kubernetes.io/projected/e7f5d75c-d7d5-44d9-b571-1ab86e4cf156-kube-api-access-8jqkx\") pod \"nova-scheduler-0\" (UID: \"e7f5d75c-d7d5-44d9-b571-1ab86e4cf156\") " pod="openstack/nova-scheduler-0" Jan 26 11:39:07 crc kubenswrapper[4867]: I0126 11:39:07.194271 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a67e72b7-3136-4e60-8192-fa54044ef257-logs\") pod \"nova-api-0\" (UID: \"a67e72b7-3136-4e60-8192-fa54044ef257\") " pod="openstack/nova-api-0" Jan 26 11:39:07 crc kubenswrapper[4867]: I0126 11:39:07.195905 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a67e72b7-3136-4e60-8192-fa54044ef257-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"a67e72b7-3136-4e60-8192-fa54044ef257\") " pod="openstack/nova-api-0" Jan 26 11:39:07 crc kubenswrapper[4867]: I0126 11:39:07.196923 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e7f5d75c-d7d5-44d9-b571-1ab86e4cf156-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"e7f5d75c-d7d5-44d9-b571-1ab86e4cf156\") " pod="openstack/nova-scheduler-0" Jan 26 11:39:07 crc kubenswrapper[4867]: I0126 
11:39:07.200119 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e7f5d75c-d7d5-44d9-b571-1ab86e4cf156-config-data\") pod \"nova-scheduler-0\" (UID: \"e7f5d75c-d7d5-44d9-b571-1ab86e4cf156\") " pod="openstack/nova-scheduler-0" Jan 26 11:39:07 crc kubenswrapper[4867]: I0126 11:39:07.202901 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a67e72b7-3136-4e60-8192-fa54044ef257-config-data\") pod \"nova-api-0\" (UID: \"a67e72b7-3136-4e60-8192-fa54044ef257\") " pod="openstack/nova-api-0" Jan 26 11:39:07 crc kubenswrapper[4867]: I0126 11:39:07.212762 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4n2wx\" (UniqueName: \"kubernetes.io/projected/a67e72b7-3136-4e60-8192-fa54044ef257-kube-api-access-4n2wx\") pod \"nova-api-0\" (UID: \"a67e72b7-3136-4e60-8192-fa54044ef257\") " pod="openstack/nova-api-0" Jan 26 11:39:07 crc kubenswrapper[4867]: I0126 11:39:07.223106 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8jqkx\" (UniqueName: \"kubernetes.io/projected/e7f5d75c-d7d5-44d9-b571-1ab86e4cf156-kube-api-access-8jqkx\") pod \"nova-scheduler-0\" (UID: \"e7f5d75c-d7d5-44d9-b571-1ab86e4cf156\") " pod="openstack/nova-scheduler-0" Jan 26 11:39:07 crc kubenswrapper[4867]: I0126 11:39:07.244877 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 26 11:39:07 crc kubenswrapper[4867]: I0126 11:39:07.292483 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/c33aec01-bab9-4160-9f96-a290d8c67e54-dns-swift-storage-0\") pod \"dnsmasq-dns-bccf8f775-9s9w5\" (UID: \"c33aec01-bab9-4160-9f96-a290d8c67e54\") " pod="openstack/dnsmasq-dns-bccf8f775-9s9w5" Jan 26 11:39:07 crc kubenswrapper[4867]: I0126 11:39:07.292572 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c33aec01-bab9-4160-9f96-a290d8c67e54-ovsdbserver-sb\") pod \"dnsmasq-dns-bccf8f775-9s9w5\" (UID: \"c33aec01-bab9-4160-9f96-a290d8c67e54\") " pod="openstack/dnsmasq-dns-bccf8f775-9s9w5" Jan 26 11:39:07 crc kubenswrapper[4867]: I0126 11:39:07.292595 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c33aec01-bab9-4160-9f96-a290d8c67e54-dns-svc\") pod \"dnsmasq-dns-bccf8f775-9s9w5\" (UID: \"c33aec01-bab9-4160-9f96-a290d8c67e54\") " pod="openstack/dnsmasq-dns-bccf8f775-9s9w5" Jan 26 11:39:07 crc kubenswrapper[4867]: I0126 11:39:07.292670 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c33aec01-bab9-4160-9f96-a290d8c67e54-ovsdbserver-nb\") pod \"dnsmasq-dns-bccf8f775-9s9w5\" (UID: \"c33aec01-bab9-4160-9f96-a290d8c67e54\") " pod="openstack/dnsmasq-dns-bccf8f775-9s9w5" Jan 26 11:39:07 crc kubenswrapper[4867]: I0126 11:39:07.292707 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c33aec01-bab9-4160-9f96-a290d8c67e54-config\") pod \"dnsmasq-dns-bccf8f775-9s9w5\" (UID: \"c33aec01-bab9-4160-9f96-a290d8c67e54\") " pod="openstack/dnsmasq-dns-bccf8f775-9s9w5" Jan 26 11:39:07 crc 
kubenswrapper[4867]: I0126 11:39:07.292725 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4n8f9\" (UniqueName: \"kubernetes.io/projected/c33aec01-bab9-4160-9f96-a290d8c67e54-kube-api-access-4n8f9\") pod \"dnsmasq-dns-bccf8f775-9s9w5\" (UID: \"c33aec01-bab9-4160-9f96-a290d8c67e54\") " pod="openstack/dnsmasq-dns-bccf8f775-9s9w5" Jan 26 11:39:07 crc kubenswrapper[4867]: I0126 11:39:07.293993 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c33aec01-bab9-4160-9f96-a290d8c67e54-dns-svc\") pod \"dnsmasq-dns-bccf8f775-9s9w5\" (UID: \"c33aec01-bab9-4160-9f96-a290d8c67e54\") " pod="openstack/dnsmasq-dns-bccf8f775-9s9w5" Jan 26 11:39:07 crc kubenswrapper[4867]: I0126 11:39:07.294673 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/c33aec01-bab9-4160-9f96-a290d8c67e54-dns-swift-storage-0\") pod \"dnsmasq-dns-bccf8f775-9s9w5\" (UID: \"c33aec01-bab9-4160-9f96-a290d8c67e54\") " pod="openstack/dnsmasq-dns-bccf8f775-9s9w5" Jan 26 11:39:07 crc kubenswrapper[4867]: I0126 11:39:07.295012 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c33aec01-bab9-4160-9f96-a290d8c67e54-ovsdbserver-sb\") pod \"dnsmasq-dns-bccf8f775-9s9w5\" (UID: \"c33aec01-bab9-4160-9f96-a290d8c67e54\") " pod="openstack/dnsmasq-dns-bccf8f775-9s9w5" Jan 26 11:39:07 crc kubenswrapper[4867]: I0126 11:39:07.295135 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c33aec01-bab9-4160-9f96-a290d8c67e54-config\") pod \"dnsmasq-dns-bccf8f775-9s9w5\" (UID: \"c33aec01-bab9-4160-9f96-a290d8c67e54\") " pod="openstack/dnsmasq-dns-bccf8f775-9s9w5" Jan 26 11:39:07 crc kubenswrapper[4867]: I0126 11:39:07.295700 4867 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c33aec01-bab9-4160-9f96-a290d8c67e54-ovsdbserver-nb\") pod \"dnsmasq-dns-bccf8f775-9s9w5\" (UID: \"c33aec01-bab9-4160-9f96-a290d8c67e54\") " pod="openstack/dnsmasq-dns-bccf8f775-9s9w5" Jan 26 11:39:07 crc kubenswrapper[4867]: I0126 11:39:07.301494 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 26 11:39:07 crc kubenswrapper[4867]: I0126 11:39:07.313291 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4n8f9\" (UniqueName: \"kubernetes.io/projected/c33aec01-bab9-4160-9f96-a290d8c67e54-kube-api-access-4n8f9\") pod \"dnsmasq-dns-bccf8f775-9s9w5\" (UID: \"c33aec01-bab9-4160-9f96-a290d8c67e54\") " pod="openstack/dnsmasq-dns-bccf8f775-9s9w5" Jan 26 11:39:07 crc kubenswrapper[4867]: I0126 11:39:07.326127 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 26 11:39:07 crc kubenswrapper[4867]: I0126 11:39:07.395080 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-bccf8f775-9s9w5" Jan 26 11:39:07 crc kubenswrapper[4867]: I0126 11:39:07.506325 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-26gmr"] Jan 26 11:39:07 crc kubenswrapper[4867]: I0126 11:39:07.526498 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 26 11:39:07 crc kubenswrapper[4867]: I0126 11:39:07.812746 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 26 11:39:07 crc kubenswrapper[4867]: I0126 11:39:07.890297 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-db-sync-226m5"] Jan 26 11:39:07 crc kubenswrapper[4867]: I0126 11:39:07.892136 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-226m5" Jan 26 11:39:07 crc kubenswrapper[4867]: I0126 11:39:07.898947 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-226m5"] Jan 26 11:39:07 crc kubenswrapper[4867]: I0126 11:39:07.904091 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-scripts" Jan 26 11:39:07 crc kubenswrapper[4867]: I0126 11:39:07.904312 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Jan 26 11:39:07 crc kubenswrapper[4867]: I0126 11:39:07.991638 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 26 11:39:08 crc kubenswrapper[4867]: W0126 11:39:08.000698 4867 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode7f5d75c_d7d5_44d9_b571_1ab86e4cf156.slice/crio-a7a0156d3f2140fe6846bf79b8e0e7f59c0edbb437f5642500309294aa708676 WatchSource:0}: Error finding container a7a0156d3f2140fe6846bf79b8e0e7f59c0edbb437f5642500309294aa708676: Status 404 returned error can't find the container with id a7a0156d3f2140fe6846bf79b8e0e7f59c0edbb437f5642500309294aa708676 Jan 26 11:39:08 crc kubenswrapper[4867]: I0126 11:39:08.023764 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0da74a00-2497-4e45-9419-032e9b97c401-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-226m5\" (UID: \"0da74a00-2497-4e45-9419-032e9b97c401\") " pod="openstack/nova-cell1-conductor-db-sync-226m5" Jan 26 11:39:08 crc kubenswrapper[4867]: I0126 11:39:08.023830 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0da74a00-2497-4e45-9419-032e9b97c401-scripts\") pod 
\"nova-cell1-conductor-db-sync-226m5\" (UID: \"0da74a00-2497-4e45-9419-032e9b97c401\") " pod="openstack/nova-cell1-conductor-db-sync-226m5" Jan 26 11:39:08 crc kubenswrapper[4867]: I0126 11:39:08.023894 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7gm84\" (UniqueName: \"kubernetes.io/projected/0da74a00-2497-4e45-9419-032e9b97c401-kube-api-access-7gm84\") pod \"nova-cell1-conductor-db-sync-226m5\" (UID: \"0da74a00-2497-4e45-9419-032e9b97c401\") " pod="openstack/nova-cell1-conductor-db-sync-226m5" Jan 26 11:39:08 crc kubenswrapper[4867]: I0126 11:39:08.023999 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0da74a00-2497-4e45-9419-032e9b97c401-config-data\") pod \"nova-cell1-conductor-db-sync-226m5\" (UID: \"0da74a00-2497-4e45-9419-032e9b97c401\") " pod="openstack/nova-cell1-conductor-db-sync-226m5" Jan 26 11:39:08 crc kubenswrapper[4867]: I0126 11:39:08.125882 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0da74a00-2497-4e45-9419-032e9b97c401-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-226m5\" (UID: \"0da74a00-2497-4e45-9419-032e9b97c401\") " pod="openstack/nova-cell1-conductor-db-sync-226m5" Jan 26 11:39:08 crc kubenswrapper[4867]: I0126 11:39:08.125928 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0da74a00-2497-4e45-9419-032e9b97c401-scripts\") pod \"nova-cell1-conductor-db-sync-226m5\" (UID: \"0da74a00-2497-4e45-9419-032e9b97c401\") " pod="openstack/nova-cell1-conductor-db-sync-226m5" Jan 26 11:39:08 crc kubenswrapper[4867]: I0126 11:39:08.125975 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7gm84\" (UniqueName: 
\"kubernetes.io/projected/0da74a00-2497-4e45-9419-032e9b97c401-kube-api-access-7gm84\") pod \"nova-cell1-conductor-db-sync-226m5\" (UID: \"0da74a00-2497-4e45-9419-032e9b97c401\") " pod="openstack/nova-cell1-conductor-db-sync-226m5" Jan 26 11:39:08 crc kubenswrapper[4867]: I0126 11:39:08.126028 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0da74a00-2497-4e45-9419-032e9b97c401-config-data\") pod \"nova-cell1-conductor-db-sync-226m5\" (UID: \"0da74a00-2497-4e45-9419-032e9b97c401\") " pod="openstack/nova-cell1-conductor-db-sync-226m5" Jan 26 11:39:08 crc kubenswrapper[4867]: I0126 11:39:08.135271 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0da74a00-2497-4e45-9419-032e9b97c401-config-data\") pod \"nova-cell1-conductor-db-sync-226m5\" (UID: \"0da74a00-2497-4e45-9419-032e9b97c401\") " pod="openstack/nova-cell1-conductor-db-sync-226m5" Jan 26 11:39:08 crc kubenswrapper[4867]: I0126 11:39:08.135458 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0da74a00-2497-4e45-9419-032e9b97c401-scripts\") pod \"nova-cell1-conductor-db-sync-226m5\" (UID: \"0da74a00-2497-4e45-9419-032e9b97c401\") " pod="openstack/nova-cell1-conductor-db-sync-226m5" Jan 26 11:39:08 crc kubenswrapper[4867]: I0126 11:39:08.136546 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0da74a00-2497-4e45-9419-032e9b97c401-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-226m5\" (UID: \"0da74a00-2497-4e45-9419-032e9b97c401\") " pod="openstack/nova-cell1-conductor-db-sync-226m5" Jan 26 11:39:08 crc kubenswrapper[4867]: I0126 11:39:08.149856 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7gm84\" (UniqueName: 
\"kubernetes.io/projected/0da74a00-2497-4e45-9419-032e9b97c401-kube-api-access-7gm84\") pod \"nova-cell1-conductor-db-sync-226m5\" (UID: \"0da74a00-2497-4e45-9419-032e9b97c401\") " pod="openstack/nova-cell1-conductor-db-sync-226m5" Jan 26 11:39:08 crc kubenswrapper[4867]: I0126 11:39:08.231273 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-bccf8f775-9s9w5"] Jan 26 11:39:08 crc kubenswrapper[4867]: I0126 11:39:08.243116 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 26 11:39:08 crc kubenswrapper[4867]: I0126 11:39:08.250735 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-226m5" Jan 26 11:39:08 crc kubenswrapper[4867]: I0126 11:39:08.394997 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-26gmr" event={"ID":"cc4ef2e8-1b6a-42a2-a887-7a14de7fa7a2","Type":"ContainerStarted","Data":"2e1ab4728949f5438a802db2e99dd1a6fa44ca6007557fcbd9493f576e57105f"} Jan 26 11:39:08 crc kubenswrapper[4867]: I0126 11:39:08.396834 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"a67e72b7-3136-4e60-8192-fa54044ef257","Type":"ContainerStarted","Data":"01456a972841af473f52bf248107a78320439452f165dd54bd5fab124e147312"} Jan 26 11:39:08 crc kubenswrapper[4867]: I0126 11:39:08.402996 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-bccf8f775-9s9w5" event={"ID":"c33aec01-bab9-4160-9f96-a290d8c67e54","Type":"ContainerStarted","Data":"5af51dd48506048e98e4d6b0d223b7e568f6cab053d15dedfe693dea291f9659"} Jan 26 11:39:08 crc kubenswrapper[4867]: I0126 11:39:08.404618 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"64ab41b4-2174-46ff-bb95-6c661105a1ec","Type":"ContainerStarted","Data":"85f0c51763eea3a8512b4049aaf3f41f4de556a17023a3d47e2662590003cfa4"} Jan 26 11:39:08 crc 
kubenswrapper[4867]: I0126 11:39:08.405928 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"cdf442b0-2477-426b-8b20-63e07a2c8251","Type":"ContainerStarted","Data":"8c2b6321e0daca0ffdb5be97a973d97787eddc225e6782d60a36e2265350200a"} Jan 26 11:39:08 crc kubenswrapper[4867]: I0126 11:39:08.407538 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"e7f5d75c-d7d5-44d9-b571-1ab86e4cf156","Type":"ContainerStarted","Data":"a7a0156d3f2140fe6846bf79b8e0e7f59c0edbb437f5642500309294aa708676"} Jan 26 11:39:08 crc kubenswrapper[4867]: I0126 11:39:08.789559 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-226m5"] Jan 26 11:39:08 crc kubenswrapper[4867]: E0126 11:39:08.996652 4867 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc33aec01_bab9_4160_9f96_a290d8c67e54.slice/crio-3ba80acc3944b4e27244b17fb59ab24e24c863594ad63bfbedf299c9d6f3a96c.scope\": RecentStats: unable to find data in memory cache]" Jan 26 11:39:09 crc kubenswrapper[4867]: I0126 11:39:09.423459 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-26gmr" event={"ID":"cc4ef2e8-1b6a-42a2-a887-7a14de7fa7a2","Type":"ContainerStarted","Data":"d51c5079a8c58b62fc0b3fbaf13fc05462f1920787f6185f11d57cab3cac3ab6"} Jan 26 11:39:09 crc kubenswrapper[4867]: I0126 11:39:09.429519 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f2527494-e18a-4e87-ab80-0b922ad79c65","Type":"ContainerStarted","Data":"c7d55fbaef86017a19c096ed9ade770ccedb1a70e41e9a455853e6ad3c0a836f"} Jan 26 11:39:09 crc kubenswrapper[4867]: I0126 11:39:09.429659 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 26 11:39:09 crc kubenswrapper[4867]: I0126 
11:39:09.436869 4867 generic.go:334] "Generic (PLEG): container finished" podID="c33aec01-bab9-4160-9f96-a290d8c67e54" containerID="3ba80acc3944b4e27244b17fb59ab24e24c863594ad63bfbedf299c9d6f3a96c" exitCode=0 Jan 26 11:39:09 crc kubenswrapper[4867]: I0126 11:39:09.437805 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-bccf8f775-9s9w5" event={"ID":"c33aec01-bab9-4160-9f96-a290d8c67e54","Type":"ContainerDied","Data":"3ba80acc3944b4e27244b17fb59ab24e24c863594ad63bfbedf299c9d6f3a96c"} Jan 26 11:39:09 crc kubenswrapper[4867]: I0126 11:39:09.443830 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-226m5" event={"ID":"0da74a00-2497-4e45-9419-032e9b97c401","Type":"ContainerStarted","Data":"e156b077c74155b81316368291323c0695dd45981ecd192fdf74b0e23fda9bad"} Jan 26 11:39:09 crc kubenswrapper[4867]: I0126 11:39:09.443872 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-226m5" event={"ID":"0da74a00-2497-4e45-9419-032e9b97c401","Type":"ContainerStarted","Data":"6926eaa6ea8a269b5c96490c9245c4c2900838c2bcafd1967a73fd2168c3627c"} Jan 26 11:39:09 crc kubenswrapper[4867]: I0126 11:39:09.447468 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-cell-mapping-26gmr" podStartSLOduration=3.447444332 podStartE2EDuration="3.447444332s" podCreationTimestamp="2026-01-26 11:39:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 11:39:09.436533492 +0000 UTC m=+1299.135108402" watchObservedRunningTime="2026-01-26 11:39:09.447444332 +0000 UTC m=+1299.146019242" Jan 26 11:39:09 crc kubenswrapper[4867]: I0126 11:39:09.476917 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=3.315560638 podStartE2EDuration="11.476896402s" podCreationTimestamp="2026-01-26 
11:38:58 +0000 UTC" firstStartedPulling="2026-01-26 11:38:59.419210046 +0000 UTC m=+1289.117784956" lastFinishedPulling="2026-01-26 11:39:07.58054581 +0000 UTC m=+1297.279120720" observedRunningTime="2026-01-26 11:39:09.471472092 +0000 UTC m=+1299.170047022" watchObservedRunningTime="2026-01-26 11:39:09.476896402 +0000 UTC m=+1299.175471312" Jan 26 11:39:09 crc kubenswrapper[4867]: I0126 11:39:09.508007 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-db-sync-226m5" podStartSLOduration=2.507989017 podStartE2EDuration="2.507989017s" podCreationTimestamp="2026-01-26 11:39:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 11:39:09.507513864 +0000 UTC m=+1299.206088794" watchObservedRunningTime="2026-01-26 11:39:09.507989017 +0000 UTC m=+1299.206563927" Jan 26 11:39:10 crc kubenswrapper[4867]: I0126 11:39:10.474397 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-bccf8f775-9s9w5" event={"ID":"c33aec01-bab9-4160-9f96-a290d8c67e54","Type":"ContainerStarted","Data":"5950f4325f46b7dfe43a0cf86c40c65704caaafa5dd9b340333c80fa44b9bbc1"} Jan 26 11:39:10 crc kubenswrapper[4867]: I0126 11:39:10.476350 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-bccf8f775-9s9w5" Jan 26 11:39:10 crc kubenswrapper[4867]: I0126 11:39:10.546941 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 26 11:39:10 crc kubenswrapper[4867]: I0126 11:39:10.563288 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-bccf8f775-9s9w5" podStartSLOduration=4.56326555 podStartE2EDuration="4.56326555s" podCreationTimestamp="2026-01-26 11:39:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2026-01-26 11:39:10.519237288 +0000 UTC m=+1300.217812198" watchObservedRunningTime="2026-01-26 11:39:10.56326555 +0000 UTC m=+1300.261840480" Jan 26 11:39:10 crc kubenswrapper[4867]: I0126 11:39:10.563838 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 26 11:39:12 crc kubenswrapper[4867]: I0126 11:39:12.494546 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"64ab41b4-2174-46ff-bb95-6c661105a1ec","Type":"ContainerStarted","Data":"39646214301efaff24258c058db6b4913010fe424a5c011a7ca3633d6e73d1d7"} Jan 26 11:39:12 crc kubenswrapper[4867]: I0126 11:39:12.494625 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell1-novncproxy-0" podUID="64ab41b4-2174-46ff-bb95-6c661105a1ec" containerName="nova-cell1-novncproxy-novncproxy" containerID="cri-o://39646214301efaff24258c058db6b4913010fe424a5c011a7ca3633d6e73d1d7" gracePeriod=30 Jan 26 11:39:12 crc kubenswrapper[4867]: I0126 11:39:12.498879 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"cdf442b0-2477-426b-8b20-63e07a2c8251","Type":"ContainerStarted","Data":"f6a0e0ade8dcc9e9f86693fd819789cff513f039f6ffa3077fb791202da39d99"} Jan 26 11:39:12 crc kubenswrapper[4867]: I0126 11:39:12.498916 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="cdf442b0-2477-426b-8b20-63e07a2c8251" containerName="nova-metadata-log" containerID="cri-o://148ec6df9331dff2ab45fcb3a1b46f1cd68a58afbd1b87a771757c1122f9338a" gracePeriod=30 Jan 26 11:39:12 crc kubenswrapper[4867]: I0126 11:39:12.498945 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"cdf442b0-2477-426b-8b20-63e07a2c8251","Type":"ContainerStarted","Data":"148ec6df9331dff2ab45fcb3a1b46f1cd68a58afbd1b87a771757c1122f9338a"} Jan 26 11:39:12 crc kubenswrapper[4867]: 
I0126 11:39:12.499017 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="cdf442b0-2477-426b-8b20-63e07a2c8251" containerName="nova-metadata-metadata" containerID="cri-o://f6a0e0ade8dcc9e9f86693fd819789cff513f039f6ffa3077fb791202da39d99" gracePeriod=30
Jan 26 11:39:12 crc kubenswrapper[4867]: I0126 11:39:12.502637 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"e7f5d75c-d7d5-44d9-b571-1ab86e4cf156","Type":"ContainerStarted","Data":"d2287295136814e314cfb6d9db6bc18d7fb44b21c8b0577f5b02f3d2ad6749b6"}
Jan 26 11:39:12 crc kubenswrapper[4867]: I0126 11:39:12.507716 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"a67e72b7-3136-4e60-8192-fa54044ef257","Type":"ContainerStarted","Data":"dfe590bcf4b02e2da6b02882e5df55eb4bdc9b1d28a104bcd74ff7fc9a0d7ac0"}
Jan 26 11:39:12 crc kubenswrapper[4867]: I0126 11:39:12.507760 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"a67e72b7-3136-4e60-8192-fa54044ef257","Type":"ContainerStarted","Data":"57569ab7887fe6a4392e9b2b963f1a723efba6f32f8de19f1b7ecbd6f24fab86"}
Jan 26 11:39:12 crc kubenswrapper[4867]: I0126 11:39:12.517743 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=2.304386005 podStartE2EDuration="6.517725651s" podCreationTimestamp="2026-01-26 11:39:06 +0000 UTC" firstStartedPulling="2026-01-26 11:39:07.5452695 +0000 UTC m=+1297.243844410" lastFinishedPulling="2026-01-26 11:39:11.758609146 +0000 UTC m=+1301.457184056" observedRunningTime="2026-01-26 11:39:12.510355398 +0000 UTC m=+1302.208930308" watchObservedRunningTime="2026-01-26 11:39:12.517725651 +0000 UTC m=+1302.216300561"
Jan 26 11:39:12 crc kubenswrapper[4867]: I0126 11:39:12.534303 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.598801894 podStartE2EDuration="6.534284547s" podCreationTimestamp="2026-01-26 11:39:06 +0000 UTC" firstStartedPulling="2026-01-26 11:39:07.82410777 +0000 UTC m=+1297.522682680" lastFinishedPulling="2026-01-26 11:39:11.759590423 +0000 UTC m=+1301.458165333" observedRunningTime="2026-01-26 11:39:12.525332691 +0000 UTC m=+1302.223907601" watchObservedRunningTime="2026-01-26 11:39:12.534284547 +0000 UTC m=+1302.232859457"
Jan 26 11:39:12 crc kubenswrapper[4867]: I0126 11:39:12.544505 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.792516553 podStartE2EDuration="6.544488577s" podCreationTimestamp="2026-01-26 11:39:06 +0000 UTC" firstStartedPulling="2026-01-26 11:39:08.007422474 +0000 UTC m=+1297.705997384" lastFinishedPulling="2026-01-26 11:39:11.759394498 +0000 UTC m=+1301.457969408" observedRunningTime="2026-01-26 11:39:12.538981995 +0000 UTC m=+1302.237556905" watchObservedRunningTime="2026-01-26 11:39:12.544488577 +0000 UTC m=+1302.243063487"
Jan 26 11:39:12 crc kubenswrapper[4867]: I0126 11:39:12.565028 4867 scope.go:117] "RemoveContainer" containerID="f6256bd71627a09be606483dad246bfdbe5d419a8e586a59a182396bb6d1f10d"
Jan 26 11:39:12 crc kubenswrapper[4867]: E0126 11:39:12.565205 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ironic-neutron-agent\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ironic-neutron-agent pod=ironic-neutron-agent-795fb7c76b-9ndwh_openstack(a2167905-2856-4125-81fd-a2430fe558f9)\"" pod="openstack/ironic-neutron-agent-795fb7c76b-9ndwh" podUID="a2167905-2856-4125-81fd-a2430fe558f9"
Jan 26 11:39:12 crc kubenswrapper[4867]: I0126 11:39:12.575327 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=3.048621019 podStartE2EDuration="6.575308605s" podCreationTimestamp="2026-01-26 11:39:06 +0000 UTC" firstStartedPulling="2026-01-26 11:39:08.242761509 +0000 UTC m=+1297.941336419" lastFinishedPulling="2026-01-26 11:39:11.769449095 +0000 UTC m=+1301.468024005" observedRunningTime="2026-01-26 11:39:12.56819948 +0000 UTC m=+1302.266774390" watchObservedRunningTime="2026-01-26 11:39:12.575308605 +0000 UTC m=+1302.273883515"
Jan 26 11:39:13 crc kubenswrapper[4867]: I0126 11:39:13.387906 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Jan 26 11:39:13 crc kubenswrapper[4867]: I0126 11:39:13.434557 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cdf442b0-2477-426b-8b20-63e07a2c8251-config-data\") pod \"cdf442b0-2477-426b-8b20-63e07a2c8251\" (UID: \"cdf442b0-2477-426b-8b20-63e07a2c8251\") "
Jan 26 11:39:13 crc kubenswrapper[4867]: I0126 11:39:13.434608 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cdf442b0-2477-426b-8b20-63e07a2c8251-combined-ca-bundle\") pod \"cdf442b0-2477-426b-8b20-63e07a2c8251\" (UID: \"cdf442b0-2477-426b-8b20-63e07a2c8251\") "
Jan 26 11:39:13 crc kubenswrapper[4867]: I0126 11:39:13.434837 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cdf442b0-2477-426b-8b20-63e07a2c8251-logs\") pod \"cdf442b0-2477-426b-8b20-63e07a2c8251\" (UID: \"cdf442b0-2477-426b-8b20-63e07a2c8251\") "
Jan 26 11:39:13 crc kubenswrapper[4867]: I0126 11:39:13.434876 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c62cd\" (UniqueName: \"kubernetes.io/projected/cdf442b0-2477-426b-8b20-63e07a2c8251-kube-api-access-c62cd\") pod \"cdf442b0-2477-426b-8b20-63e07a2c8251\" (UID: \"cdf442b0-2477-426b-8b20-63e07a2c8251\") "
Jan 26 11:39:13 crc kubenswrapper[4867]: I0126 11:39:13.435632 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cdf442b0-2477-426b-8b20-63e07a2c8251-logs" (OuterVolumeSpecName: "logs") pod "cdf442b0-2477-426b-8b20-63e07a2c8251" (UID: "cdf442b0-2477-426b-8b20-63e07a2c8251"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 26 11:39:13 crc kubenswrapper[4867]: I0126 11:39:13.440417 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cdf442b0-2477-426b-8b20-63e07a2c8251-kube-api-access-c62cd" (OuterVolumeSpecName: "kube-api-access-c62cd") pod "cdf442b0-2477-426b-8b20-63e07a2c8251" (UID: "cdf442b0-2477-426b-8b20-63e07a2c8251"). InnerVolumeSpecName "kube-api-access-c62cd". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 26 11:39:13 crc kubenswrapper[4867]: I0126 11:39:13.474994 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cdf442b0-2477-426b-8b20-63e07a2c8251-config-data" (OuterVolumeSpecName: "config-data") pod "cdf442b0-2477-426b-8b20-63e07a2c8251" (UID: "cdf442b0-2477-426b-8b20-63e07a2c8251"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 11:39:13 crc kubenswrapper[4867]: I0126 11:39:13.488062 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cdf442b0-2477-426b-8b20-63e07a2c8251-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "cdf442b0-2477-426b-8b20-63e07a2c8251" (UID: "cdf442b0-2477-426b-8b20-63e07a2c8251"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 11:39:13 crc kubenswrapper[4867]: I0126 11:39:13.521253 4867 generic.go:334] "Generic (PLEG): container finished" podID="cdf442b0-2477-426b-8b20-63e07a2c8251" containerID="f6a0e0ade8dcc9e9f86693fd819789cff513f039f6ffa3077fb791202da39d99" exitCode=0
Jan 26 11:39:13 crc kubenswrapper[4867]: I0126 11:39:13.521535 4867 generic.go:334] "Generic (PLEG): container finished" podID="cdf442b0-2477-426b-8b20-63e07a2c8251" containerID="148ec6df9331dff2ab45fcb3a1b46f1cd68a58afbd1b87a771757c1122f9338a" exitCode=143
Jan 26 11:39:13 crc kubenswrapper[4867]: I0126 11:39:13.522311 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"cdf442b0-2477-426b-8b20-63e07a2c8251","Type":"ContainerDied","Data":"f6a0e0ade8dcc9e9f86693fd819789cff513f039f6ffa3077fb791202da39d99"}
Jan 26 11:39:13 crc kubenswrapper[4867]: I0126 11:39:13.522355 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"cdf442b0-2477-426b-8b20-63e07a2c8251","Type":"ContainerDied","Data":"148ec6df9331dff2ab45fcb3a1b46f1cd68a58afbd1b87a771757c1122f9338a"}
Jan 26 11:39:13 crc kubenswrapper[4867]: I0126 11:39:13.522365 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"cdf442b0-2477-426b-8b20-63e07a2c8251","Type":"ContainerDied","Data":"8c2b6321e0daca0ffdb5be97a973d97787eddc225e6782d60a36e2265350200a"}
Jan 26 11:39:13 crc kubenswrapper[4867]: I0126 11:39:13.522380 4867 scope.go:117] "RemoveContainer" containerID="f6a0e0ade8dcc9e9f86693fd819789cff513f039f6ffa3077fb791202da39d99"
Jan 26 11:39:13 crc kubenswrapper[4867]: I0126 11:39:13.522415 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Jan 26 11:39:13 crc kubenswrapper[4867]: I0126 11:39:13.536812 4867 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cdf442b0-2477-426b-8b20-63e07a2c8251-logs\") on node \"crc\" DevicePath \"\""
Jan 26 11:39:13 crc kubenswrapper[4867]: I0126 11:39:13.536841 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c62cd\" (UniqueName: \"kubernetes.io/projected/cdf442b0-2477-426b-8b20-63e07a2c8251-kube-api-access-c62cd\") on node \"crc\" DevicePath \"\""
Jan 26 11:39:13 crc kubenswrapper[4867]: I0126 11:39:13.536851 4867 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cdf442b0-2477-426b-8b20-63e07a2c8251-config-data\") on node \"crc\" DevicePath \"\""
Jan 26 11:39:13 crc kubenswrapper[4867]: I0126 11:39:13.536860 4867 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cdf442b0-2477-426b-8b20-63e07a2c8251-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 26 11:39:13 crc kubenswrapper[4867]: I0126 11:39:13.544132 4867 scope.go:117] "RemoveContainer" containerID="148ec6df9331dff2ab45fcb3a1b46f1cd68a58afbd1b87a771757c1122f9338a"
Jan 26 11:39:13 crc kubenswrapper[4867]: I0126 11:39:13.595032 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"]
Jan 26 11:39:13 crc kubenswrapper[4867]: I0126 11:39:13.604406 4867 scope.go:117] "RemoveContainer" containerID="f6a0e0ade8dcc9e9f86693fd819789cff513f039f6ffa3077fb791202da39d99"
Jan 26 11:39:13 crc kubenswrapper[4867]: E0126 11:39:13.611434 4867 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f6a0e0ade8dcc9e9f86693fd819789cff513f039f6ffa3077fb791202da39d99\": container with ID starting with f6a0e0ade8dcc9e9f86693fd819789cff513f039f6ffa3077fb791202da39d99 not found: ID does not exist" containerID="f6a0e0ade8dcc9e9f86693fd819789cff513f039f6ffa3077fb791202da39d99"
Jan 26 11:39:13 crc kubenswrapper[4867]: I0126 11:39:13.611495 4867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f6a0e0ade8dcc9e9f86693fd819789cff513f039f6ffa3077fb791202da39d99"} err="failed to get container status \"f6a0e0ade8dcc9e9f86693fd819789cff513f039f6ffa3077fb791202da39d99\": rpc error: code = NotFound desc = could not find container \"f6a0e0ade8dcc9e9f86693fd819789cff513f039f6ffa3077fb791202da39d99\": container with ID starting with f6a0e0ade8dcc9e9f86693fd819789cff513f039f6ffa3077fb791202da39d99 not found: ID does not exist"
Jan 26 11:39:13 crc kubenswrapper[4867]: I0126 11:39:13.611542 4867 scope.go:117] "RemoveContainer" containerID="148ec6df9331dff2ab45fcb3a1b46f1cd68a58afbd1b87a771757c1122f9338a"
Jan 26 11:39:13 crc kubenswrapper[4867]: E0126 11:39:13.624830 4867 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"148ec6df9331dff2ab45fcb3a1b46f1cd68a58afbd1b87a771757c1122f9338a\": container with ID starting with 148ec6df9331dff2ab45fcb3a1b46f1cd68a58afbd1b87a771757c1122f9338a not found: ID does not exist" containerID="148ec6df9331dff2ab45fcb3a1b46f1cd68a58afbd1b87a771757c1122f9338a"
Jan 26 11:39:13 crc kubenswrapper[4867]: I0126 11:39:13.624876 4867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"148ec6df9331dff2ab45fcb3a1b46f1cd68a58afbd1b87a771757c1122f9338a"} err="failed to get container status \"148ec6df9331dff2ab45fcb3a1b46f1cd68a58afbd1b87a771757c1122f9338a\": rpc error: code = NotFound desc = could not find container \"148ec6df9331dff2ab45fcb3a1b46f1cd68a58afbd1b87a771757c1122f9338a\": container with ID starting with 148ec6df9331dff2ab45fcb3a1b46f1cd68a58afbd1b87a771757c1122f9338a not found: ID does not exist"
Jan 26 11:39:13 crc kubenswrapper[4867]: I0126 11:39:13.624900 4867 scope.go:117] "RemoveContainer" containerID="f6a0e0ade8dcc9e9f86693fd819789cff513f039f6ffa3077fb791202da39d99"
Jan 26 11:39:13 crc kubenswrapper[4867]: I0126 11:39:13.625622 4867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f6a0e0ade8dcc9e9f86693fd819789cff513f039f6ffa3077fb791202da39d99"} err="failed to get container status \"f6a0e0ade8dcc9e9f86693fd819789cff513f039f6ffa3077fb791202da39d99\": rpc error: code = NotFound desc = could not find container \"f6a0e0ade8dcc9e9f86693fd819789cff513f039f6ffa3077fb791202da39d99\": container with ID starting with f6a0e0ade8dcc9e9f86693fd819789cff513f039f6ffa3077fb791202da39d99 not found: ID does not exist"
Jan 26 11:39:13 crc kubenswrapper[4867]: I0126 11:39:13.625723 4867 scope.go:117] "RemoveContainer" containerID="148ec6df9331dff2ab45fcb3a1b46f1cd68a58afbd1b87a771757c1122f9338a"
Jan 26 11:39:13 crc kubenswrapper[4867]: I0126 11:39:13.626117 4867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"148ec6df9331dff2ab45fcb3a1b46f1cd68a58afbd1b87a771757c1122f9338a"} err="failed to get container status \"148ec6df9331dff2ab45fcb3a1b46f1cd68a58afbd1b87a771757c1122f9338a\": rpc error: code = NotFound desc = could not find container \"148ec6df9331dff2ab45fcb3a1b46f1cd68a58afbd1b87a771757c1122f9338a\": container with ID starting with 148ec6df9331dff2ab45fcb3a1b46f1cd68a58afbd1b87a771757c1122f9338a not found: ID does not exist"
Jan 26 11:39:13 crc kubenswrapper[4867]: I0126 11:39:13.635002 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"]
Jan 26 11:39:13 crc kubenswrapper[4867]: I0126 11:39:13.648476 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"]
Jan 26 11:39:13 crc kubenswrapper[4867]: E0126 11:39:13.648897 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cdf442b0-2477-426b-8b20-63e07a2c8251" containerName="nova-metadata-log"
Jan 26 11:39:13 crc kubenswrapper[4867]: I0126 11:39:13.648914 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="cdf442b0-2477-426b-8b20-63e07a2c8251" containerName="nova-metadata-log"
Jan 26 11:39:13 crc kubenswrapper[4867]: E0126 11:39:13.648930 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cdf442b0-2477-426b-8b20-63e07a2c8251" containerName="nova-metadata-metadata"
Jan 26 11:39:13 crc kubenswrapper[4867]: I0126 11:39:13.648937 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="cdf442b0-2477-426b-8b20-63e07a2c8251" containerName="nova-metadata-metadata"
Jan 26 11:39:13 crc kubenswrapper[4867]: I0126 11:39:13.649129 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="cdf442b0-2477-426b-8b20-63e07a2c8251" containerName="nova-metadata-metadata"
Jan 26 11:39:13 crc kubenswrapper[4867]: I0126 11:39:13.649148 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="cdf442b0-2477-426b-8b20-63e07a2c8251" containerName="nova-metadata-log"
Jan 26 11:39:13 crc kubenswrapper[4867]: I0126 11:39:13.650166 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Jan 26 11:39:13 crc kubenswrapper[4867]: I0126 11:39:13.652948 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc"
Jan 26 11:39:13 crc kubenswrapper[4867]: I0126 11:39:13.653142 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data"
Jan 26 11:39:13 crc kubenswrapper[4867]: I0126 11:39:13.664913 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"]
Jan 26 11:39:13 crc kubenswrapper[4867]: I0126 11:39:13.740478 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0f43a21d-70c6-46c1-8b74-452f6a5345f0-logs\") pod \"nova-metadata-0\" (UID: \"0f43a21d-70c6-46c1-8b74-452f6a5345f0\") " pod="openstack/nova-metadata-0"
Jan 26 11:39:13 crc kubenswrapper[4867]: I0126 11:39:13.740592 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0f43a21d-70c6-46c1-8b74-452f6a5345f0-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"0f43a21d-70c6-46c1-8b74-452f6a5345f0\") " pod="openstack/nova-metadata-0"
Jan 26 11:39:13 crc kubenswrapper[4867]: I0126 11:39:13.740686 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/0f43a21d-70c6-46c1-8b74-452f6a5345f0-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"0f43a21d-70c6-46c1-8b74-452f6a5345f0\") " pod="openstack/nova-metadata-0"
Jan 26 11:39:13 crc kubenswrapper[4867]: I0126 11:39:13.740728 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vvfmt\" (UniqueName: \"kubernetes.io/projected/0f43a21d-70c6-46c1-8b74-452f6a5345f0-kube-api-access-vvfmt\") pod \"nova-metadata-0\" (UID: \"0f43a21d-70c6-46c1-8b74-452f6a5345f0\") " pod="openstack/nova-metadata-0"
Jan 26 11:39:13 crc kubenswrapper[4867]: I0126 11:39:13.740792 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0f43a21d-70c6-46c1-8b74-452f6a5345f0-config-data\") pod \"nova-metadata-0\" (UID: \"0f43a21d-70c6-46c1-8b74-452f6a5345f0\") " pod="openstack/nova-metadata-0"
Jan 26 11:39:13 crc kubenswrapper[4867]: I0126 11:39:13.842065 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/0f43a21d-70c6-46c1-8b74-452f6a5345f0-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"0f43a21d-70c6-46c1-8b74-452f6a5345f0\") " pod="openstack/nova-metadata-0"
Jan 26 11:39:13 crc kubenswrapper[4867]: I0126 11:39:13.842114 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vvfmt\" (UniqueName: \"kubernetes.io/projected/0f43a21d-70c6-46c1-8b74-452f6a5345f0-kube-api-access-vvfmt\") pod \"nova-metadata-0\" (UID: \"0f43a21d-70c6-46c1-8b74-452f6a5345f0\") " pod="openstack/nova-metadata-0"
Jan 26 11:39:13 crc kubenswrapper[4867]: I0126 11:39:13.842170 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0f43a21d-70c6-46c1-8b74-452f6a5345f0-config-data\") pod \"nova-metadata-0\" (UID: \"0f43a21d-70c6-46c1-8b74-452f6a5345f0\") " pod="openstack/nova-metadata-0"
Jan 26 11:39:13 crc kubenswrapper[4867]: I0126 11:39:13.842235 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0f43a21d-70c6-46c1-8b74-452f6a5345f0-logs\") pod \"nova-metadata-0\" (UID: \"0f43a21d-70c6-46c1-8b74-452f6a5345f0\") " pod="openstack/nova-metadata-0"
Jan 26 11:39:13 crc kubenswrapper[4867]: I0126 11:39:13.842294 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0f43a21d-70c6-46c1-8b74-452f6a5345f0-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"0f43a21d-70c6-46c1-8b74-452f6a5345f0\") " pod="openstack/nova-metadata-0"
Jan 26 11:39:13 crc kubenswrapper[4867]: I0126 11:39:13.842766 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0f43a21d-70c6-46c1-8b74-452f6a5345f0-logs\") pod \"nova-metadata-0\" (UID: \"0f43a21d-70c6-46c1-8b74-452f6a5345f0\") " pod="openstack/nova-metadata-0"
Jan 26 11:39:13 crc kubenswrapper[4867]: I0126 11:39:13.846345 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0f43a21d-70c6-46c1-8b74-452f6a5345f0-config-data\") pod \"nova-metadata-0\" (UID: \"0f43a21d-70c6-46c1-8b74-452f6a5345f0\") " pod="openstack/nova-metadata-0"
Jan 26 11:39:13 crc kubenswrapper[4867]: I0126 11:39:13.846746 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/0f43a21d-70c6-46c1-8b74-452f6a5345f0-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"0f43a21d-70c6-46c1-8b74-452f6a5345f0\") " pod="openstack/nova-metadata-0"
Jan 26 11:39:13 crc kubenswrapper[4867]: I0126 11:39:13.849804 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0f43a21d-70c6-46c1-8b74-452f6a5345f0-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"0f43a21d-70c6-46c1-8b74-452f6a5345f0\") " pod="openstack/nova-metadata-0"
Jan 26 11:39:13 crc kubenswrapper[4867]: I0126 11:39:13.860402 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vvfmt\" (UniqueName: \"kubernetes.io/projected/0f43a21d-70c6-46c1-8b74-452f6a5345f0-kube-api-access-vvfmt\") pod \"nova-metadata-0\" (UID: \"0f43a21d-70c6-46c1-8b74-452f6a5345f0\") " pod="openstack/nova-metadata-0"
Jan 26 11:39:13 crc kubenswrapper[4867]: I0126 11:39:13.971971 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Jan 26 11:39:14 crc kubenswrapper[4867]: I0126 11:39:14.453148 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"]
Jan 26 11:39:14 crc kubenswrapper[4867]: W0126 11:39:14.461997 4867 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0f43a21d_70c6_46c1_8b74_452f6a5345f0.slice/crio-5b47605e790b9a920c19fed8fcedea4b9f49a83c4b71fea62e9a0bcd65673c8e WatchSource:0}: Error finding container 5b47605e790b9a920c19fed8fcedea4b9f49a83c4b71fea62e9a0bcd65673c8e: Status 404 returned error can't find the container with id 5b47605e790b9a920c19fed8fcedea4b9f49a83c4b71fea62e9a0bcd65673c8e
Jan 26 11:39:14 crc kubenswrapper[4867]: I0126 11:39:14.531846 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"0f43a21d-70c6-46c1-8b74-452f6a5345f0","Type":"ContainerStarted","Data":"5b47605e790b9a920c19fed8fcedea4b9f49a83c4b71fea62e9a0bcd65673c8e"}
Jan 26 11:39:14 crc kubenswrapper[4867]: I0126 11:39:14.573715 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cdf442b0-2477-426b-8b20-63e07a2c8251" path="/var/lib/kubelet/pods/cdf442b0-2477-426b-8b20-63e07a2c8251/volumes"
Jan 26 11:39:15 crc kubenswrapper[4867]: I0126 11:39:15.544781 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"0f43a21d-70c6-46c1-8b74-452f6a5345f0","Type":"ContainerStarted","Data":"a783365f7f23d9cef8183faf01e2de663fc1606d008134e648b021c47c05f458"}
Jan 26 11:39:15 crc kubenswrapper[4867]: I0126 11:39:15.545304 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"0f43a21d-70c6-46c1-8b74-452f6a5345f0","Type":"ContainerStarted","Data":"4f6de5cfb0b7cff60fa187ca0e9c635479173148e87ff7f93d2ba1b16ff0f6a0"}
Jan 26 11:39:16 crc kubenswrapper[4867]: I0126 11:39:16.555827 4867 generic.go:334] "Generic (PLEG): container finished" podID="cc4ef2e8-1b6a-42a2-a887-7a14de7fa7a2" containerID="d51c5079a8c58b62fc0b3fbaf13fc05462f1920787f6185f11d57cab3cac3ab6" exitCode=0
Jan 26 11:39:16 crc kubenswrapper[4867]: I0126 11:39:16.556055 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-26gmr" event={"ID":"cc4ef2e8-1b6a-42a2-a887-7a14de7fa7a2","Type":"ContainerDied","Data":"d51c5079a8c58b62fc0b3fbaf13fc05462f1920787f6185f11d57cab3cac3ab6"}
Jan 26 11:39:16 crc kubenswrapper[4867]: I0126 11:39:16.589641 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=3.589617616 podStartE2EDuration="3.589617616s" podCreationTimestamp="2026-01-26 11:39:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 11:39:16.57852733 +0000 UTC m=+1306.277102240" watchObservedRunningTime="2026-01-26 11:39:16.589617616 +0000 UTC m=+1306.288192536"
Jan 26 11:39:16 crc kubenswrapper[4867]: I0126 11:39:16.828910 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0"
Jan 26 11:39:17 crc kubenswrapper[4867]: I0126 11:39:17.302211 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0"
Jan 26 11:39:17 crc kubenswrapper[4867]: I0126 11:39:17.302294 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0"
Jan 26 11:39:17 crc kubenswrapper[4867]: I0126 11:39:17.327210 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0"
Jan 26 11:39:17 crc kubenswrapper[4867]: I0126 11:39:17.327291 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0"
Jan 26 11:39:17 crc kubenswrapper[4867]: I0126 11:39:17.345211 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0"
Jan 26 11:39:17 crc kubenswrapper[4867]: I0126 11:39:17.397456 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-bccf8f775-9s9w5"
Jan 26 11:39:17 crc kubenswrapper[4867]: I0126 11:39:17.452820 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6578955fd5-klbvt"]
Jan 26 11:39:17 crc kubenswrapper[4867]: I0126 11:39:17.453039 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-6578955fd5-klbvt" podUID="59401c77-eb6e-46f4-8b16-c57ac6f97f24" containerName="dnsmasq-dns" containerID="cri-o://8d80ce50dd8f216d11e36ac6ec186a0aca0e9e47efa3036167e74f4c385c6ab5" gracePeriod=10
Jan 26 11:39:17 crc kubenswrapper[4867]: I0126 11:39:17.606619 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0"
Jan 26 11:39:18 crc kubenswrapper[4867]: I0126 11:39:18.030035 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-26gmr"
Jan 26 11:39:18 crc kubenswrapper[4867]: I0126 11:39:18.119849 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-59pbk\" (UniqueName: \"kubernetes.io/projected/cc4ef2e8-1b6a-42a2-a887-7a14de7fa7a2-kube-api-access-59pbk\") pod \"cc4ef2e8-1b6a-42a2-a887-7a14de7fa7a2\" (UID: \"cc4ef2e8-1b6a-42a2-a887-7a14de7fa7a2\") "
Jan 26 11:39:18 crc kubenswrapper[4867]: I0126 11:39:18.120081 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cc4ef2e8-1b6a-42a2-a887-7a14de7fa7a2-scripts\") pod \"cc4ef2e8-1b6a-42a2-a887-7a14de7fa7a2\" (UID: \"cc4ef2e8-1b6a-42a2-a887-7a14de7fa7a2\") "
Jan 26 11:39:18 crc kubenswrapper[4867]: I0126 11:39:18.120138 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cc4ef2e8-1b6a-42a2-a887-7a14de7fa7a2-combined-ca-bundle\") pod \"cc4ef2e8-1b6a-42a2-a887-7a14de7fa7a2\" (UID: \"cc4ef2e8-1b6a-42a2-a887-7a14de7fa7a2\") "
Jan 26 11:39:18 crc kubenswrapper[4867]: I0126 11:39:18.120186 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cc4ef2e8-1b6a-42a2-a887-7a14de7fa7a2-config-data\") pod \"cc4ef2e8-1b6a-42a2-a887-7a14de7fa7a2\" (UID: \"cc4ef2e8-1b6a-42a2-a887-7a14de7fa7a2\") "
Jan 26 11:39:18 crc kubenswrapper[4867]: I0126 11:39:18.134095 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cc4ef2e8-1b6a-42a2-a887-7a14de7fa7a2-kube-api-access-59pbk" (OuterVolumeSpecName: "kube-api-access-59pbk") pod "cc4ef2e8-1b6a-42a2-a887-7a14de7fa7a2" (UID: "cc4ef2e8-1b6a-42a2-a887-7a14de7fa7a2"). InnerVolumeSpecName "kube-api-access-59pbk". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 26 11:39:18 crc kubenswrapper[4867]: I0126 11:39:18.134583 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cc4ef2e8-1b6a-42a2-a887-7a14de7fa7a2-scripts" (OuterVolumeSpecName: "scripts") pod "cc4ef2e8-1b6a-42a2-a887-7a14de7fa7a2" (UID: "cc4ef2e8-1b6a-42a2-a887-7a14de7fa7a2"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 11:39:18 crc kubenswrapper[4867]: I0126 11:39:18.164067 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cc4ef2e8-1b6a-42a2-a887-7a14de7fa7a2-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "cc4ef2e8-1b6a-42a2-a887-7a14de7fa7a2" (UID: "cc4ef2e8-1b6a-42a2-a887-7a14de7fa7a2"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 11:39:18 crc kubenswrapper[4867]: I0126 11:39:18.182731 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cc4ef2e8-1b6a-42a2-a887-7a14de7fa7a2-config-data" (OuterVolumeSpecName: "config-data") pod "cc4ef2e8-1b6a-42a2-a887-7a14de7fa7a2" (UID: "cc4ef2e8-1b6a-42a2-a887-7a14de7fa7a2"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 11:39:18 crc kubenswrapper[4867]: I0126 11:39:18.229054 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-59pbk\" (UniqueName: \"kubernetes.io/projected/cc4ef2e8-1b6a-42a2-a887-7a14de7fa7a2-kube-api-access-59pbk\") on node \"crc\" DevicePath \"\""
Jan 26 11:39:18 crc kubenswrapper[4867]: I0126 11:39:18.229090 4867 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cc4ef2e8-1b6a-42a2-a887-7a14de7fa7a2-scripts\") on node \"crc\" DevicePath \"\""
Jan 26 11:39:18 crc kubenswrapper[4867]: I0126 11:39:18.229103 4867 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cc4ef2e8-1b6a-42a2-a887-7a14de7fa7a2-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 26 11:39:18 crc kubenswrapper[4867]: I0126 11:39:18.229114 4867 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cc4ef2e8-1b6a-42a2-a887-7a14de7fa7a2-config-data\") on node \"crc\" DevicePath \"\""
Jan 26 11:39:18 crc kubenswrapper[4867]: I0126 11:39:18.399973 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6578955fd5-klbvt"
Jan 26 11:39:18 crc kubenswrapper[4867]: I0126 11:39:18.410399 4867 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="a67e72b7-3136-4e60-8192-fa54044ef257" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.197:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 26 11:39:18 crc kubenswrapper[4867]: I0126 11:39:18.410443 4867 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="a67e72b7-3136-4e60-8192-fa54044ef257" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.197:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 26 11:39:18 crc kubenswrapper[4867]: I0126 11:39:18.437752 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/59401c77-eb6e-46f4-8b16-c57ac6f97f24-ovsdbserver-sb\") pod \"59401c77-eb6e-46f4-8b16-c57ac6f97f24\" (UID: \"59401c77-eb6e-46f4-8b16-c57ac6f97f24\") "
Jan 26 11:39:18 crc kubenswrapper[4867]: I0126 11:39:18.437985 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/59401c77-eb6e-46f4-8b16-c57ac6f97f24-config\") pod \"59401c77-eb6e-46f4-8b16-c57ac6f97f24\" (UID: \"59401c77-eb6e-46f4-8b16-c57ac6f97f24\") "
Jan 26 11:39:18 crc kubenswrapper[4867]: I0126 11:39:18.438018 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/59401c77-eb6e-46f4-8b16-c57ac6f97f24-ovsdbserver-nb\") pod \"59401c77-eb6e-46f4-8b16-c57ac6f97f24\" (UID: \"59401c77-eb6e-46f4-8b16-c57ac6f97f24\") "
Jan 26 11:39:18 crc kubenswrapper[4867]: I0126 11:39:18.438087 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/59401c77-eb6e-46f4-8b16-c57ac6f97f24-dns-svc\") pod \"59401c77-eb6e-46f4-8b16-c57ac6f97f24\" (UID: \"59401c77-eb6e-46f4-8b16-c57ac6f97f24\") "
Jan 26 11:39:18 crc kubenswrapper[4867]: I0126 11:39:18.438116 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/59401c77-eb6e-46f4-8b16-c57ac6f97f24-dns-swift-storage-0\") pod \"59401c77-eb6e-46f4-8b16-c57ac6f97f24\" (UID: \"59401c77-eb6e-46f4-8b16-c57ac6f97f24\") "
Jan 26 11:39:18 crc kubenswrapper[4867]: I0126 11:39:18.438248 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j7cjg\" (UniqueName: \"kubernetes.io/projected/59401c77-eb6e-46f4-8b16-c57ac6f97f24-kube-api-access-j7cjg\") pod \"59401c77-eb6e-46f4-8b16-c57ac6f97f24\" (UID: \"59401c77-eb6e-46f4-8b16-c57ac6f97f24\") "
Jan 26 11:39:18 crc kubenswrapper[4867]: I0126 11:39:18.450851 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/59401c77-eb6e-46f4-8b16-c57ac6f97f24-kube-api-access-j7cjg" (OuterVolumeSpecName: "kube-api-access-j7cjg") pod "59401c77-eb6e-46f4-8b16-c57ac6f97f24" (UID: "59401c77-eb6e-46f4-8b16-c57ac6f97f24"). InnerVolumeSpecName "kube-api-access-j7cjg". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 26 11:39:18 crc kubenswrapper[4867]: I0126 11:39:18.511804 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/59401c77-eb6e-46f4-8b16-c57ac6f97f24-config" (OuterVolumeSpecName: "config") pod "59401c77-eb6e-46f4-8b16-c57ac6f97f24" (UID: "59401c77-eb6e-46f4-8b16-c57ac6f97f24"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 26 11:39:18 crc kubenswrapper[4867]: I0126 11:39:18.518505 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/59401c77-eb6e-46f4-8b16-c57ac6f97f24-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "59401c77-eb6e-46f4-8b16-c57ac6f97f24" (UID: "59401c77-eb6e-46f4-8b16-c57ac6f97f24"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 26 11:39:18 crc kubenswrapper[4867]: I0126 11:39:18.529371 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/59401c77-eb6e-46f4-8b16-c57ac6f97f24-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "59401c77-eb6e-46f4-8b16-c57ac6f97f24" (UID: "59401c77-eb6e-46f4-8b16-c57ac6f97f24"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 26 11:39:18 crc kubenswrapper[4867]: I0126 11:39:18.541486 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/59401c77-eb6e-46f4-8b16-c57ac6f97f24-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "59401c77-eb6e-46f4-8b16-c57ac6f97f24" (UID: "59401c77-eb6e-46f4-8b16-c57ac6f97f24"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 26 11:39:18 crc kubenswrapper[4867]: I0126 11:39:18.542967 4867 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/59401c77-eb6e-46f4-8b16-c57ac6f97f24-ovsdbserver-sb\") on node \"crc\" DevicePath \"\""
Jan 26 11:39:18 crc kubenswrapper[4867]: I0126 11:39:18.542998 4867 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/59401c77-eb6e-46f4-8b16-c57ac6f97f24-config\") on node \"crc\" DevicePath \"\""
Jan 26 11:39:18 crc kubenswrapper[4867]: I0126 11:39:18.543011 4867 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/59401c77-eb6e-46f4-8b16-c57ac6f97f24-ovsdbserver-nb\") on node \"crc\" DevicePath \"\""
Jan 26 11:39:18 crc kubenswrapper[4867]: I0126 11:39:18.543026 4867 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/59401c77-eb6e-46f4-8b16-c57ac6f97f24-dns-svc\") on node \"crc\" DevicePath \"\""
Jan 26 11:39:18 crc kubenswrapper[4867]: I0126 11:39:18.543037 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j7cjg\" (UniqueName: \"kubernetes.io/projected/59401c77-eb6e-46f4-8b16-c57ac6f97f24-kube-api-access-j7cjg\") on node \"crc\" DevicePath \"\""
Jan 26 11:39:18 crc kubenswrapper[4867]: I0126 11:39:18.546504 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/59401c77-eb6e-46f4-8b16-c57ac6f97f24-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "59401c77-eb6e-46f4-8b16-c57ac6f97f24" (UID: "59401c77-eb6e-46f4-8b16-c57ac6f97f24"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 26 11:39:18 crc kubenswrapper[4867]: I0126 11:39:18.647539 4867 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/59401c77-eb6e-46f4-8b16-c57ac6f97f24-dns-swift-storage-0\") on node \"crc\" DevicePath \"\""
Jan 26 11:39:18 crc kubenswrapper[4867]: I0126 11:39:18.654984 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6578955fd5-klbvt"
Jan 26 11:39:18 crc kubenswrapper[4867]: I0126 11:39:18.656353 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6578955fd5-klbvt" event={"ID":"59401c77-eb6e-46f4-8b16-c57ac6f97f24","Type":"ContainerDied","Data":"8d80ce50dd8f216d11e36ac6ec186a0aca0e9e47efa3036167e74f4c385c6ab5"}
Jan 26 11:39:18 crc kubenswrapper[4867]: I0126 11:39:18.656499 4867 scope.go:117] "RemoveContainer" containerID="8d80ce50dd8f216d11e36ac6ec186a0aca0e9e47efa3036167e74f4c385c6ab5"
Jan 26 11:39:18 crc kubenswrapper[4867]: I0126 11:39:18.666332 4867 generic.go:334] "Generic (PLEG): container finished" podID="59401c77-eb6e-46f4-8b16-c57ac6f97f24" containerID="8d80ce50dd8f216d11e36ac6ec186a0aca0e9e47efa3036167e74f4c385c6ab5" exitCode=0
Jan 26 11:39:18 crc kubenswrapper[4867]: I0126 11:39:18.666818 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6578955fd5-klbvt" event={"ID":"59401c77-eb6e-46f4-8b16-c57ac6f97f24","Type":"ContainerDied","Data":"3dace83017e3661f381209d711b2739327ee0fb1738f262b973e9584f5cbe81b"}
Jan 26 11:39:18 crc kubenswrapper[4867]: I0126 11:39:18.671133 4867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-26gmr" Jan 26 11:39:18 crc kubenswrapper[4867]: I0126 11:39:18.671326 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-26gmr" event={"ID":"cc4ef2e8-1b6a-42a2-a887-7a14de7fa7a2","Type":"ContainerDied","Data":"2e1ab4728949f5438a802db2e99dd1a6fa44ca6007557fcbd9493f576e57105f"} Jan 26 11:39:18 crc kubenswrapper[4867]: I0126 11:39:18.671434 4867 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2e1ab4728949f5438a802db2e99dd1a6fa44ca6007557fcbd9493f576e57105f" Jan 26 11:39:18 crc kubenswrapper[4867]: I0126 11:39:18.691966 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6578955fd5-klbvt"] Jan 26 11:39:18 crc kubenswrapper[4867]: I0126 11:39:18.697067 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6578955fd5-klbvt"] Jan 26 11:39:18 crc kubenswrapper[4867]: I0126 11:39:18.697440 4867 scope.go:117] "RemoveContainer" containerID="1d92768073f906f3cda4e76a3dc4ac64871e865f018109a5e600810d68bb51a5" Jan 26 11:39:18 crc kubenswrapper[4867]: I0126 11:39:18.718594 4867 scope.go:117] "RemoveContainer" containerID="8d80ce50dd8f216d11e36ac6ec186a0aca0e9e47efa3036167e74f4c385c6ab5" Jan 26 11:39:18 crc kubenswrapper[4867]: E0126 11:39:18.719110 4867 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8d80ce50dd8f216d11e36ac6ec186a0aca0e9e47efa3036167e74f4c385c6ab5\": container with ID starting with 8d80ce50dd8f216d11e36ac6ec186a0aca0e9e47efa3036167e74f4c385c6ab5 not found: ID does not exist" containerID="8d80ce50dd8f216d11e36ac6ec186a0aca0e9e47efa3036167e74f4c385c6ab5" Jan 26 11:39:18 crc kubenswrapper[4867]: I0126 11:39:18.719213 4867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8d80ce50dd8f216d11e36ac6ec186a0aca0e9e47efa3036167e74f4c385c6ab5"} err="failed to 
get container status \"8d80ce50dd8f216d11e36ac6ec186a0aca0e9e47efa3036167e74f4c385c6ab5\": rpc error: code = NotFound desc = could not find container \"8d80ce50dd8f216d11e36ac6ec186a0aca0e9e47efa3036167e74f4c385c6ab5\": container with ID starting with 8d80ce50dd8f216d11e36ac6ec186a0aca0e9e47efa3036167e74f4c385c6ab5 not found: ID does not exist" Jan 26 11:39:18 crc kubenswrapper[4867]: I0126 11:39:18.719312 4867 scope.go:117] "RemoveContainer" containerID="1d92768073f906f3cda4e76a3dc4ac64871e865f018109a5e600810d68bb51a5" Jan 26 11:39:18 crc kubenswrapper[4867]: E0126 11:39:18.719715 4867 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1d92768073f906f3cda4e76a3dc4ac64871e865f018109a5e600810d68bb51a5\": container with ID starting with 1d92768073f906f3cda4e76a3dc4ac64871e865f018109a5e600810d68bb51a5 not found: ID does not exist" containerID="1d92768073f906f3cda4e76a3dc4ac64871e865f018109a5e600810d68bb51a5" Jan 26 11:39:18 crc kubenswrapper[4867]: I0126 11:39:18.719754 4867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1d92768073f906f3cda4e76a3dc4ac64871e865f018109a5e600810d68bb51a5"} err="failed to get container status \"1d92768073f906f3cda4e76a3dc4ac64871e865f018109a5e600810d68bb51a5\": rpc error: code = NotFound desc = could not find container \"1d92768073f906f3cda4e76a3dc4ac64871e865f018109a5e600810d68bb51a5\": container with ID starting with 1d92768073f906f3cda4e76a3dc4ac64871e865f018109a5e600810d68bb51a5 not found: ID does not exist" Jan 26 11:39:18 crc kubenswrapper[4867]: I0126 11:39:18.811955 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 26 11:39:18 crc kubenswrapper[4867]: I0126 11:39:18.812168 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="a67e72b7-3136-4e60-8192-fa54044ef257" containerName="nova-api-log" 
containerID="cri-o://57569ab7887fe6a4392e9b2b963f1a723efba6f32f8de19f1b7ecbd6f24fab86" gracePeriod=30 Jan 26 11:39:18 crc kubenswrapper[4867]: I0126 11:39:18.812347 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="a67e72b7-3136-4e60-8192-fa54044ef257" containerName="nova-api-api" containerID="cri-o://dfe590bcf4b02e2da6b02882e5df55eb4bdc9b1d28a104bcd74ff7fc9a0d7ac0" gracePeriod=30 Jan 26 11:39:18 crc kubenswrapper[4867]: I0126 11:39:18.828707 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Jan 26 11:39:18 crc kubenswrapper[4867]: I0126 11:39:18.842068 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 26 11:39:18 crc kubenswrapper[4867]: I0126 11:39:18.844504 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="0f43a21d-70c6-46c1-8b74-452f6a5345f0" containerName="nova-metadata-log" containerID="cri-o://4f6de5cfb0b7cff60fa187ca0e9c635479173148e87ff7f93d2ba1b16ff0f6a0" gracePeriod=30 Jan 26 11:39:18 crc kubenswrapper[4867]: I0126 11:39:18.844584 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="0f43a21d-70c6-46c1-8b74-452f6a5345f0" containerName="nova-metadata-metadata" containerID="cri-o://a783365f7f23d9cef8183faf01e2de663fc1606d008134e648b021c47c05f458" gracePeriod=30 Jan 26 11:39:18 crc kubenswrapper[4867]: I0126 11:39:18.972926 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 26 11:39:18 crc kubenswrapper[4867]: I0126 11:39:18.972983 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 26 11:39:19 crc kubenswrapper[4867]: I0126 11:39:19.411064 4867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 26 11:39:19 crc kubenswrapper[4867]: I0126 11:39:19.460230 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vvfmt\" (UniqueName: \"kubernetes.io/projected/0f43a21d-70c6-46c1-8b74-452f6a5345f0-kube-api-access-vvfmt\") pod \"0f43a21d-70c6-46c1-8b74-452f6a5345f0\" (UID: \"0f43a21d-70c6-46c1-8b74-452f6a5345f0\") " Jan 26 11:39:19 crc kubenswrapper[4867]: I0126 11:39:19.460448 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0f43a21d-70c6-46c1-8b74-452f6a5345f0-combined-ca-bundle\") pod \"0f43a21d-70c6-46c1-8b74-452f6a5345f0\" (UID: \"0f43a21d-70c6-46c1-8b74-452f6a5345f0\") " Jan 26 11:39:19 crc kubenswrapper[4867]: I0126 11:39:19.460583 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0f43a21d-70c6-46c1-8b74-452f6a5345f0-logs\") pod \"0f43a21d-70c6-46c1-8b74-452f6a5345f0\" (UID: \"0f43a21d-70c6-46c1-8b74-452f6a5345f0\") " Jan 26 11:39:19 crc kubenswrapper[4867]: I0126 11:39:19.460626 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0f43a21d-70c6-46c1-8b74-452f6a5345f0-config-data\") pod \"0f43a21d-70c6-46c1-8b74-452f6a5345f0\" (UID: \"0f43a21d-70c6-46c1-8b74-452f6a5345f0\") " Jan 26 11:39:19 crc kubenswrapper[4867]: I0126 11:39:19.460685 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/0f43a21d-70c6-46c1-8b74-452f6a5345f0-nova-metadata-tls-certs\") pod \"0f43a21d-70c6-46c1-8b74-452f6a5345f0\" (UID: \"0f43a21d-70c6-46c1-8b74-452f6a5345f0\") " Jan 26 11:39:19 crc kubenswrapper[4867]: I0126 11:39:19.461171 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/0f43a21d-70c6-46c1-8b74-452f6a5345f0-logs" (OuterVolumeSpecName: "logs") pod "0f43a21d-70c6-46c1-8b74-452f6a5345f0" (UID: "0f43a21d-70c6-46c1-8b74-452f6a5345f0"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 11:39:19 crc kubenswrapper[4867]: I0126 11:39:19.461420 4867 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0f43a21d-70c6-46c1-8b74-452f6a5345f0-logs\") on node \"crc\" DevicePath \"\"" Jan 26 11:39:19 crc kubenswrapper[4867]: I0126 11:39:19.468979 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0f43a21d-70c6-46c1-8b74-452f6a5345f0-kube-api-access-vvfmt" (OuterVolumeSpecName: "kube-api-access-vvfmt") pod "0f43a21d-70c6-46c1-8b74-452f6a5345f0" (UID: "0f43a21d-70c6-46c1-8b74-452f6a5345f0"). InnerVolumeSpecName "kube-api-access-vvfmt". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 11:39:19 crc kubenswrapper[4867]: I0126 11:39:19.494758 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0f43a21d-70c6-46c1-8b74-452f6a5345f0-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0f43a21d-70c6-46c1-8b74-452f6a5345f0" (UID: "0f43a21d-70c6-46c1-8b74-452f6a5345f0"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 11:39:19 crc kubenswrapper[4867]: I0126 11:39:19.500451 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0f43a21d-70c6-46c1-8b74-452f6a5345f0-config-data" (OuterVolumeSpecName: "config-data") pod "0f43a21d-70c6-46c1-8b74-452f6a5345f0" (UID: "0f43a21d-70c6-46c1-8b74-452f6a5345f0"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 11:39:19 crc kubenswrapper[4867]: I0126 11:39:19.541816 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0f43a21d-70c6-46c1-8b74-452f6a5345f0-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "0f43a21d-70c6-46c1-8b74-452f6a5345f0" (UID: "0f43a21d-70c6-46c1-8b74-452f6a5345f0"). InnerVolumeSpecName "nova-metadata-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 11:39:19 crc kubenswrapper[4867]: I0126 11:39:19.564779 4867 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0f43a21d-70c6-46c1-8b74-452f6a5345f0-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 11:39:19 crc kubenswrapper[4867]: I0126 11:39:19.564835 4867 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/0f43a21d-70c6-46c1-8b74-452f6a5345f0-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 26 11:39:19 crc kubenswrapper[4867]: I0126 11:39:19.564849 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vvfmt\" (UniqueName: \"kubernetes.io/projected/0f43a21d-70c6-46c1-8b74-452f6a5345f0-kube-api-access-vvfmt\") on node \"crc\" DevicePath \"\"" Jan 26 11:39:19 crc kubenswrapper[4867]: I0126 11:39:19.564860 4867 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0f43a21d-70c6-46c1-8b74-452f6a5345f0-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 11:39:19 crc kubenswrapper[4867]: I0126 11:39:19.683338 4867 generic.go:334] "Generic (PLEG): container finished" podID="a67e72b7-3136-4e60-8192-fa54044ef257" containerID="57569ab7887fe6a4392e9b2b963f1a723efba6f32f8de19f1b7ecbd6f24fab86" exitCode=143 Jan 26 11:39:19 crc kubenswrapper[4867]: I0126 11:39:19.683417 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/nova-api-0" event={"ID":"a67e72b7-3136-4e60-8192-fa54044ef257","Type":"ContainerDied","Data":"57569ab7887fe6a4392e9b2b963f1a723efba6f32f8de19f1b7ecbd6f24fab86"} Jan 26 11:39:19 crc kubenswrapper[4867]: I0126 11:39:19.685986 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-226m5" event={"ID":"0da74a00-2497-4e45-9419-032e9b97c401","Type":"ContainerDied","Data":"e156b077c74155b81316368291323c0695dd45981ecd192fdf74b0e23fda9bad"} Jan 26 11:39:19 crc kubenswrapper[4867]: I0126 11:39:19.687928 4867 generic.go:334] "Generic (PLEG): container finished" podID="0da74a00-2497-4e45-9419-032e9b97c401" containerID="e156b077c74155b81316368291323c0695dd45981ecd192fdf74b0e23fda9bad" exitCode=0 Jan 26 11:39:19 crc kubenswrapper[4867]: I0126 11:39:19.691823 4867 generic.go:334] "Generic (PLEG): container finished" podID="0f43a21d-70c6-46c1-8b74-452f6a5345f0" containerID="a783365f7f23d9cef8183faf01e2de663fc1606d008134e648b021c47c05f458" exitCode=0 Jan 26 11:39:19 crc kubenswrapper[4867]: I0126 11:39:19.691846 4867 generic.go:334] "Generic (PLEG): container finished" podID="0f43a21d-70c6-46c1-8b74-452f6a5345f0" containerID="4f6de5cfb0b7cff60fa187ca0e9c635479173148e87ff7f93d2ba1b16ff0f6a0" exitCode=143 Jan 26 11:39:19 crc kubenswrapper[4867]: I0126 11:39:19.691882 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"0f43a21d-70c6-46c1-8b74-452f6a5345f0","Type":"ContainerDied","Data":"a783365f7f23d9cef8183faf01e2de663fc1606d008134e648b021c47c05f458"} Jan 26 11:39:19 crc kubenswrapper[4867]: I0126 11:39:19.691908 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"0f43a21d-70c6-46c1-8b74-452f6a5345f0","Type":"ContainerDied","Data":"4f6de5cfb0b7cff60fa187ca0e9c635479173148e87ff7f93d2ba1b16ff0f6a0"} Jan 26 11:39:19 crc kubenswrapper[4867]: I0126 11:39:19.691919 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/nova-metadata-0" event={"ID":"0f43a21d-70c6-46c1-8b74-452f6a5345f0","Type":"ContainerDied","Data":"5b47605e790b9a920c19fed8fcedea4b9f49a83c4b71fea62e9a0bcd65673c8e"} Jan 26 11:39:19 crc kubenswrapper[4867]: I0126 11:39:19.691936 4867 scope.go:117] "RemoveContainer" containerID="a783365f7f23d9cef8183faf01e2de663fc1606d008134e648b021c47c05f458" Jan 26 11:39:19 crc kubenswrapper[4867]: I0126 11:39:19.692073 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 26 11:39:19 crc kubenswrapper[4867]: I0126 11:39:19.698425 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="e7f5d75c-d7d5-44d9-b571-1ab86e4cf156" containerName="nova-scheduler-scheduler" containerID="cri-o://d2287295136814e314cfb6d9db6bc18d7fb44b21c8b0577f5b02f3d2ad6749b6" gracePeriod=30 Jan 26 11:39:19 crc kubenswrapper[4867]: I0126 11:39:19.734161 4867 scope.go:117] "RemoveContainer" containerID="4f6de5cfb0b7cff60fa187ca0e9c635479173148e87ff7f93d2ba1b16ff0f6a0" Jan 26 11:39:19 crc kubenswrapper[4867]: I0126 11:39:19.738723 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 26 11:39:19 crc kubenswrapper[4867]: I0126 11:39:19.759168 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Jan 26 11:39:19 crc kubenswrapper[4867]: I0126 11:39:19.780041 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Jan 26 11:39:19 crc kubenswrapper[4867]: E0126 11:39:19.780583 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0f43a21d-70c6-46c1-8b74-452f6a5345f0" containerName="nova-metadata-metadata" Jan 26 11:39:19 crc kubenswrapper[4867]: I0126 11:39:19.780606 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="0f43a21d-70c6-46c1-8b74-452f6a5345f0" containerName="nova-metadata-metadata" Jan 26 11:39:19 crc kubenswrapper[4867]: E0126 11:39:19.780632 
4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="59401c77-eb6e-46f4-8b16-c57ac6f97f24" containerName="dnsmasq-dns" Jan 26 11:39:19 crc kubenswrapper[4867]: I0126 11:39:19.780639 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="59401c77-eb6e-46f4-8b16-c57ac6f97f24" containerName="dnsmasq-dns" Jan 26 11:39:19 crc kubenswrapper[4867]: E0126 11:39:19.780647 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cc4ef2e8-1b6a-42a2-a887-7a14de7fa7a2" containerName="nova-manage" Jan 26 11:39:19 crc kubenswrapper[4867]: I0126 11:39:19.780653 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="cc4ef2e8-1b6a-42a2-a887-7a14de7fa7a2" containerName="nova-manage" Jan 26 11:39:19 crc kubenswrapper[4867]: E0126 11:39:19.780666 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0f43a21d-70c6-46c1-8b74-452f6a5345f0" containerName="nova-metadata-log" Jan 26 11:39:19 crc kubenswrapper[4867]: I0126 11:39:19.780672 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="0f43a21d-70c6-46c1-8b74-452f6a5345f0" containerName="nova-metadata-log" Jan 26 11:39:19 crc kubenswrapper[4867]: E0126 11:39:19.780690 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="59401c77-eb6e-46f4-8b16-c57ac6f97f24" containerName="init" Jan 26 11:39:19 crc kubenswrapper[4867]: I0126 11:39:19.780696 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="59401c77-eb6e-46f4-8b16-c57ac6f97f24" containerName="init" Jan 26 11:39:19 crc kubenswrapper[4867]: I0126 11:39:19.780893 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="cc4ef2e8-1b6a-42a2-a887-7a14de7fa7a2" containerName="nova-manage" Jan 26 11:39:19 crc kubenswrapper[4867]: I0126 11:39:19.780916 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="59401c77-eb6e-46f4-8b16-c57ac6f97f24" containerName="dnsmasq-dns" Jan 26 11:39:19 crc kubenswrapper[4867]: I0126 11:39:19.780931 4867 memory_manager.go:354] "RemoveStaleState 
removing state" podUID="0f43a21d-70c6-46c1-8b74-452f6a5345f0" containerName="nova-metadata-log" Jan 26 11:39:19 crc kubenswrapper[4867]: I0126 11:39:19.780941 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="0f43a21d-70c6-46c1-8b74-452f6a5345f0" containerName="nova-metadata-metadata" Jan 26 11:39:19 crc kubenswrapper[4867]: I0126 11:39:19.782040 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 26 11:39:19 crc kubenswrapper[4867]: I0126 11:39:19.790872 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Jan 26 11:39:19 crc kubenswrapper[4867]: I0126 11:39:19.790952 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Jan 26 11:39:19 crc kubenswrapper[4867]: I0126 11:39:19.792612 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 26 11:39:19 crc kubenswrapper[4867]: I0126 11:39:19.794440 4867 scope.go:117] "RemoveContainer" containerID="a783365f7f23d9cef8183faf01e2de663fc1606d008134e648b021c47c05f458" Jan 26 11:39:19 crc kubenswrapper[4867]: E0126 11:39:19.795590 4867 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a783365f7f23d9cef8183faf01e2de663fc1606d008134e648b021c47c05f458\": container with ID starting with a783365f7f23d9cef8183faf01e2de663fc1606d008134e648b021c47c05f458 not found: ID does not exist" containerID="a783365f7f23d9cef8183faf01e2de663fc1606d008134e648b021c47c05f458" Jan 26 11:39:19 crc kubenswrapper[4867]: I0126 11:39:19.795786 4867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a783365f7f23d9cef8183faf01e2de663fc1606d008134e648b021c47c05f458"} err="failed to get container status \"a783365f7f23d9cef8183faf01e2de663fc1606d008134e648b021c47c05f458\": rpc error: code = NotFound desc = could not 
find container \"a783365f7f23d9cef8183faf01e2de663fc1606d008134e648b021c47c05f458\": container with ID starting with a783365f7f23d9cef8183faf01e2de663fc1606d008134e648b021c47c05f458 not found: ID does not exist" Jan 26 11:39:19 crc kubenswrapper[4867]: I0126 11:39:19.795908 4867 scope.go:117] "RemoveContainer" containerID="4f6de5cfb0b7cff60fa187ca0e9c635479173148e87ff7f93d2ba1b16ff0f6a0" Jan 26 11:39:19 crc kubenswrapper[4867]: E0126 11:39:19.797484 4867 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4f6de5cfb0b7cff60fa187ca0e9c635479173148e87ff7f93d2ba1b16ff0f6a0\": container with ID starting with 4f6de5cfb0b7cff60fa187ca0e9c635479173148e87ff7f93d2ba1b16ff0f6a0 not found: ID does not exist" containerID="4f6de5cfb0b7cff60fa187ca0e9c635479173148e87ff7f93d2ba1b16ff0f6a0" Jan 26 11:39:19 crc kubenswrapper[4867]: I0126 11:39:19.797624 4867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4f6de5cfb0b7cff60fa187ca0e9c635479173148e87ff7f93d2ba1b16ff0f6a0"} err="failed to get container status \"4f6de5cfb0b7cff60fa187ca0e9c635479173148e87ff7f93d2ba1b16ff0f6a0\": rpc error: code = NotFound desc = could not find container \"4f6de5cfb0b7cff60fa187ca0e9c635479173148e87ff7f93d2ba1b16ff0f6a0\": container with ID starting with 4f6de5cfb0b7cff60fa187ca0e9c635479173148e87ff7f93d2ba1b16ff0f6a0 not found: ID does not exist" Jan 26 11:39:19 crc kubenswrapper[4867]: I0126 11:39:19.797726 4867 scope.go:117] "RemoveContainer" containerID="a783365f7f23d9cef8183faf01e2de663fc1606d008134e648b021c47c05f458" Jan 26 11:39:19 crc kubenswrapper[4867]: I0126 11:39:19.800044 4867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a783365f7f23d9cef8183faf01e2de663fc1606d008134e648b021c47c05f458"} err="failed to get container status \"a783365f7f23d9cef8183faf01e2de663fc1606d008134e648b021c47c05f458\": rpc error: code = NotFound desc = 
could not find container \"a783365f7f23d9cef8183faf01e2de663fc1606d008134e648b021c47c05f458\": container with ID starting with a783365f7f23d9cef8183faf01e2de663fc1606d008134e648b021c47c05f458 not found: ID does not exist" Jan 26 11:39:19 crc kubenswrapper[4867]: I0126 11:39:19.800081 4867 scope.go:117] "RemoveContainer" containerID="4f6de5cfb0b7cff60fa187ca0e9c635479173148e87ff7f93d2ba1b16ff0f6a0" Jan 26 11:39:19 crc kubenswrapper[4867]: I0126 11:39:19.801378 4867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4f6de5cfb0b7cff60fa187ca0e9c635479173148e87ff7f93d2ba1b16ff0f6a0"} err="failed to get container status \"4f6de5cfb0b7cff60fa187ca0e9c635479173148e87ff7f93d2ba1b16ff0f6a0\": rpc error: code = NotFound desc = could not find container \"4f6de5cfb0b7cff60fa187ca0e9c635479173148e87ff7f93d2ba1b16ff0f6a0\": container with ID starting with 4f6de5cfb0b7cff60fa187ca0e9c635479173148e87ff7f93d2ba1b16ff0f6a0 not found: ID does not exist" Jan 26 11:39:19 crc kubenswrapper[4867]: I0126 11:39:19.871597 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5e4436b8-df58-4b0b-9713-75976c443930-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"5e4436b8-df58-4b0b-9713-75976c443930\") " pod="openstack/nova-metadata-0" Jan 26 11:39:19 crc kubenswrapper[4867]: I0126 11:39:19.871648 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5e4436b8-df58-4b0b-9713-75976c443930-logs\") pod \"nova-metadata-0\" (UID: \"5e4436b8-df58-4b0b-9713-75976c443930\") " pod="openstack/nova-metadata-0" Jan 26 11:39:19 crc kubenswrapper[4867]: I0126 11:39:19.871694 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b2vd9\" (UniqueName: 
\"kubernetes.io/projected/5e4436b8-df58-4b0b-9713-75976c443930-kube-api-access-b2vd9\") pod \"nova-metadata-0\" (UID: \"5e4436b8-df58-4b0b-9713-75976c443930\") " pod="openstack/nova-metadata-0" Jan 26 11:39:19 crc kubenswrapper[4867]: I0126 11:39:19.871756 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5e4436b8-df58-4b0b-9713-75976c443930-config-data\") pod \"nova-metadata-0\" (UID: \"5e4436b8-df58-4b0b-9713-75976c443930\") " pod="openstack/nova-metadata-0" Jan 26 11:39:19 crc kubenswrapper[4867]: I0126 11:39:19.871788 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/5e4436b8-df58-4b0b-9713-75976c443930-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"5e4436b8-df58-4b0b-9713-75976c443930\") " pod="openstack/nova-metadata-0" Jan 26 11:39:19 crc kubenswrapper[4867]: I0126 11:39:19.973436 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5e4436b8-df58-4b0b-9713-75976c443930-config-data\") pod \"nova-metadata-0\" (UID: \"5e4436b8-df58-4b0b-9713-75976c443930\") " pod="openstack/nova-metadata-0" Jan 26 11:39:19 crc kubenswrapper[4867]: I0126 11:39:19.973502 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/5e4436b8-df58-4b0b-9713-75976c443930-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"5e4436b8-df58-4b0b-9713-75976c443930\") " pod="openstack/nova-metadata-0" Jan 26 11:39:19 crc kubenswrapper[4867]: I0126 11:39:19.973590 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5e4436b8-df58-4b0b-9713-75976c443930-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: 
\"5e4436b8-df58-4b0b-9713-75976c443930\") " pod="openstack/nova-metadata-0" Jan 26 11:39:19 crc kubenswrapper[4867]: I0126 11:39:19.973614 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5e4436b8-df58-4b0b-9713-75976c443930-logs\") pod \"nova-metadata-0\" (UID: \"5e4436b8-df58-4b0b-9713-75976c443930\") " pod="openstack/nova-metadata-0" Jan 26 11:39:19 crc kubenswrapper[4867]: I0126 11:39:19.973656 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b2vd9\" (UniqueName: \"kubernetes.io/projected/5e4436b8-df58-4b0b-9713-75976c443930-kube-api-access-b2vd9\") pod \"nova-metadata-0\" (UID: \"5e4436b8-df58-4b0b-9713-75976c443930\") " pod="openstack/nova-metadata-0" Jan 26 11:39:19 crc kubenswrapper[4867]: I0126 11:39:19.974491 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5e4436b8-df58-4b0b-9713-75976c443930-logs\") pod \"nova-metadata-0\" (UID: \"5e4436b8-df58-4b0b-9713-75976c443930\") " pod="openstack/nova-metadata-0" Jan 26 11:39:19 crc kubenswrapper[4867]: I0126 11:39:19.978764 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/5e4436b8-df58-4b0b-9713-75976c443930-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"5e4436b8-df58-4b0b-9713-75976c443930\") " pod="openstack/nova-metadata-0" Jan 26 11:39:19 crc kubenswrapper[4867]: I0126 11:39:19.978930 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5e4436b8-df58-4b0b-9713-75976c443930-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"5e4436b8-df58-4b0b-9713-75976c443930\") " pod="openstack/nova-metadata-0" Jan 26 11:39:19 crc kubenswrapper[4867]: I0126 11:39:19.979239 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"config-data\" (UniqueName: \"kubernetes.io/secret/5e4436b8-df58-4b0b-9713-75976c443930-config-data\") pod \"nova-metadata-0\" (UID: \"5e4436b8-df58-4b0b-9713-75976c443930\") " pod="openstack/nova-metadata-0" Jan 26 11:39:19 crc kubenswrapper[4867]: I0126 11:39:19.991601 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b2vd9\" (UniqueName: \"kubernetes.io/projected/5e4436b8-df58-4b0b-9713-75976c443930-kube-api-access-b2vd9\") pod \"nova-metadata-0\" (UID: \"5e4436b8-df58-4b0b-9713-75976c443930\") " pod="openstack/nova-metadata-0" Jan 26 11:39:20 crc kubenswrapper[4867]: I0126 11:39:20.111363 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 26 11:39:20 crc kubenswrapper[4867]: I0126 11:39:20.578907 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0f43a21d-70c6-46c1-8b74-452f6a5345f0" path="/var/lib/kubelet/pods/0f43a21d-70c6-46c1-8b74-452f6a5345f0/volumes" Jan 26 11:39:20 crc kubenswrapper[4867]: I0126 11:39:20.580642 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="59401c77-eb6e-46f4-8b16-c57ac6f97f24" path="/var/lib/kubelet/pods/59401c77-eb6e-46f4-8b16-c57ac6f97f24/volumes" Jan 26 11:39:20 crc kubenswrapper[4867]: I0126 11:39:20.602857 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 26 11:39:20 crc kubenswrapper[4867]: I0126 11:39:20.716590 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"5e4436b8-df58-4b0b-9713-75976c443930","Type":"ContainerStarted","Data":"f5e96adbfdc91fee4eaae52f148789d573abdea3c6dae621c5097cf3ac3ad514"} Jan 26 11:39:21 crc kubenswrapper[4867]: I0126 11:39:21.104286 4867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-226m5" Jan 26 11:39:21 crc kubenswrapper[4867]: I0126 11:39:21.209056 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0da74a00-2497-4e45-9419-032e9b97c401-scripts\") pod \"0da74a00-2497-4e45-9419-032e9b97c401\" (UID: \"0da74a00-2497-4e45-9419-032e9b97c401\") " Jan 26 11:39:21 crc kubenswrapper[4867]: I0126 11:39:21.209680 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0da74a00-2497-4e45-9419-032e9b97c401-combined-ca-bundle\") pod \"0da74a00-2497-4e45-9419-032e9b97c401\" (UID: \"0da74a00-2497-4e45-9419-032e9b97c401\") " Jan 26 11:39:21 crc kubenswrapper[4867]: I0126 11:39:21.209705 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7gm84\" (UniqueName: \"kubernetes.io/projected/0da74a00-2497-4e45-9419-032e9b97c401-kube-api-access-7gm84\") pod \"0da74a00-2497-4e45-9419-032e9b97c401\" (UID: \"0da74a00-2497-4e45-9419-032e9b97c401\") " Jan 26 11:39:21 crc kubenswrapper[4867]: I0126 11:39:21.209729 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0da74a00-2497-4e45-9419-032e9b97c401-config-data\") pod \"0da74a00-2497-4e45-9419-032e9b97c401\" (UID: \"0da74a00-2497-4e45-9419-032e9b97c401\") " Jan 26 11:39:21 crc kubenswrapper[4867]: I0126 11:39:21.215260 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0da74a00-2497-4e45-9419-032e9b97c401-kube-api-access-7gm84" (OuterVolumeSpecName: "kube-api-access-7gm84") pod "0da74a00-2497-4e45-9419-032e9b97c401" (UID: "0da74a00-2497-4e45-9419-032e9b97c401"). InnerVolumeSpecName "kube-api-access-7gm84". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 11:39:21 crc kubenswrapper[4867]: I0126 11:39:21.215368 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0da74a00-2497-4e45-9419-032e9b97c401-scripts" (OuterVolumeSpecName: "scripts") pod "0da74a00-2497-4e45-9419-032e9b97c401" (UID: "0da74a00-2497-4e45-9419-032e9b97c401"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 11:39:21 crc kubenswrapper[4867]: I0126 11:39:21.244630 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0da74a00-2497-4e45-9419-032e9b97c401-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0da74a00-2497-4e45-9419-032e9b97c401" (UID: "0da74a00-2497-4e45-9419-032e9b97c401"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 11:39:21 crc kubenswrapper[4867]: I0126 11:39:21.249682 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0da74a00-2497-4e45-9419-032e9b97c401-config-data" (OuterVolumeSpecName: "config-data") pod "0da74a00-2497-4e45-9419-032e9b97c401" (UID: "0da74a00-2497-4e45-9419-032e9b97c401"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 11:39:21 crc kubenswrapper[4867]: I0126 11:39:21.313002 4867 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0da74a00-2497-4e45-9419-032e9b97c401-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 11:39:21 crc kubenswrapper[4867]: I0126 11:39:21.313414 4867 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0da74a00-2497-4e45-9419-032e9b97c401-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 11:39:21 crc kubenswrapper[4867]: I0126 11:39:21.313505 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7gm84\" (UniqueName: \"kubernetes.io/projected/0da74a00-2497-4e45-9419-032e9b97c401-kube-api-access-7gm84\") on node \"crc\" DevicePath \"\"" Jan 26 11:39:21 crc kubenswrapper[4867]: I0126 11:39:21.313572 4867 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0da74a00-2497-4e45-9419-032e9b97c401-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 11:39:21 crc kubenswrapper[4867]: I0126 11:39:21.727978 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"5e4436b8-df58-4b0b-9713-75976c443930","Type":"ContainerStarted","Data":"e96574ea6be1a924b821331b274e4053554cf5b00b23b4254b451859b33e7fdb"} Jan 26 11:39:21 crc kubenswrapper[4867]: I0126 11:39:21.728023 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"5e4436b8-df58-4b0b-9713-75976c443930","Type":"ContainerStarted","Data":"c6d747726bdb20530b3089ace5657ccb8ee82c93314c4b33fe037174ac02f13e"} Jan 26 11:39:21 crc kubenswrapper[4867]: I0126 11:39:21.734652 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-226m5" 
event={"ID":"0da74a00-2497-4e45-9419-032e9b97c401","Type":"ContainerDied","Data":"6926eaa6ea8a269b5c96490c9245c4c2900838c2bcafd1967a73fd2168c3627c"} Jan 26 11:39:21 crc kubenswrapper[4867]: I0126 11:39:21.734736 4867 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6926eaa6ea8a269b5c96490c9245c4c2900838c2bcafd1967a73fd2168c3627c" Jan 26 11:39:21 crc kubenswrapper[4867]: I0126 11:39:21.734842 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-226m5" Jan 26 11:39:21 crc kubenswrapper[4867]: I0126 11:39:21.803085 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.803063257 podStartE2EDuration="2.803063257s" podCreationTimestamp="2026-01-26 11:39:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 11:39:21.776595279 +0000 UTC m=+1311.475170189" watchObservedRunningTime="2026-01-26 11:39:21.803063257 +0000 UTC m=+1311.501638167" Jan 26 11:39:21 crc kubenswrapper[4867]: I0126 11:39:21.809511 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-0"] Jan 26 11:39:21 crc kubenswrapper[4867]: E0126 11:39:21.809961 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0da74a00-2497-4e45-9419-032e9b97c401" containerName="nova-cell1-conductor-db-sync" Jan 26 11:39:21 crc kubenswrapper[4867]: I0126 11:39:21.809982 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="0da74a00-2497-4e45-9419-032e9b97c401" containerName="nova-cell1-conductor-db-sync" Jan 26 11:39:21 crc kubenswrapper[4867]: I0126 11:39:21.810184 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="0da74a00-2497-4e45-9419-032e9b97c401" containerName="nova-cell1-conductor-db-sync" Jan 26 11:39:21 crc kubenswrapper[4867]: I0126 11:39:21.810865 4867 util.go:30] "No 
sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0" Jan 26 11:39:21 crc kubenswrapper[4867]: I0126 11:39:21.814881 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Jan 26 11:39:21 crc kubenswrapper[4867]: I0126 11:39:21.818884 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Jan 26 11:39:21 crc kubenswrapper[4867]: I0126 11:39:21.926444 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ht5b9\" (UniqueName: \"kubernetes.io/projected/20519d4e-b9eb-43b2-b2fb-ac40a9bea288-kube-api-access-ht5b9\") pod \"nova-cell1-conductor-0\" (UID: \"20519d4e-b9eb-43b2-b2fb-ac40a9bea288\") " pod="openstack/nova-cell1-conductor-0" Jan 26 11:39:21 crc kubenswrapper[4867]: I0126 11:39:21.926948 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/20519d4e-b9eb-43b2-b2fb-ac40a9bea288-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"20519d4e-b9eb-43b2-b2fb-ac40a9bea288\") " pod="openstack/nova-cell1-conductor-0" Jan 26 11:39:21 crc kubenswrapper[4867]: I0126 11:39:21.927083 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/20519d4e-b9eb-43b2-b2fb-ac40a9bea288-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"20519d4e-b9eb-43b2-b2fb-ac40a9bea288\") " pod="openstack/nova-cell1-conductor-0" Jan 26 11:39:22 crc kubenswrapper[4867]: I0126 11:39:22.029266 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ht5b9\" (UniqueName: \"kubernetes.io/projected/20519d4e-b9eb-43b2-b2fb-ac40a9bea288-kube-api-access-ht5b9\") pod \"nova-cell1-conductor-0\" (UID: \"20519d4e-b9eb-43b2-b2fb-ac40a9bea288\") " 
pod="openstack/nova-cell1-conductor-0" Jan 26 11:39:22 crc kubenswrapper[4867]: I0126 11:39:22.029358 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/20519d4e-b9eb-43b2-b2fb-ac40a9bea288-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"20519d4e-b9eb-43b2-b2fb-ac40a9bea288\") " pod="openstack/nova-cell1-conductor-0" Jan 26 11:39:22 crc kubenswrapper[4867]: I0126 11:39:22.029403 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/20519d4e-b9eb-43b2-b2fb-ac40a9bea288-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"20519d4e-b9eb-43b2-b2fb-ac40a9bea288\") " pod="openstack/nova-cell1-conductor-0" Jan 26 11:39:22 crc kubenswrapper[4867]: I0126 11:39:22.037899 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/20519d4e-b9eb-43b2-b2fb-ac40a9bea288-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"20519d4e-b9eb-43b2-b2fb-ac40a9bea288\") " pod="openstack/nova-cell1-conductor-0" Jan 26 11:39:22 crc kubenswrapper[4867]: I0126 11:39:22.038070 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/20519d4e-b9eb-43b2-b2fb-ac40a9bea288-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"20519d4e-b9eb-43b2-b2fb-ac40a9bea288\") " pod="openstack/nova-cell1-conductor-0" Jan 26 11:39:22 crc kubenswrapper[4867]: I0126 11:39:22.046679 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ht5b9\" (UniqueName: \"kubernetes.io/projected/20519d4e-b9eb-43b2-b2fb-ac40a9bea288-kube-api-access-ht5b9\") pod \"nova-cell1-conductor-0\" (UID: \"20519d4e-b9eb-43b2-b2fb-ac40a9bea288\") " pod="openstack/nova-cell1-conductor-0" Jan 26 11:39:22 crc kubenswrapper[4867]: I0126 11:39:22.131356 4867 util.go:30] "No sandbox 
for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0" Jan 26 11:39:22 crc kubenswrapper[4867]: E0126 11:39:22.307183 4867 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="d2287295136814e314cfb6d9db6bc18d7fb44b21c8b0577f5b02f3d2ad6749b6" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 26 11:39:22 crc kubenswrapper[4867]: E0126 11:39:22.311487 4867 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="d2287295136814e314cfb6d9db6bc18d7fb44b21c8b0577f5b02f3d2ad6749b6" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 26 11:39:22 crc kubenswrapper[4867]: E0126 11:39:22.313488 4867 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="d2287295136814e314cfb6d9db6bc18d7fb44b21c8b0577f5b02f3d2ad6749b6" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 26 11:39:22 crc kubenswrapper[4867]: E0126 11:39:22.313534 4867 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="e7f5d75c-d7d5-44d9-b571-1ab86e4cf156" containerName="nova-scheduler-scheduler" Jan 26 11:39:22 crc kubenswrapper[4867]: I0126 11:39:22.594857 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Jan 26 11:39:22 crc kubenswrapper[4867]: I0126 11:39:22.746519 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" 
event={"ID":"20519d4e-b9eb-43b2-b2fb-ac40a9bea288","Type":"ContainerStarted","Data":"3e185eb803b349a43e17e13adcdba61de899fc6e8c2a388082b67018a44cc212"} Jan 26 11:39:23 crc kubenswrapper[4867]: I0126 11:39:23.757286 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"20519d4e-b9eb-43b2-b2fb-ac40a9bea288","Type":"ContainerStarted","Data":"9b02ea5c0417306459c215a6345fd3c0cc667a60a9c8d80523bcb50af7711109"} Jan 26 11:39:23 crc kubenswrapper[4867]: I0126 11:39:23.758746 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-conductor-0" Jan 26 11:39:23 crc kubenswrapper[4867]: I0126 11:39:23.777011 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-0" podStartSLOduration=2.776990494 podStartE2EDuration="2.776990494s" podCreationTimestamp="2026-01-26 11:39:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 11:39:23.769507238 +0000 UTC m=+1313.468082148" watchObservedRunningTime="2026-01-26 11:39:23.776990494 +0000 UTC m=+1313.475565404" Jan 26 11:39:24 crc kubenswrapper[4867]: I0126 11:39:24.563894 4867 scope.go:117] "RemoveContainer" containerID="f6256bd71627a09be606483dad246bfdbe5d419a8e586a59a182396bb6d1f10d" Jan 26 11:39:24 crc kubenswrapper[4867]: E0126 11:39:24.564517 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ironic-neutron-agent\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ironic-neutron-agent pod=ironic-neutron-agent-795fb7c76b-9ndwh_openstack(a2167905-2856-4125-81fd-a2430fe558f9)\"" pod="openstack/ironic-neutron-agent-795fb7c76b-9ndwh" podUID="a2167905-2856-4125-81fd-a2430fe558f9" Jan 26 11:39:24 crc kubenswrapper[4867]: I0126 11:39:24.766291 4867 generic.go:334] "Generic (PLEG): container finished" 
podID="e7f5d75c-d7d5-44d9-b571-1ab86e4cf156" containerID="d2287295136814e314cfb6d9db6bc18d7fb44b21c8b0577f5b02f3d2ad6749b6" exitCode=0 Jan 26 11:39:24 crc kubenswrapper[4867]: I0126 11:39:24.766379 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"e7f5d75c-d7d5-44d9-b571-1ab86e4cf156","Type":"ContainerDied","Data":"d2287295136814e314cfb6d9db6bc18d7fb44b21c8b0577f5b02f3d2ad6749b6"} Jan 26 11:39:25 crc kubenswrapper[4867]: I0126 11:39:25.111507 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 26 11:39:25 crc kubenswrapper[4867]: I0126 11:39:25.111573 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 26 11:39:25 crc kubenswrapper[4867]: I0126 11:39:25.779533 4867 generic.go:334] "Generic (PLEG): container finished" podID="a67e72b7-3136-4e60-8192-fa54044ef257" containerID="dfe590bcf4b02e2da6b02882e5df55eb4bdc9b1d28a104bcd74ff7fc9a0d7ac0" exitCode=0 Jan 26 11:39:25 crc kubenswrapper[4867]: I0126 11:39:25.779604 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"a67e72b7-3136-4e60-8192-fa54044ef257","Type":"ContainerDied","Data":"dfe590bcf4b02e2da6b02882e5df55eb4bdc9b1d28a104bcd74ff7fc9a0d7ac0"} Jan 26 11:39:26 crc kubenswrapper[4867]: I0126 11:39:26.076826 4867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Jan 26 11:39:26 crc kubenswrapper[4867]: I0126 11:39:26.121864 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e7f5d75c-d7d5-44d9-b571-1ab86e4cf156-config-data\") pod \"e7f5d75c-d7d5-44d9-b571-1ab86e4cf156\" (UID: \"e7f5d75c-d7d5-44d9-b571-1ab86e4cf156\") " Jan 26 11:39:26 crc kubenswrapper[4867]: I0126 11:39:26.121988 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8jqkx\" (UniqueName: \"kubernetes.io/projected/e7f5d75c-d7d5-44d9-b571-1ab86e4cf156-kube-api-access-8jqkx\") pod \"e7f5d75c-d7d5-44d9-b571-1ab86e4cf156\" (UID: \"e7f5d75c-d7d5-44d9-b571-1ab86e4cf156\") " Jan 26 11:39:26 crc kubenswrapper[4867]: I0126 11:39:26.122045 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e7f5d75c-d7d5-44d9-b571-1ab86e4cf156-combined-ca-bundle\") pod \"e7f5d75c-d7d5-44d9-b571-1ab86e4cf156\" (UID: \"e7f5d75c-d7d5-44d9-b571-1ab86e4cf156\") " Jan 26 11:39:26 crc kubenswrapper[4867]: I0126 11:39:26.128461 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7f5d75c-d7d5-44d9-b571-1ab86e4cf156-kube-api-access-8jqkx" (OuterVolumeSpecName: "kube-api-access-8jqkx") pod "e7f5d75c-d7d5-44d9-b571-1ab86e4cf156" (UID: "e7f5d75c-d7d5-44d9-b571-1ab86e4cf156"). InnerVolumeSpecName "kube-api-access-8jqkx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 11:39:26 crc kubenswrapper[4867]: I0126 11:39:26.157552 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7f5d75c-d7d5-44d9-b571-1ab86e4cf156-config-data" (OuterVolumeSpecName: "config-data") pod "e7f5d75c-d7d5-44d9-b571-1ab86e4cf156" (UID: "e7f5d75c-d7d5-44d9-b571-1ab86e4cf156"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 11:39:26 crc kubenswrapper[4867]: I0126 11:39:26.170687 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7f5d75c-d7d5-44d9-b571-1ab86e4cf156-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e7f5d75c-d7d5-44d9-b571-1ab86e4cf156" (UID: "e7f5d75c-d7d5-44d9-b571-1ab86e4cf156"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 11:39:26 crc kubenswrapper[4867]: I0126 11:39:26.224265 4867 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e7f5d75c-d7d5-44d9-b571-1ab86e4cf156-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 11:39:26 crc kubenswrapper[4867]: I0126 11:39:26.224615 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8jqkx\" (UniqueName: \"kubernetes.io/projected/e7f5d75c-d7d5-44d9-b571-1ab86e4cf156-kube-api-access-8jqkx\") on node \"crc\" DevicePath \"\"" Jan 26 11:39:26 crc kubenswrapper[4867]: I0126 11:39:26.224629 4867 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e7f5d75c-d7d5-44d9-b571-1ab86e4cf156-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 11:39:26 crc kubenswrapper[4867]: I0126 11:39:26.477300 4867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 26 11:39:26 crc kubenswrapper[4867]: I0126 11:39:26.531290 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a67e72b7-3136-4e60-8192-fa54044ef257-combined-ca-bundle\") pod \"a67e72b7-3136-4e60-8192-fa54044ef257\" (UID: \"a67e72b7-3136-4e60-8192-fa54044ef257\") " Jan 26 11:39:26 crc kubenswrapper[4867]: I0126 11:39:26.531375 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a67e72b7-3136-4e60-8192-fa54044ef257-config-data\") pod \"a67e72b7-3136-4e60-8192-fa54044ef257\" (UID: \"a67e72b7-3136-4e60-8192-fa54044ef257\") " Jan 26 11:39:26 crc kubenswrapper[4867]: I0126 11:39:26.531462 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a67e72b7-3136-4e60-8192-fa54044ef257-logs\") pod \"a67e72b7-3136-4e60-8192-fa54044ef257\" (UID: \"a67e72b7-3136-4e60-8192-fa54044ef257\") " Jan 26 11:39:26 crc kubenswrapper[4867]: I0126 11:39:26.531557 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4n2wx\" (UniqueName: \"kubernetes.io/projected/a67e72b7-3136-4e60-8192-fa54044ef257-kube-api-access-4n2wx\") pod \"a67e72b7-3136-4e60-8192-fa54044ef257\" (UID: \"a67e72b7-3136-4e60-8192-fa54044ef257\") " Jan 26 11:39:26 crc kubenswrapper[4867]: I0126 11:39:26.532820 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a67e72b7-3136-4e60-8192-fa54044ef257-logs" (OuterVolumeSpecName: "logs") pod "a67e72b7-3136-4e60-8192-fa54044ef257" (UID: "a67e72b7-3136-4e60-8192-fa54044ef257"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 11:39:26 crc kubenswrapper[4867]: I0126 11:39:26.537788 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a67e72b7-3136-4e60-8192-fa54044ef257-kube-api-access-4n2wx" (OuterVolumeSpecName: "kube-api-access-4n2wx") pod "a67e72b7-3136-4e60-8192-fa54044ef257" (UID: "a67e72b7-3136-4e60-8192-fa54044ef257"). InnerVolumeSpecName "kube-api-access-4n2wx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 11:39:26 crc kubenswrapper[4867]: I0126 11:39:26.564412 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a67e72b7-3136-4e60-8192-fa54044ef257-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a67e72b7-3136-4e60-8192-fa54044ef257" (UID: "a67e72b7-3136-4e60-8192-fa54044ef257"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 11:39:26 crc kubenswrapper[4867]: I0126 11:39:26.565775 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a67e72b7-3136-4e60-8192-fa54044ef257-config-data" (OuterVolumeSpecName: "config-data") pod "a67e72b7-3136-4e60-8192-fa54044ef257" (UID: "a67e72b7-3136-4e60-8192-fa54044ef257"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 11:39:26 crc kubenswrapper[4867]: I0126 11:39:26.634513 4867 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a67e72b7-3136-4e60-8192-fa54044ef257-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 11:39:26 crc kubenswrapper[4867]: I0126 11:39:26.634559 4867 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a67e72b7-3136-4e60-8192-fa54044ef257-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 11:39:26 crc kubenswrapper[4867]: I0126 11:39:26.634572 4867 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a67e72b7-3136-4e60-8192-fa54044ef257-logs\") on node \"crc\" DevicePath \"\"" Jan 26 11:39:26 crc kubenswrapper[4867]: I0126 11:39:26.634584 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4n2wx\" (UniqueName: \"kubernetes.io/projected/a67e72b7-3136-4e60-8192-fa54044ef257-kube-api-access-4n2wx\") on node \"crc\" DevicePath \"\"" Jan 26 11:39:26 crc kubenswrapper[4867]: I0126 11:39:26.789767 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"a67e72b7-3136-4e60-8192-fa54044ef257","Type":"ContainerDied","Data":"01456a972841af473f52bf248107a78320439452f165dd54bd5fab124e147312"} Jan 26 11:39:26 crc kubenswrapper[4867]: I0126 11:39:26.789829 4867 scope.go:117] "RemoveContainer" containerID="dfe590bcf4b02e2da6b02882e5df55eb4bdc9b1d28a104bcd74ff7fc9a0d7ac0" Jan 26 11:39:26 crc kubenswrapper[4867]: I0126 11:39:26.789959 4867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 26 11:39:26 crc kubenswrapper[4867]: I0126 11:39:26.794643 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"e7f5d75c-d7d5-44d9-b571-1ab86e4cf156","Type":"ContainerDied","Data":"a7a0156d3f2140fe6846bf79b8e0e7f59c0edbb437f5642500309294aa708676"} Jan 26 11:39:26 crc kubenswrapper[4867]: I0126 11:39:26.794733 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 26 11:39:26 crc kubenswrapper[4867]: I0126 11:39:26.819484 4867 scope.go:117] "RemoveContainer" containerID="57569ab7887fe6a4392e9b2b963f1a723efba6f32f8de19f1b7ecbd6f24fab86" Jan 26 11:39:26 crc kubenswrapper[4867]: I0126 11:39:26.821968 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Jan 26 11:39:26 crc kubenswrapper[4867]: I0126 11:39:26.833319 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Jan 26 11:39:26 crc kubenswrapper[4867]: I0126 11:39:26.856121 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 26 11:39:26 crc kubenswrapper[4867]: I0126 11:39:26.868704 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Jan 26 11:39:26 crc kubenswrapper[4867]: I0126 11:39:26.875411 4867 scope.go:117] "RemoveContainer" containerID="d2287295136814e314cfb6d9db6bc18d7fb44b21c8b0577f5b02f3d2ad6749b6" Jan 26 11:39:26 crc kubenswrapper[4867]: I0126 11:39:26.899643 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Jan 26 11:39:26 crc kubenswrapper[4867]: E0126 11:39:26.900342 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a67e72b7-3136-4e60-8192-fa54044ef257" containerName="nova-api-api" Jan 26 11:39:26 crc kubenswrapper[4867]: I0126 11:39:26.900432 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="a67e72b7-3136-4e60-8192-fa54044ef257" 
containerName="nova-api-api" Jan 26 11:39:26 crc kubenswrapper[4867]: E0126 11:39:26.900542 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a67e72b7-3136-4e60-8192-fa54044ef257" containerName="nova-api-log" Jan 26 11:39:26 crc kubenswrapper[4867]: I0126 11:39:26.900604 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="a67e72b7-3136-4e60-8192-fa54044ef257" containerName="nova-api-log" Jan 26 11:39:26 crc kubenswrapper[4867]: E0126 11:39:26.900685 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e7f5d75c-d7d5-44d9-b571-1ab86e4cf156" containerName="nova-scheduler-scheduler" Jan 26 11:39:26 crc kubenswrapper[4867]: I0126 11:39:26.900758 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="e7f5d75c-d7d5-44d9-b571-1ab86e4cf156" containerName="nova-scheduler-scheduler" Jan 26 11:39:26 crc kubenswrapper[4867]: I0126 11:39:26.901060 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="e7f5d75c-d7d5-44d9-b571-1ab86e4cf156" containerName="nova-scheduler-scheduler" Jan 26 11:39:26 crc kubenswrapper[4867]: I0126 11:39:26.901152 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="a67e72b7-3136-4e60-8192-fa54044ef257" containerName="nova-api-log" Jan 26 11:39:26 crc kubenswrapper[4867]: I0126 11:39:26.901253 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="a67e72b7-3136-4e60-8192-fa54044ef257" containerName="nova-api-api" Jan 26 11:39:26 crc kubenswrapper[4867]: I0126 11:39:26.902077 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0"
Jan 26 11:39:26 crc kubenswrapper[4867]: I0126 11:39:26.905738 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data"
Jan 26 11:39:26 crc kubenswrapper[4867]: I0126 11:39:26.914526 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"]
Jan 26 11:39:26 crc kubenswrapper[4867]: I0126 11:39:26.916290 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Jan 26 11:39:26 crc kubenswrapper[4867]: I0126 11:39:26.937722 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data"
Jan 26 11:39:26 crc kubenswrapper[4867]: I0126 11:39:26.942782 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"]
Jan 26 11:39:26 crc kubenswrapper[4867]: I0126 11:39:26.954885 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"]
Jan 26 11:39:27 crc kubenswrapper[4867]: I0126 11:39:27.041131 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4tdhv\" (UniqueName: \"kubernetes.io/projected/ebbccea9-6788-4c17-b9f9-776d3e41b6f5-kube-api-access-4tdhv\") pod \"nova-api-0\" (UID: \"ebbccea9-6788-4c17-b9f9-776d3e41b6f5\") " pod="openstack/nova-api-0"
Jan 26 11:39:27 crc kubenswrapper[4867]: I0126 11:39:27.041169 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ab1d2459-a54a-4e06-a4e7-ff675b803bca-config-data\") pod \"nova-scheduler-0\" (UID: \"ab1d2459-a54a-4e06-a4e7-ff675b803bca\") " pod="openstack/nova-scheduler-0"
Jan 26 11:39:27 crc kubenswrapper[4867]: I0126 11:39:27.041438 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ebbccea9-6788-4c17-b9f9-776d3e41b6f5-config-data\") pod \"nova-api-0\" (UID: \"ebbccea9-6788-4c17-b9f9-776d3e41b6f5\") " pod="openstack/nova-api-0"
Jan 26 11:39:27 crc kubenswrapper[4867]: I0126 11:39:27.041568 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ebbccea9-6788-4c17-b9f9-776d3e41b6f5-logs\") pod \"nova-api-0\" (UID: \"ebbccea9-6788-4c17-b9f9-776d3e41b6f5\") " pod="openstack/nova-api-0"
Jan 26 11:39:27 crc kubenswrapper[4867]: I0126 11:39:27.041616 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ebbccea9-6788-4c17-b9f9-776d3e41b6f5-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"ebbccea9-6788-4c17-b9f9-776d3e41b6f5\") " pod="openstack/nova-api-0"
Jan 26 11:39:27 crc kubenswrapper[4867]: I0126 11:39:27.041724 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ab1d2459-a54a-4e06-a4e7-ff675b803bca-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"ab1d2459-a54a-4e06-a4e7-ff675b803bca\") " pod="openstack/nova-scheduler-0"
Jan 26 11:39:27 crc kubenswrapper[4867]: I0126 11:39:27.041763 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8fwl6\" (UniqueName: \"kubernetes.io/projected/ab1d2459-a54a-4e06-a4e7-ff675b803bca-kube-api-access-8fwl6\") pod \"nova-scheduler-0\" (UID: \"ab1d2459-a54a-4e06-a4e7-ff675b803bca\") " pod="openstack/nova-scheduler-0"
Jan 26 11:39:27 crc kubenswrapper[4867]: I0126 11:39:27.143355 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4tdhv\" (UniqueName: \"kubernetes.io/projected/ebbccea9-6788-4c17-b9f9-776d3e41b6f5-kube-api-access-4tdhv\") pod \"nova-api-0\" (UID: \"ebbccea9-6788-4c17-b9f9-776d3e41b6f5\") " pod="openstack/nova-api-0"
Jan 26 11:39:27 crc kubenswrapper[4867]: I0126 11:39:27.143399 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ab1d2459-a54a-4e06-a4e7-ff675b803bca-config-data\") pod \"nova-scheduler-0\" (UID: \"ab1d2459-a54a-4e06-a4e7-ff675b803bca\") " pod="openstack/nova-scheduler-0"
Jan 26 11:39:27 crc kubenswrapper[4867]: I0126 11:39:27.143470 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ebbccea9-6788-4c17-b9f9-776d3e41b6f5-config-data\") pod \"nova-api-0\" (UID: \"ebbccea9-6788-4c17-b9f9-776d3e41b6f5\") " pod="openstack/nova-api-0"
Jan 26 11:39:27 crc kubenswrapper[4867]: I0126 11:39:27.143501 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ebbccea9-6788-4c17-b9f9-776d3e41b6f5-logs\") pod \"nova-api-0\" (UID: \"ebbccea9-6788-4c17-b9f9-776d3e41b6f5\") " pod="openstack/nova-api-0"
Jan 26 11:39:27 crc kubenswrapper[4867]: I0126 11:39:27.143517 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ebbccea9-6788-4c17-b9f9-776d3e41b6f5-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"ebbccea9-6788-4c17-b9f9-776d3e41b6f5\") " pod="openstack/nova-api-0"
Jan 26 11:39:27 crc kubenswrapper[4867]: I0126 11:39:27.143573 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ab1d2459-a54a-4e06-a4e7-ff675b803bca-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"ab1d2459-a54a-4e06-a4e7-ff675b803bca\") " pod="openstack/nova-scheduler-0"
Jan 26 11:39:27 crc kubenswrapper[4867]: I0126 11:39:27.143594 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8fwl6\" (UniqueName: \"kubernetes.io/projected/ab1d2459-a54a-4e06-a4e7-ff675b803bca-kube-api-access-8fwl6\") pod \"nova-scheduler-0\" (UID: \"ab1d2459-a54a-4e06-a4e7-ff675b803bca\") " pod="openstack/nova-scheduler-0"
Jan 26 11:39:27 crc kubenswrapper[4867]: I0126 11:39:27.144243 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ebbccea9-6788-4c17-b9f9-776d3e41b6f5-logs\") pod \"nova-api-0\" (UID: \"ebbccea9-6788-4c17-b9f9-776d3e41b6f5\") " pod="openstack/nova-api-0"
Jan 26 11:39:27 crc kubenswrapper[4867]: I0126 11:39:27.149568 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ebbccea9-6788-4c17-b9f9-776d3e41b6f5-config-data\") pod \"nova-api-0\" (UID: \"ebbccea9-6788-4c17-b9f9-776d3e41b6f5\") " pod="openstack/nova-api-0"
Jan 26 11:39:27 crc kubenswrapper[4867]: I0126 11:39:27.155812 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ab1d2459-a54a-4e06-a4e7-ff675b803bca-config-data\") pod \"nova-scheduler-0\" (UID: \"ab1d2459-a54a-4e06-a4e7-ff675b803bca\") " pod="openstack/nova-scheduler-0"
Jan 26 11:39:27 crc kubenswrapper[4867]: I0126 11:39:27.155859 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ebbccea9-6788-4c17-b9f9-776d3e41b6f5-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"ebbccea9-6788-4c17-b9f9-776d3e41b6f5\") " pod="openstack/nova-api-0"
Jan 26 11:39:27 crc kubenswrapper[4867]: I0126 11:39:27.156125 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ab1d2459-a54a-4e06-a4e7-ff675b803bca-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"ab1d2459-a54a-4e06-a4e7-ff675b803bca\") " pod="openstack/nova-scheduler-0"
Jan 26 11:39:27 crc kubenswrapper[4867]: I0126 11:39:27.162995 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4tdhv\" (UniqueName: \"kubernetes.io/projected/ebbccea9-6788-4c17-b9f9-776d3e41b6f5-kube-api-access-4tdhv\") pod \"nova-api-0\" (UID: \"ebbccea9-6788-4c17-b9f9-776d3e41b6f5\") " pod="openstack/nova-api-0"
Jan 26 11:39:27 crc kubenswrapper[4867]: I0126 11:39:27.163840 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8fwl6\" (UniqueName: \"kubernetes.io/projected/ab1d2459-a54a-4e06-a4e7-ff675b803bca-kube-api-access-8fwl6\") pod \"nova-scheduler-0\" (UID: \"ab1d2459-a54a-4e06-a4e7-ff675b803bca\") " pod="openstack/nova-scheduler-0"
Jan 26 11:39:27 crc kubenswrapper[4867]: I0126 11:39:27.182719 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-conductor-0"
Jan 26 11:39:27 crc kubenswrapper[4867]: I0126 11:39:27.235966 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0"
Jan 26 11:39:27 crc kubenswrapper[4867]: I0126 11:39:27.245807 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Jan 26 11:39:27 crc kubenswrapper[4867]: I0126 11:39:27.769604 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"]
Jan 26 11:39:27 crc kubenswrapper[4867]: I0126 11:39:27.805996 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"ebbccea9-6788-4c17-b9f9-776d3e41b6f5","Type":"ContainerStarted","Data":"19b24ea3a758bfef4ee0b1374001bcfe686c8a1eabeeae923b0873c346954690"}
Jan 26 11:39:27 crc kubenswrapper[4867]: I0126 11:39:27.893825 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"]
Jan 26 11:39:27 crc kubenswrapper[4867]: W0126 11:39:27.896206 4867 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podab1d2459_a54a_4e06_a4e7_ff675b803bca.slice/crio-705c861d9f414811a3249df9540ff8151ec558745e0c477c5f08eee4edba3a95 WatchSource:0}: Error finding container 705c861d9f414811a3249df9540ff8151ec558745e0c477c5f08eee4edba3a95: Status 404 returned error can't find the container with id 705c861d9f414811a3249df9540ff8151ec558745e0c477c5f08eee4edba3a95
Jan 26 11:39:28 crc kubenswrapper[4867]: I0126 11:39:28.576602 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a67e72b7-3136-4e60-8192-fa54044ef257" path="/var/lib/kubelet/pods/a67e72b7-3136-4e60-8192-fa54044ef257/volumes"
Jan 26 11:39:28 crc kubenswrapper[4867]: I0126 11:39:28.577649 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e7f5d75c-d7d5-44d9-b571-1ab86e4cf156" path="/var/lib/kubelet/pods/e7f5d75c-d7d5-44d9-b571-1ab86e4cf156/volumes"
Jan 26 11:39:28 crc kubenswrapper[4867]: I0126 11:39:28.819446 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"ebbccea9-6788-4c17-b9f9-776d3e41b6f5","Type":"ContainerStarted","Data":"6a05bee3cff38ae0acf1cf2764c7014efaf5f1a99c328e8923b64607ae6f8f3f"}
Jan 26 11:39:28 crc kubenswrapper[4867]: I0126 11:39:28.819495 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"ebbccea9-6788-4c17-b9f9-776d3e41b6f5","Type":"ContainerStarted","Data":"b64f7a88cb71338ee06263c343e81ca1e1375e968dd3a89921436735fea6c32f"}
Jan 26 11:39:28 crc kubenswrapper[4867]: I0126 11:39:28.821141 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"ab1d2459-a54a-4e06-a4e7-ff675b803bca","Type":"ContainerStarted","Data":"c3a8d188db5c5d65fc2dab522bda7fbf3015f1a39c0d63a844d0586cc2b11253"}
Jan 26 11:39:28 crc kubenswrapper[4867]: I0126 11:39:28.821172 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"ab1d2459-a54a-4e06-a4e7-ff675b803bca","Type":"ContainerStarted","Data":"705c861d9f414811a3249df9540ff8151ec558745e0c477c5f08eee4edba3a95"}
Jan 26 11:39:28 crc kubenswrapper[4867]: I0126 11:39:28.842302 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.842286979 podStartE2EDuration="2.842286979s" podCreationTimestamp="2026-01-26 11:39:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 11:39:28.84083332 +0000 UTC m=+1318.539408230" watchObservedRunningTime="2026-01-26 11:39:28.842286979 +0000 UTC m=+1318.540861889"
Jan 26 11:39:28 crc kubenswrapper[4867]: I0126 11:39:28.867755 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.867735339 podStartE2EDuration="2.867735339s" podCreationTimestamp="2026-01-26 11:39:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 11:39:28.855042921 +0000 UTC m=+1318.553617831" watchObservedRunningTime="2026-01-26 11:39:28.867735339 +0000 UTC m=+1318.566310259"
Jan 26 11:39:28 crc kubenswrapper[4867]: I0126 11:39:28.957969 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0"
Jan 26 11:39:30 crc kubenswrapper[4867]: I0126 11:39:30.112018 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0"
Jan 26 11:39:30 crc kubenswrapper[4867]: I0126 11:39:30.112456 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0"
Jan 26 11:39:31 crc kubenswrapper[4867]: I0126 11:39:31.128549 4867 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="5e4436b8-df58-4b0b-9713-75976c443930" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.201:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Jan 26 11:39:31 crc kubenswrapper[4867]: I0126 11:39:31.128594 4867 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="5e4436b8-df58-4b0b-9713-75976c443930" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.201:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Jan 26 11:39:32 crc kubenswrapper[4867]: I0126 11:39:32.237096 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0"
Jan 26 11:39:32 crc kubenswrapper[4867]: I0126 11:39:32.794114 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"]
Jan 26 11:39:32 crc kubenswrapper[4867]: I0126 11:39:32.794336 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/kube-state-metrics-0" podUID="f08d1721-01b8-4573-8446-18ae794fb9e7" containerName="kube-state-metrics" containerID="cri-o://32461a4607b60801c095d5abcb8049f936f24d1ec441cb638ec8cda08f73d803" gracePeriod=30
Jan 26 11:39:33 crc kubenswrapper[4867]: I0126 11:39:33.386815 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0"
Jan 26 11:39:33 crc kubenswrapper[4867]: I0126 11:39:33.470884 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tsjph\" (UniqueName: \"kubernetes.io/projected/f08d1721-01b8-4573-8446-18ae794fb9e7-kube-api-access-tsjph\") pod \"f08d1721-01b8-4573-8446-18ae794fb9e7\" (UID: \"f08d1721-01b8-4573-8446-18ae794fb9e7\") "
Jan 26 11:39:33 crc kubenswrapper[4867]: I0126 11:39:33.487471 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f08d1721-01b8-4573-8446-18ae794fb9e7-kube-api-access-tsjph" (OuterVolumeSpecName: "kube-api-access-tsjph") pod "f08d1721-01b8-4573-8446-18ae794fb9e7" (UID: "f08d1721-01b8-4573-8446-18ae794fb9e7"). InnerVolumeSpecName "kube-api-access-tsjph". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 26 11:39:33 crc kubenswrapper[4867]: I0126 11:39:33.574158 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tsjph\" (UniqueName: \"kubernetes.io/projected/f08d1721-01b8-4573-8446-18ae794fb9e7-kube-api-access-tsjph\") on node \"crc\" DevicePath \"\""
Jan 26 11:39:33 crc kubenswrapper[4867]: I0126 11:39:33.863472 4867 generic.go:334] "Generic (PLEG): container finished" podID="f08d1721-01b8-4573-8446-18ae794fb9e7" containerID="32461a4607b60801c095d5abcb8049f936f24d1ec441cb638ec8cda08f73d803" exitCode=2
Jan 26 11:39:33 crc kubenswrapper[4867]: I0126 11:39:33.863524 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0"
Jan 26 11:39:33 crc kubenswrapper[4867]: I0126 11:39:33.863578 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"f08d1721-01b8-4573-8446-18ae794fb9e7","Type":"ContainerDied","Data":"32461a4607b60801c095d5abcb8049f936f24d1ec441cb638ec8cda08f73d803"}
Jan 26 11:39:33 crc kubenswrapper[4867]: I0126 11:39:33.863605 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"f08d1721-01b8-4573-8446-18ae794fb9e7","Type":"ContainerDied","Data":"8ff218a158ea8a439be43553bdf0f1b067bc09aae4d59b9731655f2bcd977df1"}
Jan 26 11:39:33 crc kubenswrapper[4867]: I0126 11:39:33.863622 4867 scope.go:117] "RemoveContainer" containerID="32461a4607b60801c095d5abcb8049f936f24d1ec441cb638ec8cda08f73d803"
Jan 26 11:39:33 crc kubenswrapper[4867]: I0126 11:39:33.866986 4867 generic.go:334] "Generic (PLEG): container finished" podID="1a985fff-3d59-40fa-9cae-fd0f2cc9de70" containerID="37bf71d4a517ad0db7d30f38d0886922114a28c3628b12b2538206039f6ba59b" exitCode=0
Jan 26 11:39:33 crc kubenswrapper[4867]: I0126 11:39:33.867030 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-conductor-0" event={"ID":"1a985fff-3d59-40fa-9cae-fd0f2cc9de70","Type":"ContainerDied","Data":"37bf71d4a517ad0db7d30f38d0886922114a28c3628b12b2538206039f6ba59b"}
Jan 26 11:39:33 crc kubenswrapper[4867]: I0126 11:39:33.889694 4867 scope.go:117] "RemoveContainer" containerID="32461a4607b60801c095d5abcb8049f936f24d1ec441cb638ec8cda08f73d803"
Jan 26 11:39:33 crc kubenswrapper[4867]: E0126 11:39:33.890132 4867 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"32461a4607b60801c095d5abcb8049f936f24d1ec441cb638ec8cda08f73d803\": container with ID starting with 32461a4607b60801c095d5abcb8049f936f24d1ec441cb638ec8cda08f73d803 not found: ID does not exist" containerID="32461a4607b60801c095d5abcb8049f936f24d1ec441cb638ec8cda08f73d803"
Jan 26 11:39:33 crc kubenswrapper[4867]: I0126 11:39:33.890172 4867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"32461a4607b60801c095d5abcb8049f936f24d1ec441cb638ec8cda08f73d803"} err="failed to get container status \"32461a4607b60801c095d5abcb8049f936f24d1ec441cb638ec8cda08f73d803\": rpc error: code = NotFound desc = could not find container \"32461a4607b60801c095d5abcb8049f936f24d1ec441cb638ec8cda08f73d803\": container with ID starting with 32461a4607b60801c095d5abcb8049f936f24d1ec441cb638ec8cda08f73d803 not found: ID does not exist"
Jan 26 11:39:34 crc kubenswrapper[4867]: I0126 11:39:34.029514 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"]
Jan 26 11:39:34 crc kubenswrapper[4867]: I0126 11:39:34.043630 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/kube-state-metrics-0"]
Jan 26 11:39:34 crc kubenswrapper[4867]: I0126 11:39:34.055613 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"]
Jan 26 11:39:34 crc kubenswrapper[4867]: E0126 11:39:34.056470 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f08d1721-01b8-4573-8446-18ae794fb9e7" containerName="kube-state-metrics"
Jan 26 11:39:34 crc kubenswrapper[4867]: I0126 11:39:34.056501 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="f08d1721-01b8-4573-8446-18ae794fb9e7" containerName="kube-state-metrics"
Jan 26 11:39:34 crc kubenswrapper[4867]: I0126 11:39:34.056759 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="f08d1721-01b8-4573-8446-18ae794fb9e7" containerName="kube-state-metrics"
Jan 26 11:39:34 crc kubenswrapper[4867]: I0126 11:39:34.057647 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0"
Jan 26 11:39:34 crc kubenswrapper[4867]: I0126 11:39:34.059438 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"kube-state-metrics-tls-config"
Jan 26 11:39:34 crc kubenswrapper[4867]: I0126 11:39:34.060850 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-kube-state-metrics-svc"
Jan 26 11:39:34 crc kubenswrapper[4867]: I0126 11:39:34.067680 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"]
Jan 26 11:39:34 crc kubenswrapper[4867]: I0126 11:39:34.082353 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/cd1d027e-98b3-4c45-981e-a60ad4cb8748-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"cd1d027e-98b3-4c45-981e-a60ad4cb8748\") " pod="openstack/kube-state-metrics-0"
Jan 26 11:39:34 crc kubenswrapper[4867]: I0126 11:39:34.082646 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/cd1d027e-98b3-4c45-981e-a60ad4cb8748-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"cd1d027e-98b3-4c45-981e-a60ad4cb8748\") " pod="openstack/kube-state-metrics-0"
Jan 26 11:39:34 crc kubenswrapper[4867]: I0126 11:39:34.082831 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cd1d027e-98b3-4c45-981e-a60ad4cb8748-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"cd1d027e-98b3-4c45-981e-a60ad4cb8748\") " pod="openstack/kube-state-metrics-0"
Jan 26 11:39:34 crc kubenswrapper[4867]: I0126 11:39:34.082948 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vntpx\" (UniqueName: \"kubernetes.io/projected/cd1d027e-98b3-4c45-981e-a60ad4cb8748-kube-api-access-vntpx\") pod \"kube-state-metrics-0\" (UID: \"cd1d027e-98b3-4c45-981e-a60ad4cb8748\") " pod="openstack/kube-state-metrics-0"
Jan 26 11:39:34 crc kubenswrapper[4867]: I0126 11:39:34.184153 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/cd1d027e-98b3-4c45-981e-a60ad4cb8748-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"cd1d027e-98b3-4c45-981e-a60ad4cb8748\") " pod="openstack/kube-state-metrics-0"
Jan 26 11:39:34 crc kubenswrapper[4867]: I0126 11:39:34.184267 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/cd1d027e-98b3-4c45-981e-a60ad4cb8748-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"cd1d027e-98b3-4c45-981e-a60ad4cb8748\") " pod="openstack/kube-state-metrics-0"
Jan 26 11:39:34 crc kubenswrapper[4867]: I0126 11:39:34.184323 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cd1d027e-98b3-4c45-981e-a60ad4cb8748-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"cd1d027e-98b3-4c45-981e-a60ad4cb8748\") " pod="openstack/kube-state-metrics-0"
Jan 26 11:39:34 crc kubenswrapper[4867]: I0126 11:39:34.184364 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vntpx\" (UniqueName: \"kubernetes.io/projected/cd1d027e-98b3-4c45-981e-a60ad4cb8748-kube-api-access-vntpx\") pod \"kube-state-metrics-0\" (UID: \"cd1d027e-98b3-4c45-981e-a60ad4cb8748\") " pod="openstack/kube-state-metrics-0"
Jan 26 11:39:34 crc kubenswrapper[4867]: I0126 11:39:34.190083 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/cd1d027e-98b3-4c45-981e-a60ad4cb8748-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"cd1d027e-98b3-4c45-981e-a60ad4cb8748\") " pod="openstack/kube-state-metrics-0"
Jan 26 11:39:34 crc kubenswrapper[4867]: I0126 11:39:34.190422 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cd1d027e-98b3-4c45-981e-a60ad4cb8748-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"cd1d027e-98b3-4c45-981e-a60ad4cb8748\") " pod="openstack/kube-state-metrics-0"
Jan 26 11:39:34 crc kubenswrapper[4867]: I0126 11:39:34.197988 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/cd1d027e-98b3-4c45-981e-a60ad4cb8748-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"cd1d027e-98b3-4c45-981e-a60ad4cb8748\") " pod="openstack/kube-state-metrics-0"
Jan 26 11:39:34 crc kubenswrapper[4867]: I0126 11:39:34.201745 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vntpx\" (UniqueName: \"kubernetes.io/projected/cd1d027e-98b3-4c45-981e-a60ad4cb8748-kube-api-access-vntpx\") pod \"kube-state-metrics-0\" (UID: \"cd1d027e-98b3-4c45-981e-a60ad4cb8748\") " pod="openstack/kube-state-metrics-0"
Jan 26 11:39:34 crc kubenswrapper[4867]: I0126 11:39:34.416817 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0"
Jan 26 11:39:34 crc kubenswrapper[4867]: I0126 11:39:34.589282 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f08d1721-01b8-4573-8446-18ae794fb9e7" path="/var/lib/kubelet/pods/f08d1721-01b8-4573-8446-18ae794fb9e7/volumes"
Jan 26 11:39:34 crc kubenswrapper[4867]: I0126 11:39:34.882092 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-conductor-0" event={"ID":"1a985fff-3d59-40fa-9cae-fd0f2cc9de70","Type":"ContainerStarted","Data":"2c4afcc96b91296eb55acc3fd50894f577c34863ca7971da4f15855ab2f92a4d"}
Jan 26 11:39:34 crc kubenswrapper[4867]: I0126 11:39:34.922844 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"]
Jan 26 11:39:34 crc kubenswrapper[4867]: W0126 11:39:34.964537 4867 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podcd1d027e_98b3_4c45_981e_a60ad4cb8748.slice/crio-03acca181e64ca7b5a9337ffc44ac314b8d0b71540be98ee257490f86bfcb376 WatchSource:0}: Error finding container 03acca181e64ca7b5a9337ffc44ac314b8d0b71540be98ee257490f86bfcb376: Status 404 returned error can't find the container with id 03acca181e64ca7b5a9337ffc44ac314b8d0b71540be98ee257490f86bfcb376
Jan 26 11:39:35 crc kubenswrapper[4867]: I0126 11:39:35.173896 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Jan 26 11:39:35 crc kubenswrapper[4867]: I0126 11:39:35.174234 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="f2527494-e18a-4e87-ab80-0b922ad79c65" containerName="ceilometer-central-agent" containerID="cri-o://74d12bdd7eb3891c66ec91f027028bac0569805ab48cb79d601e2abc6c1009f0" gracePeriod=30
Jan 26 11:39:35 crc kubenswrapper[4867]: I0126 11:39:35.174370 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="f2527494-e18a-4e87-ab80-0b922ad79c65" containerName="proxy-httpd" containerID="cri-o://c7d55fbaef86017a19c096ed9ade770ccedb1a70e41e9a455853e6ad3c0a836f" gracePeriod=30
Jan 26 11:39:35 crc kubenswrapper[4867]: I0126 11:39:35.174404 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="f2527494-e18a-4e87-ab80-0b922ad79c65" containerName="sg-core" containerID="cri-o://89e8f134853375e7d1532482938d4932b3ce767bd9f294d31f3634cca3374076" gracePeriod=30
Jan 26 11:39:35 crc kubenswrapper[4867]: I0126 11:39:35.174433 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="f2527494-e18a-4e87-ab80-0b922ad79c65" containerName="ceilometer-notification-agent" containerID="cri-o://a4138ea0704d05d9bc6272966f2fdc4fd212a5c49b6d40cba3d616660c6a1826" gracePeriod=30
Jan 26 11:39:35 crc kubenswrapper[4867]: I0126 11:39:35.897854 4867 generic.go:334] "Generic (PLEG): container finished" podID="f2527494-e18a-4e87-ab80-0b922ad79c65" containerID="c7d55fbaef86017a19c096ed9ade770ccedb1a70e41e9a455853e6ad3c0a836f" exitCode=0
Jan 26 11:39:35 crc kubenswrapper[4867]: I0126 11:39:35.898409 4867 generic.go:334] "Generic (PLEG): container finished" podID="f2527494-e18a-4e87-ab80-0b922ad79c65" containerID="89e8f134853375e7d1532482938d4932b3ce767bd9f294d31f3634cca3374076" exitCode=2
Jan 26 11:39:35 crc kubenswrapper[4867]: I0126 11:39:35.898419 4867 generic.go:334] "Generic (PLEG): container finished" podID="f2527494-e18a-4e87-ab80-0b922ad79c65" containerID="74d12bdd7eb3891c66ec91f027028bac0569805ab48cb79d601e2abc6c1009f0" exitCode=0
Jan 26 11:39:35 crc kubenswrapper[4867]: I0126 11:39:35.897943 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f2527494-e18a-4e87-ab80-0b922ad79c65","Type":"ContainerDied","Data":"c7d55fbaef86017a19c096ed9ade770ccedb1a70e41e9a455853e6ad3c0a836f"}
Jan 26 11:39:35 crc kubenswrapper[4867]: I0126 11:39:35.898478 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f2527494-e18a-4e87-ab80-0b922ad79c65","Type":"ContainerDied","Data":"89e8f134853375e7d1532482938d4932b3ce767bd9f294d31f3634cca3374076"}
Jan 26 11:39:35 crc kubenswrapper[4867]: I0126 11:39:35.898508 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f2527494-e18a-4e87-ab80-0b922ad79c65","Type":"ContainerDied","Data":"74d12bdd7eb3891c66ec91f027028bac0569805ab48cb79d601e2abc6c1009f0"}
Jan 26 11:39:35 crc kubenswrapper[4867]: I0126 11:39:35.900273 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"cd1d027e-98b3-4c45-981e-a60ad4cb8748","Type":"ContainerStarted","Data":"248b074c3c8d2dcf864548233b2be7ed791d56ed1bb7af9f6a0b13c60fb2dc4d"}
Jan 26 11:39:35 crc kubenswrapper[4867]: I0126 11:39:35.900315 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"cd1d027e-98b3-4c45-981e-a60ad4cb8748","Type":"ContainerStarted","Data":"03acca181e64ca7b5a9337ffc44ac314b8d0b71540be98ee257490f86bfcb376"}
Jan 26 11:39:35 crc kubenswrapper[4867]: I0126 11:39:35.900392 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0"
Jan 26 11:39:35 crc kubenswrapper[4867]: I0126 11:39:35.903758 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-conductor-0" event={"ID":"1a985fff-3d59-40fa-9cae-fd0f2cc9de70","Type":"ContainerStarted","Data":"5f0df369a077822172c49f81676c64771d968687ebe6097fbf9f89320794ef60"}
Jan 26 11:39:35 crc kubenswrapper[4867]: I0126 11:39:35.903789 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-conductor-0" event={"ID":"1a985fff-3d59-40fa-9cae-fd0f2cc9de70","Type":"ContainerStarted","Data":"392a4c99a21d5f21743a8fe34c90a884bc0615859f9f795ae7ec7511ef9680dd"}
Jan 26 11:39:35 crc kubenswrapper[4867]: I0126 11:39:35.904159 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ironic-conductor-0"
Jan 26 11:39:35 crc kubenswrapper[4867]: I0126 11:39:35.931456 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=1.546341379 podStartE2EDuration="1.931439104s" podCreationTimestamp="2026-01-26 11:39:34 +0000 UTC" firstStartedPulling="2026-01-26 11:39:34.977016105 +0000 UTC m=+1324.675591015" lastFinishedPulling="2026-01-26 11:39:35.36211383 +0000 UTC m=+1325.060688740" observedRunningTime="2026-01-26 11:39:35.92112916 +0000 UTC m=+1325.619704070" watchObservedRunningTime="2026-01-26 11:39:35.931439104 +0000 UTC m=+1325.630014014"
Jan 26 11:39:35 crc kubenswrapper[4867]: I0126 11:39:35.963776 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ironic-conductor-0" podStartSLOduration=73.493627717 podStartE2EDuration="2m18.963753602s" podCreationTimestamp="2026-01-26 11:37:17 +0000 UTC" firstStartedPulling="2026-01-26 11:37:28.530281605 +0000 UTC m=+1198.228856515" lastFinishedPulling="2026-01-26 11:38:34.00040747 +0000 UTC m=+1263.698982400" observedRunningTime="2026-01-26 11:39:35.951919207 +0000 UTC m=+1325.650494127" watchObservedRunningTime="2026-01-26 11:39:35.963753602 +0000 UTC m=+1325.662328522"
Jan 26 11:39:36 crc kubenswrapper[4867]: I0126 11:39:36.293702 4867 patch_prober.go:28] interesting pod/machine-config-daemon-g6cth container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 26 11:39:36 crc kubenswrapper[4867]: I0126 11:39:36.293764 4867 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-g6cth" podUID="115cad9f-057f-4e63-b408-8fa7a358a191" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 26 11:39:37 crc kubenswrapper[4867]: I0126 11:39:37.043426 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ironic-conductor-0"
Jan 26 11:39:37 crc kubenswrapper[4867]: I0126 11:39:37.236245 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0"
Jan 26 11:39:37 crc kubenswrapper[4867]: I0126 11:39:37.247305 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0"
Jan 26 11:39:37 crc kubenswrapper[4867]: I0126 11:39:37.247357 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0"
Jan 26 11:39:37 crc kubenswrapper[4867]: I0126 11:39:37.264701 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0"
Jan 26 11:39:37 crc kubenswrapper[4867]: I0126 11:39:37.972383 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0"
Jan 26 11:39:38 crc kubenswrapper[4867]: I0126 11:39:38.330431 4867 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="ebbccea9-6788-4c17-b9f9-776d3e41b6f5" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.204:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 26 11:39:38 crc kubenswrapper[4867]: I0126 11:39:38.330479 4867 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="ebbccea9-6788-4c17-b9f9-776d3e41b6f5" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.204:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 26 11:39:38 crc kubenswrapper[4867]: I0126 11:39:38.354784 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ironic-conductor-0"
Jan 26 11:39:38 crc kubenswrapper[4867]: I0126 11:39:38.564036 4867 scope.go:117] "RemoveContainer" containerID="f6256bd71627a09be606483dad246bfdbe5d419a8e586a59a182396bb6d1f10d"
Jan 26 11:39:39 crc kubenswrapper[4867]: I0126 11:39:39.439763 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Jan 26 11:39:39 crc kubenswrapper[4867]: I0126 11:39:39.601705 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f2527494-e18a-4e87-ab80-0b922ad79c65-scripts\") pod \"f2527494-e18a-4e87-ab80-0b922ad79c65\" (UID: \"f2527494-e18a-4e87-ab80-0b922ad79c65\") "
Jan 26 11:39:39 crc kubenswrapper[4867]: I0126 11:39:39.601772 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f2527494-e18a-4e87-ab80-0b922ad79c65-sg-core-conf-yaml\") pod \"f2527494-e18a-4e87-ab80-0b922ad79c65\" (UID: \"f2527494-e18a-4e87-ab80-0b922ad79c65\") "
Jan 26 11:39:39 crc kubenswrapper[4867]: I0126 11:39:39.601871 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w8kq7\" (UniqueName: \"kubernetes.io/projected/f2527494-e18a-4e87-ab80-0b922ad79c65-kube-api-access-w8kq7\") pod \"f2527494-e18a-4e87-ab80-0b922ad79c65\" (UID: \"f2527494-e18a-4e87-ab80-0b922ad79c65\") "
Jan 26 11:39:39 crc kubenswrapper[4867]: I0126 11:39:39.601907 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f2527494-e18a-4e87-ab80-0b922ad79c65-log-httpd\") pod \"f2527494-e18a-4e87-ab80-0b922ad79c65\" (UID: \"f2527494-e18a-4e87-ab80-0b922ad79c65\") "
Jan 26 11:39:39 crc kubenswrapper[4867]: I0126 11:39:39.602480 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f2527494-e18a-4e87-ab80-0b922ad79c65-run-httpd\") pod \"f2527494-e18a-4e87-ab80-0b922ad79c65\" (UID: \"f2527494-e18a-4e87-ab80-0b922ad79c65\") "
Jan 26 11:39:39 crc kubenswrapper[4867]: I0126 11:39:39.602522 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f2527494-e18a-4e87-ab80-0b922ad79c65-combined-ca-bundle\") pod \"f2527494-e18a-4e87-ab80-0b922ad79c65\" (UID: \"f2527494-e18a-4e87-ab80-0b922ad79c65\") "
Jan 26 11:39:39 crc kubenswrapper[4867]: I0126 11:39:39.602606 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f2527494-e18a-4e87-ab80-0b922ad79c65-config-data\") pod \"f2527494-e18a-4e87-ab80-0b922ad79c65\" (UID: \"f2527494-e18a-4e87-ab80-0b922ad79c65\") "
Jan 26 11:39:39 crc kubenswrapper[4867]: I0126 11:39:39.603455 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f2527494-e18a-4e87-ab80-0b922ad79c65-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "f2527494-e18a-4e87-ab80-0b922ad79c65" (UID: "f2527494-e18a-4e87-ab80-0b922ad79c65"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 26 11:39:39 crc kubenswrapper[4867]: I0126 11:39:39.604278 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f2527494-e18a-4e87-ab80-0b922ad79c65-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "f2527494-e18a-4e87-ab80-0b922ad79c65" (UID: "f2527494-e18a-4e87-ab80-0b922ad79c65"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 26 11:39:39 crc kubenswrapper[4867]: I0126 11:39:39.620400 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f2527494-e18a-4e87-ab80-0b922ad79c65-scripts" (OuterVolumeSpecName: "scripts") pod "f2527494-e18a-4e87-ab80-0b922ad79c65" (UID: "f2527494-e18a-4e87-ab80-0b922ad79c65"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 11:39:39 crc kubenswrapper[4867]: I0126 11:39:39.625133 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f2527494-e18a-4e87-ab80-0b922ad79c65-kube-api-access-w8kq7" (OuterVolumeSpecName: "kube-api-access-w8kq7") pod "f2527494-e18a-4e87-ab80-0b922ad79c65" (UID: "f2527494-e18a-4e87-ab80-0b922ad79c65"). InnerVolumeSpecName "kube-api-access-w8kq7". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 26 11:39:39 crc kubenswrapper[4867]: I0126 11:39:39.631923 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f2527494-e18a-4e87-ab80-0b922ad79c65-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "f2527494-e18a-4e87-ab80-0b922ad79c65" (UID: "f2527494-e18a-4e87-ab80-0b922ad79c65"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 11:39:39 crc kubenswrapper[4867]: I0126 11:39:39.719905 4867 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f2527494-e18a-4e87-ab80-0b922ad79c65-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 26 11:39:39 crc kubenswrapper[4867]: I0126 11:39:39.719941 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w8kq7\" (UniqueName: \"kubernetes.io/projected/f2527494-e18a-4e87-ab80-0b922ad79c65-kube-api-access-w8kq7\") on node \"crc\" DevicePath \"\"" Jan 26 11:39:39 crc kubenswrapper[4867]: I0126 11:39:39.719952 4867 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f2527494-e18a-4e87-ab80-0b922ad79c65-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 26 11:39:39 crc kubenswrapper[4867]: I0126 11:39:39.719962 4867 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f2527494-e18a-4e87-ab80-0b922ad79c65-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 26 11:39:39 crc kubenswrapper[4867]: I0126 11:39:39.719971 4867 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f2527494-e18a-4e87-ab80-0b922ad79c65-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 11:39:39 crc kubenswrapper[4867]: I0126 11:39:39.786047 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f2527494-e18a-4e87-ab80-0b922ad79c65-config-data" (OuterVolumeSpecName: "config-data") pod "f2527494-e18a-4e87-ab80-0b922ad79c65" (UID: "f2527494-e18a-4e87-ab80-0b922ad79c65"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 11:39:39 crc kubenswrapper[4867]: I0126 11:39:39.806946 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f2527494-e18a-4e87-ab80-0b922ad79c65-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f2527494-e18a-4e87-ab80-0b922ad79c65" (UID: "f2527494-e18a-4e87-ab80-0b922ad79c65"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 11:39:39 crc kubenswrapper[4867]: I0126 11:39:39.821197 4867 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f2527494-e18a-4e87-ab80-0b922ad79c65-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 11:39:39 crc kubenswrapper[4867]: I0126 11:39:39.821243 4867 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f2527494-e18a-4e87-ab80-0b922ad79c65-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 11:39:39 crc kubenswrapper[4867]: I0126 11:39:39.942174 4867 generic.go:334] "Generic (PLEG): container finished" podID="f2527494-e18a-4e87-ab80-0b922ad79c65" containerID="a4138ea0704d05d9bc6272966f2fdc4fd212a5c49b6d40cba3d616660c6a1826" exitCode=0 Jan 26 11:39:39 crc kubenswrapper[4867]: I0126 11:39:39.942262 4867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 26 11:39:39 crc kubenswrapper[4867]: I0126 11:39:39.942262 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f2527494-e18a-4e87-ab80-0b922ad79c65","Type":"ContainerDied","Data":"a4138ea0704d05d9bc6272966f2fdc4fd212a5c49b6d40cba3d616660c6a1826"} Jan 26 11:39:39 crc kubenswrapper[4867]: I0126 11:39:39.942390 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f2527494-e18a-4e87-ab80-0b922ad79c65","Type":"ContainerDied","Data":"1f4375d6861706b6e8448de6c735ff7c7e631b260c3ba4019af63ce5a87244a2"} Jan 26 11:39:39 crc kubenswrapper[4867]: I0126 11:39:39.942409 4867 scope.go:117] "RemoveContainer" containerID="c7d55fbaef86017a19c096ed9ade770ccedb1a70e41e9a455853e6ad3c0a836f" Jan 26 11:39:39 crc kubenswrapper[4867]: I0126 11:39:39.944716 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-neutron-agent-795fb7c76b-9ndwh" event={"ID":"a2167905-2856-4125-81fd-a2430fe558f9","Type":"ContainerStarted","Data":"38c6662b08fabe3cef1dc40694336706463b0cc3f10032687613e445e214181e"} Jan 26 11:39:39 crc kubenswrapper[4867]: I0126 11:39:39.944993 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ironic-neutron-agent-795fb7c76b-9ndwh" Jan 26 11:39:39 crc kubenswrapper[4867]: I0126 11:39:39.985551 4867 scope.go:117] "RemoveContainer" containerID="89e8f134853375e7d1532482938d4932b3ce767bd9f294d31f3634cca3374076" Jan 26 11:39:40 crc kubenswrapper[4867]: I0126 11:39:40.016554 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 26 11:39:40 crc kubenswrapper[4867]: I0126 11:39:40.025195 4867 scope.go:117] "RemoveContainer" containerID="a4138ea0704d05d9bc6272966f2fdc4fd212a5c49b6d40cba3d616660c6a1826" Jan 26 11:39:40 crc kubenswrapper[4867]: I0126 11:39:40.029183 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 26 
11:39:40 crc kubenswrapper[4867]: I0126 11:39:40.040728 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 26 11:39:40 crc kubenswrapper[4867]: E0126 11:39:40.041479 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f2527494-e18a-4e87-ab80-0b922ad79c65" containerName="ceilometer-central-agent" Jan 26 11:39:40 crc kubenswrapper[4867]: I0126 11:39:40.041526 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="f2527494-e18a-4e87-ab80-0b922ad79c65" containerName="ceilometer-central-agent" Jan 26 11:39:40 crc kubenswrapper[4867]: E0126 11:39:40.041560 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f2527494-e18a-4e87-ab80-0b922ad79c65" containerName="ceilometer-notification-agent" Jan 26 11:39:40 crc kubenswrapper[4867]: I0126 11:39:40.041571 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="f2527494-e18a-4e87-ab80-0b922ad79c65" containerName="ceilometer-notification-agent" Jan 26 11:39:40 crc kubenswrapper[4867]: E0126 11:39:40.041602 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f2527494-e18a-4e87-ab80-0b922ad79c65" containerName="proxy-httpd" Jan 26 11:39:40 crc kubenswrapper[4867]: I0126 11:39:40.041609 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="f2527494-e18a-4e87-ab80-0b922ad79c65" containerName="proxy-httpd" Jan 26 11:39:40 crc kubenswrapper[4867]: E0126 11:39:40.041628 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f2527494-e18a-4e87-ab80-0b922ad79c65" containerName="sg-core" Jan 26 11:39:40 crc kubenswrapper[4867]: I0126 11:39:40.041636 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="f2527494-e18a-4e87-ab80-0b922ad79c65" containerName="sg-core" Jan 26 11:39:40 crc kubenswrapper[4867]: I0126 11:39:40.041893 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="f2527494-e18a-4e87-ab80-0b922ad79c65" containerName="ceilometer-notification-agent" Jan 26 11:39:40 crc 
kubenswrapper[4867]: I0126 11:39:40.041916 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="f2527494-e18a-4e87-ab80-0b922ad79c65" containerName="ceilometer-central-agent" Jan 26 11:39:40 crc kubenswrapper[4867]: I0126 11:39:40.041930 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="f2527494-e18a-4e87-ab80-0b922ad79c65" containerName="sg-core" Jan 26 11:39:40 crc kubenswrapper[4867]: I0126 11:39:40.041945 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="f2527494-e18a-4e87-ab80-0b922ad79c65" containerName="proxy-httpd" Jan 26 11:39:40 crc kubenswrapper[4867]: I0126 11:39:40.043997 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 26 11:39:40 crc kubenswrapper[4867]: I0126 11:39:40.049415 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 26 11:39:40 crc kubenswrapper[4867]: I0126 11:39:40.052595 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 26 11:39:40 crc kubenswrapper[4867]: I0126 11:39:40.052892 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Jan 26 11:39:40 crc kubenswrapper[4867]: I0126 11:39:40.052926 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 26 11:39:40 crc kubenswrapper[4867]: I0126 11:39:40.059156 4867 scope.go:117] "RemoveContainer" containerID="74d12bdd7eb3891c66ec91f027028bac0569805ab48cb79d601e2abc6c1009f0" Jan 26 11:39:40 crc kubenswrapper[4867]: I0126 11:39:40.082281 4867 scope.go:117] "RemoveContainer" containerID="c7d55fbaef86017a19c096ed9ade770ccedb1a70e41e9a455853e6ad3c0a836f" Jan 26 11:39:40 crc kubenswrapper[4867]: E0126 11:39:40.082809 4867 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"c7d55fbaef86017a19c096ed9ade770ccedb1a70e41e9a455853e6ad3c0a836f\": container with ID starting with c7d55fbaef86017a19c096ed9ade770ccedb1a70e41e9a455853e6ad3c0a836f not found: ID does not exist" containerID="c7d55fbaef86017a19c096ed9ade770ccedb1a70e41e9a455853e6ad3c0a836f" Jan 26 11:39:40 crc kubenswrapper[4867]: I0126 11:39:40.082857 4867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c7d55fbaef86017a19c096ed9ade770ccedb1a70e41e9a455853e6ad3c0a836f"} err="failed to get container status \"c7d55fbaef86017a19c096ed9ade770ccedb1a70e41e9a455853e6ad3c0a836f\": rpc error: code = NotFound desc = could not find container \"c7d55fbaef86017a19c096ed9ade770ccedb1a70e41e9a455853e6ad3c0a836f\": container with ID starting with c7d55fbaef86017a19c096ed9ade770ccedb1a70e41e9a455853e6ad3c0a836f not found: ID does not exist" Jan 26 11:39:40 crc kubenswrapper[4867]: I0126 11:39:40.082883 4867 scope.go:117] "RemoveContainer" containerID="89e8f134853375e7d1532482938d4932b3ce767bd9f294d31f3634cca3374076" Jan 26 11:39:40 crc kubenswrapper[4867]: E0126 11:39:40.083251 4867 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"89e8f134853375e7d1532482938d4932b3ce767bd9f294d31f3634cca3374076\": container with ID starting with 89e8f134853375e7d1532482938d4932b3ce767bd9f294d31f3634cca3374076 not found: ID does not exist" containerID="89e8f134853375e7d1532482938d4932b3ce767bd9f294d31f3634cca3374076" Jan 26 11:39:40 crc kubenswrapper[4867]: I0126 11:39:40.083299 4867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"89e8f134853375e7d1532482938d4932b3ce767bd9f294d31f3634cca3374076"} err="failed to get container status \"89e8f134853375e7d1532482938d4932b3ce767bd9f294d31f3634cca3374076\": rpc error: code = NotFound desc = could not find container \"89e8f134853375e7d1532482938d4932b3ce767bd9f294d31f3634cca3374076\": container with ID 
starting with 89e8f134853375e7d1532482938d4932b3ce767bd9f294d31f3634cca3374076 not found: ID does not exist" Jan 26 11:39:40 crc kubenswrapper[4867]: I0126 11:39:40.083328 4867 scope.go:117] "RemoveContainer" containerID="a4138ea0704d05d9bc6272966f2fdc4fd212a5c49b6d40cba3d616660c6a1826" Jan 26 11:39:40 crc kubenswrapper[4867]: E0126 11:39:40.083780 4867 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a4138ea0704d05d9bc6272966f2fdc4fd212a5c49b6d40cba3d616660c6a1826\": container with ID starting with a4138ea0704d05d9bc6272966f2fdc4fd212a5c49b6d40cba3d616660c6a1826 not found: ID does not exist" containerID="a4138ea0704d05d9bc6272966f2fdc4fd212a5c49b6d40cba3d616660c6a1826" Jan 26 11:39:40 crc kubenswrapper[4867]: I0126 11:39:40.083812 4867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a4138ea0704d05d9bc6272966f2fdc4fd212a5c49b6d40cba3d616660c6a1826"} err="failed to get container status \"a4138ea0704d05d9bc6272966f2fdc4fd212a5c49b6d40cba3d616660c6a1826\": rpc error: code = NotFound desc = could not find container \"a4138ea0704d05d9bc6272966f2fdc4fd212a5c49b6d40cba3d616660c6a1826\": container with ID starting with a4138ea0704d05d9bc6272966f2fdc4fd212a5c49b6d40cba3d616660c6a1826 not found: ID does not exist" Jan 26 11:39:40 crc kubenswrapper[4867]: I0126 11:39:40.083835 4867 scope.go:117] "RemoveContainer" containerID="74d12bdd7eb3891c66ec91f027028bac0569805ab48cb79d601e2abc6c1009f0" Jan 26 11:39:40 crc kubenswrapper[4867]: E0126 11:39:40.084086 4867 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"74d12bdd7eb3891c66ec91f027028bac0569805ab48cb79d601e2abc6c1009f0\": container with ID starting with 74d12bdd7eb3891c66ec91f027028bac0569805ab48cb79d601e2abc6c1009f0 not found: ID does not exist" containerID="74d12bdd7eb3891c66ec91f027028bac0569805ab48cb79d601e2abc6c1009f0" Jan 26 
11:39:40 crc kubenswrapper[4867]: I0126 11:39:40.084111 4867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"74d12bdd7eb3891c66ec91f027028bac0569805ab48cb79d601e2abc6c1009f0"} err="failed to get container status \"74d12bdd7eb3891c66ec91f027028bac0569805ab48cb79d601e2abc6c1009f0\": rpc error: code = NotFound desc = could not find container \"74d12bdd7eb3891c66ec91f027028bac0569805ab48cb79d601e2abc6c1009f0\": container with ID starting with 74d12bdd7eb3891c66ec91f027028bac0569805ab48cb79d601e2abc6c1009f0 not found: ID does not exist" Jan 26 11:39:40 crc kubenswrapper[4867]: I0126 11:39:40.116934 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Jan 26 11:39:40 crc kubenswrapper[4867]: I0126 11:39:40.117309 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Jan 26 11:39:40 crc kubenswrapper[4867]: I0126 11:39:40.121908 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Jan 26 11:39:40 crc kubenswrapper[4867]: I0126 11:39:40.227539 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/28c6ba40-9005-44eb-ba98-a34bbd586d3c-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"28c6ba40-9005-44eb-ba98-a34bbd586d3c\") " pod="openstack/ceilometer-0" Jan 26 11:39:40 crc kubenswrapper[4867]: I0126 11:39:40.227692 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/28c6ba40-9005-44eb-ba98-a34bbd586d3c-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"28c6ba40-9005-44eb-ba98-a34bbd586d3c\") " pod="openstack/ceilometer-0" Jan 26 11:39:40 crc kubenswrapper[4867]: I0126 11:39:40.227728 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/28c6ba40-9005-44eb-ba98-a34bbd586d3c-config-data\") pod \"ceilometer-0\" (UID: \"28c6ba40-9005-44eb-ba98-a34bbd586d3c\") " pod="openstack/ceilometer-0" Jan 26 11:39:40 crc kubenswrapper[4867]: I0126 11:39:40.227746 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/28c6ba40-9005-44eb-ba98-a34bbd586d3c-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"28c6ba40-9005-44eb-ba98-a34bbd586d3c\") " pod="openstack/ceilometer-0" Jan 26 11:39:40 crc kubenswrapper[4867]: I0126 11:39:40.227845 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/28c6ba40-9005-44eb-ba98-a34bbd586d3c-log-httpd\") pod \"ceilometer-0\" (UID: \"28c6ba40-9005-44eb-ba98-a34bbd586d3c\") " pod="openstack/ceilometer-0" Jan 26 11:39:40 crc kubenswrapper[4867]: I0126 11:39:40.227878 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rxh44\" (UniqueName: \"kubernetes.io/projected/28c6ba40-9005-44eb-ba98-a34bbd586d3c-kube-api-access-rxh44\") pod \"ceilometer-0\" (UID: \"28c6ba40-9005-44eb-ba98-a34bbd586d3c\") " pod="openstack/ceilometer-0" Jan 26 11:39:40 crc kubenswrapper[4867]: I0126 11:39:40.227903 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/28c6ba40-9005-44eb-ba98-a34bbd586d3c-scripts\") pod \"ceilometer-0\" (UID: \"28c6ba40-9005-44eb-ba98-a34bbd586d3c\") " pod="openstack/ceilometer-0" Jan 26 11:39:40 crc kubenswrapper[4867]: I0126 11:39:40.227918 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/28c6ba40-9005-44eb-ba98-a34bbd586d3c-run-httpd\") pod 
\"ceilometer-0\" (UID: \"28c6ba40-9005-44eb-ba98-a34bbd586d3c\") " pod="openstack/ceilometer-0" Jan 26 11:39:40 crc kubenswrapper[4867]: I0126 11:39:40.329429 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/28c6ba40-9005-44eb-ba98-a34bbd586d3c-log-httpd\") pod \"ceilometer-0\" (UID: \"28c6ba40-9005-44eb-ba98-a34bbd586d3c\") " pod="openstack/ceilometer-0" Jan 26 11:39:40 crc kubenswrapper[4867]: I0126 11:39:40.329502 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rxh44\" (UniqueName: \"kubernetes.io/projected/28c6ba40-9005-44eb-ba98-a34bbd586d3c-kube-api-access-rxh44\") pod \"ceilometer-0\" (UID: \"28c6ba40-9005-44eb-ba98-a34bbd586d3c\") " pod="openstack/ceilometer-0" Jan 26 11:39:40 crc kubenswrapper[4867]: I0126 11:39:40.329526 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/28c6ba40-9005-44eb-ba98-a34bbd586d3c-scripts\") pod \"ceilometer-0\" (UID: \"28c6ba40-9005-44eb-ba98-a34bbd586d3c\") " pod="openstack/ceilometer-0" Jan 26 11:39:40 crc kubenswrapper[4867]: I0126 11:39:40.329542 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/28c6ba40-9005-44eb-ba98-a34bbd586d3c-run-httpd\") pod \"ceilometer-0\" (UID: \"28c6ba40-9005-44eb-ba98-a34bbd586d3c\") " pod="openstack/ceilometer-0" Jan 26 11:39:40 crc kubenswrapper[4867]: I0126 11:39:40.329571 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/28c6ba40-9005-44eb-ba98-a34bbd586d3c-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"28c6ba40-9005-44eb-ba98-a34bbd586d3c\") " pod="openstack/ceilometer-0" Jan 26 11:39:40 crc kubenswrapper[4867]: I0126 11:39:40.329615 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/28c6ba40-9005-44eb-ba98-a34bbd586d3c-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"28c6ba40-9005-44eb-ba98-a34bbd586d3c\") " pod="openstack/ceilometer-0" Jan 26 11:39:40 crc kubenswrapper[4867]: I0126 11:39:40.329637 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/28c6ba40-9005-44eb-ba98-a34bbd586d3c-config-data\") pod \"ceilometer-0\" (UID: \"28c6ba40-9005-44eb-ba98-a34bbd586d3c\") " pod="openstack/ceilometer-0" Jan 26 11:39:40 crc kubenswrapper[4867]: I0126 11:39:40.329657 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/28c6ba40-9005-44eb-ba98-a34bbd586d3c-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"28c6ba40-9005-44eb-ba98-a34bbd586d3c\") " pod="openstack/ceilometer-0" Jan 26 11:39:40 crc kubenswrapper[4867]: I0126 11:39:40.330115 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/28c6ba40-9005-44eb-ba98-a34bbd586d3c-log-httpd\") pod \"ceilometer-0\" (UID: \"28c6ba40-9005-44eb-ba98-a34bbd586d3c\") " pod="openstack/ceilometer-0" Jan 26 11:39:40 crc kubenswrapper[4867]: I0126 11:39:40.330660 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/28c6ba40-9005-44eb-ba98-a34bbd586d3c-run-httpd\") pod \"ceilometer-0\" (UID: \"28c6ba40-9005-44eb-ba98-a34bbd586d3c\") " pod="openstack/ceilometer-0" Jan 26 11:39:40 crc kubenswrapper[4867]: I0126 11:39:40.334957 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/28c6ba40-9005-44eb-ba98-a34bbd586d3c-scripts\") pod \"ceilometer-0\" (UID: \"28c6ba40-9005-44eb-ba98-a34bbd586d3c\") " pod="openstack/ceilometer-0" Jan 26 11:39:40 crc kubenswrapper[4867]: I0126 
11:39:40.335046 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/28c6ba40-9005-44eb-ba98-a34bbd586d3c-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"28c6ba40-9005-44eb-ba98-a34bbd586d3c\") " pod="openstack/ceilometer-0" Jan 26 11:39:40 crc kubenswrapper[4867]: I0126 11:39:40.335508 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/28c6ba40-9005-44eb-ba98-a34bbd586d3c-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"28c6ba40-9005-44eb-ba98-a34bbd586d3c\") " pod="openstack/ceilometer-0" Jan 26 11:39:40 crc kubenswrapper[4867]: I0126 11:39:40.335710 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/28c6ba40-9005-44eb-ba98-a34bbd586d3c-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"28c6ba40-9005-44eb-ba98-a34bbd586d3c\") " pod="openstack/ceilometer-0" Jan 26 11:39:40 crc kubenswrapper[4867]: I0126 11:39:40.342662 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/28c6ba40-9005-44eb-ba98-a34bbd586d3c-config-data\") pod \"ceilometer-0\" (UID: \"28c6ba40-9005-44eb-ba98-a34bbd586d3c\") " pod="openstack/ceilometer-0" Jan 26 11:39:40 crc kubenswrapper[4867]: I0126 11:39:40.347803 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rxh44\" (UniqueName: \"kubernetes.io/projected/28c6ba40-9005-44eb-ba98-a34bbd586d3c-kube-api-access-rxh44\") pod \"ceilometer-0\" (UID: \"28c6ba40-9005-44eb-ba98-a34bbd586d3c\") " pod="openstack/ceilometer-0" Jan 26 11:39:40 crc kubenswrapper[4867]: I0126 11:39:40.363064 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 26 11:39:40 crc kubenswrapper[4867]: I0126 11:39:40.574509 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f2527494-e18a-4e87-ab80-0b922ad79c65" path="/var/lib/kubelet/pods/f2527494-e18a-4e87-ab80-0b922ad79c65/volumes" Jan 26 11:39:40 crc kubenswrapper[4867]: I0126 11:39:40.827800 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 26 11:39:40 crc kubenswrapper[4867]: I0126 11:39:40.953445 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"28c6ba40-9005-44eb-ba98-a34bbd586d3c","Type":"ContainerStarted","Data":"9e059a0ea4e8b5ab70fff195daecc2b2f34f1301b78d759f185fde7d5e3a9dd1"} Jan 26 11:39:40 crc kubenswrapper[4867]: I0126 11:39:40.960985 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Jan 26 11:39:42 crc kubenswrapper[4867]: I0126 11:39:42.909595 4867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 26 11:39:42 crc kubenswrapper[4867]: I0126 11:39:42.978801 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"28c6ba40-9005-44eb-ba98-a34bbd586d3c","Type":"ContainerStarted","Data":"5741a90caa8978651ab1c442225976ab5e3a51cec859cba5e76a70dfed46eb84"} Jan 26 11:39:42 crc kubenswrapper[4867]: I0126 11:39:42.980830 4867 generic.go:334] "Generic (PLEG): container finished" podID="64ab41b4-2174-46ff-bb95-6c661105a1ec" containerID="39646214301efaff24258c058db6b4913010fe424a5c011a7ca3633d6e73d1d7" exitCode=137 Jan 26 11:39:42 crc kubenswrapper[4867]: I0126 11:39:42.980894 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"64ab41b4-2174-46ff-bb95-6c661105a1ec","Type":"ContainerDied","Data":"39646214301efaff24258c058db6b4913010fe424a5c011a7ca3633d6e73d1d7"} Jan 26 11:39:42 crc kubenswrapper[4867]: I0126 11:39:42.980933 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"64ab41b4-2174-46ff-bb95-6c661105a1ec","Type":"ContainerDied","Data":"85f0c51763eea3a8512b4049aaf3f41f4de556a17023a3d47e2662590003cfa4"} Jan 26 11:39:42 crc kubenswrapper[4867]: I0126 11:39:42.980936 4867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 26 11:39:42 crc kubenswrapper[4867]: I0126 11:39:42.980958 4867 scope.go:117] "RemoveContainer" containerID="39646214301efaff24258c058db6b4913010fe424a5c011a7ca3633d6e73d1d7" Jan 26 11:39:43 crc kubenswrapper[4867]: I0126 11:39:43.065387 4867 scope.go:117] "RemoveContainer" containerID="39646214301efaff24258c058db6b4913010fe424a5c011a7ca3633d6e73d1d7" Jan 26 11:39:43 crc kubenswrapper[4867]: E0126 11:39:43.065914 4867 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"39646214301efaff24258c058db6b4913010fe424a5c011a7ca3633d6e73d1d7\": container with ID starting with 39646214301efaff24258c058db6b4913010fe424a5c011a7ca3633d6e73d1d7 not found: ID does not exist" containerID="39646214301efaff24258c058db6b4913010fe424a5c011a7ca3633d6e73d1d7" Jan 26 11:39:43 crc kubenswrapper[4867]: I0126 11:39:43.065955 4867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"39646214301efaff24258c058db6b4913010fe424a5c011a7ca3633d6e73d1d7"} err="failed to get container status \"39646214301efaff24258c058db6b4913010fe424a5c011a7ca3633d6e73d1d7\": rpc error: code = NotFound desc = could not find container \"39646214301efaff24258c058db6b4913010fe424a5c011a7ca3633d6e73d1d7\": container with ID starting with 39646214301efaff24258c058db6b4913010fe424a5c011a7ca3633d6e73d1d7 not found: ID does not exist" Jan 26 11:39:43 crc kubenswrapper[4867]: I0126 11:39:43.079489 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ironic-neutron-agent-795fb7c76b-9ndwh" Jan 26 11:39:43 crc kubenswrapper[4867]: I0126 11:39:43.081131 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z9zkg\" (UniqueName: \"kubernetes.io/projected/64ab41b4-2174-46ff-bb95-6c661105a1ec-kube-api-access-z9zkg\") pod \"64ab41b4-2174-46ff-bb95-6c661105a1ec\" (UID: 
\"64ab41b4-2174-46ff-bb95-6c661105a1ec\") " Jan 26 11:39:43 crc kubenswrapper[4867]: I0126 11:39:43.081324 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/64ab41b4-2174-46ff-bb95-6c661105a1ec-combined-ca-bundle\") pod \"64ab41b4-2174-46ff-bb95-6c661105a1ec\" (UID: \"64ab41b4-2174-46ff-bb95-6c661105a1ec\") " Jan 26 11:39:43 crc kubenswrapper[4867]: I0126 11:39:43.081651 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/64ab41b4-2174-46ff-bb95-6c661105a1ec-config-data\") pod \"64ab41b4-2174-46ff-bb95-6c661105a1ec\" (UID: \"64ab41b4-2174-46ff-bb95-6c661105a1ec\") " Jan 26 11:39:43 crc kubenswrapper[4867]: I0126 11:39:43.085778 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/64ab41b4-2174-46ff-bb95-6c661105a1ec-kube-api-access-z9zkg" (OuterVolumeSpecName: "kube-api-access-z9zkg") pod "64ab41b4-2174-46ff-bb95-6c661105a1ec" (UID: "64ab41b4-2174-46ff-bb95-6c661105a1ec"). InnerVolumeSpecName "kube-api-access-z9zkg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 11:39:43 crc kubenswrapper[4867]: I0126 11:39:43.110388 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/64ab41b4-2174-46ff-bb95-6c661105a1ec-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "64ab41b4-2174-46ff-bb95-6c661105a1ec" (UID: "64ab41b4-2174-46ff-bb95-6c661105a1ec"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 11:39:43 crc kubenswrapper[4867]: I0126 11:39:43.134453 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/64ab41b4-2174-46ff-bb95-6c661105a1ec-config-data" (OuterVolumeSpecName: "config-data") pod "64ab41b4-2174-46ff-bb95-6c661105a1ec" (UID: "64ab41b4-2174-46ff-bb95-6c661105a1ec"). 
InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 11:39:43 crc kubenswrapper[4867]: I0126 11:39:43.184115 4867 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/64ab41b4-2174-46ff-bb95-6c661105a1ec-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 11:39:43 crc kubenswrapper[4867]: I0126 11:39:43.184158 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z9zkg\" (UniqueName: \"kubernetes.io/projected/64ab41b4-2174-46ff-bb95-6c661105a1ec-kube-api-access-z9zkg\") on node \"crc\" DevicePath \"\"" Jan 26 11:39:43 crc kubenswrapper[4867]: I0126 11:39:43.184173 4867 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/64ab41b4-2174-46ff-bb95-6c661105a1ec-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 11:39:43 crc kubenswrapper[4867]: I0126 11:39:43.328512 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 26 11:39:43 crc kubenswrapper[4867]: I0126 11:39:43.342076 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 26 11:39:43 crc kubenswrapper[4867]: I0126 11:39:43.366184 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 26 11:39:43 crc kubenswrapper[4867]: E0126 11:39:43.366634 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="64ab41b4-2174-46ff-bb95-6c661105a1ec" containerName="nova-cell1-novncproxy-novncproxy" Jan 26 11:39:43 crc kubenswrapper[4867]: I0126 11:39:43.366652 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="64ab41b4-2174-46ff-bb95-6c661105a1ec" containerName="nova-cell1-novncproxy-novncproxy" Jan 26 11:39:43 crc kubenswrapper[4867]: I0126 11:39:43.366898 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="64ab41b4-2174-46ff-bb95-6c661105a1ec" 
containerName="nova-cell1-novncproxy-novncproxy" Jan 26 11:39:43 crc kubenswrapper[4867]: I0126 11:39:43.367577 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 26 11:39:43 crc kubenswrapper[4867]: I0126 11:39:43.370889 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-public-svc" Jan 26 11:39:43 crc kubenswrapper[4867]: I0126 11:39:43.373990 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-vencrypt" Jan 26 11:39:43 crc kubenswrapper[4867]: I0126 11:39:43.375377 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Jan 26 11:39:43 crc kubenswrapper[4867]: I0126 11:39:43.406619 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 26 11:39:43 crc kubenswrapper[4867]: I0126 11:39:43.511687 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dc6a54c4-4229-4157-a5a0-a2089d6a7131-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"dc6a54c4-4229-4157-a5a0-a2089d6a7131\") " pod="openstack/nova-cell1-novncproxy-0" Jan 26 11:39:43 crc kubenswrapper[4867]: I0126 11:39:43.511811 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cx4zj\" (UniqueName: \"kubernetes.io/projected/dc6a54c4-4229-4157-a5a0-a2089d6a7131-kube-api-access-cx4zj\") pod \"nova-cell1-novncproxy-0\" (UID: \"dc6a54c4-4229-4157-a5a0-a2089d6a7131\") " pod="openstack/nova-cell1-novncproxy-0" Jan 26 11:39:43 crc kubenswrapper[4867]: I0126 11:39:43.511839 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"vencrypt-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/dc6a54c4-4229-4157-a5a0-a2089d6a7131-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"dc6a54c4-4229-4157-a5a0-a2089d6a7131\") " pod="openstack/nova-cell1-novncproxy-0" Jan 26 11:39:43 crc kubenswrapper[4867]: I0126 11:39:43.511858 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/dc6a54c4-4229-4157-a5a0-a2089d6a7131-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"dc6a54c4-4229-4157-a5a0-a2089d6a7131\") " pod="openstack/nova-cell1-novncproxy-0" Jan 26 11:39:43 crc kubenswrapper[4867]: I0126 11:39:43.512249 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dc6a54c4-4229-4157-a5a0-a2089d6a7131-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"dc6a54c4-4229-4157-a5a0-a2089d6a7131\") " pod="openstack/nova-cell1-novncproxy-0" Jan 26 11:39:43 crc kubenswrapper[4867]: I0126 11:39:43.614824 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dc6a54c4-4229-4157-a5a0-a2089d6a7131-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"dc6a54c4-4229-4157-a5a0-a2089d6a7131\") " pod="openstack/nova-cell1-novncproxy-0" Jan 26 11:39:43 crc kubenswrapper[4867]: I0126 11:39:43.615007 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dc6a54c4-4229-4157-a5a0-a2089d6a7131-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"dc6a54c4-4229-4157-a5a0-a2089d6a7131\") " pod="openstack/nova-cell1-novncproxy-0" Jan 26 11:39:43 crc kubenswrapper[4867]: I0126 11:39:43.615152 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cx4zj\" (UniqueName: 
\"kubernetes.io/projected/dc6a54c4-4229-4157-a5a0-a2089d6a7131-kube-api-access-cx4zj\") pod \"nova-cell1-novncproxy-0\" (UID: \"dc6a54c4-4229-4157-a5a0-a2089d6a7131\") " pod="openstack/nova-cell1-novncproxy-0" Jan 26 11:39:43 crc kubenswrapper[4867]: I0126 11:39:43.615179 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/dc6a54c4-4229-4157-a5a0-a2089d6a7131-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"dc6a54c4-4229-4157-a5a0-a2089d6a7131\") " pod="openstack/nova-cell1-novncproxy-0" Jan 26 11:39:43 crc kubenswrapper[4867]: I0126 11:39:43.615203 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/dc6a54c4-4229-4157-a5a0-a2089d6a7131-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"dc6a54c4-4229-4157-a5a0-a2089d6a7131\") " pod="openstack/nova-cell1-novncproxy-0" Jan 26 11:39:43 crc kubenswrapper[4867]: I0126 11:39:43.620894 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dc6a54c4-4229-4157-a5a0-a2089d6a7131-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"dc6a54c4-4229-4157-a5a0-a2089d6a7131\") " pod="openstack/nova-cell1-novncproxy-0" Jan 26 11:39:43 crc kubenswrapper[4867]: I0126 11:39:43.628252 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dc6a54c4-4229-4157-a5a0-a2089d6a7131-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"dc6a54c4-4229-4157-a5a0-a2089d6a7131\") " pod="openstack/nova-cell1-novncproxy-0" Jan 26 11:39:43 crc kubenswrapper[4867]: I0126 11:39:43.629095 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/dc6a54c4-4229-4157-a5a0-a2089d6a7131-vencrypt-tls-certs\") pod 
\"nova-cell1-novncproxy-0\" (UID: \"dc6a54c4-4229-4157-a5a0-a2089d6a7131\") " pod="openstack/nova-cell1-novncproxy-0" Jan 26 11:39:43 crc kubenswrapper[4867]: I0126 11:39:43.632292 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/dc6a54c4-4229-4157-a5a0-a2089d6a7131-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"dc6a54c4-4229-4157-a5a0-a2089d6a7131\") " pod="openstack/nova-cell1-novncproxy-0" Jan 26 11:39:43 crc kubenswrapper[4867]: I0126 11:39:43.659801 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cx4zj\" (UniqueName: \"kubernetes.io/projected/dc6a54c4-4229-4157-a5a0-a2089d6a7131-kube-api-access-cx4zj\") pod \"nova-cell1-novncproxy-0\" (UID: \"dc6a54c4-4229-4157-a5a0-a2089d6a7131\") " pod="openstack/nova-cell1-novncproxy-0" Jan 26 11:39:43 crc kubenswrapper[4867]: I0126 11:39:43.688998 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 26 11:39:44 crc kubenswrapper[4867]: I0126 11:39:44.005892 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"28c6ba40-9005-44eb-ba98-a34bbd586d3c","Type":"ContainerStarted","Data":"5e09fd946b9eb04b4937f6f5d7c6696afc8d3b1d5920398e3c1753a42eb1f18e"} Jan 26 11:39:44 crc kubenswrapper[4867]: I0126 11:39:44.200758 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 26 11:39:44 crc kubenswrapper[4867]: W0126 11:39:44.210366 4867 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poddc6a54c4_4229_4157_a5a0_a2089d6a7131.slice/crio-138e6035e519fd2de467016a303f7380e20e7b91f11b01d8489fd3f9651cb067 WatchSource:0}: Error finding container 138e6035e519fd2de467016a303f7380e20e7b91f11b01d8489fd3f9651cb067: Status 404 returned error can't find the container with id 138e6035e519fd2de467016a303f7380e20e7b91f11b01d8489fd3f9651cb067 Jan 26 11:39:44 crc kubenswrapper[4867]: I0126 11:39:44.432050 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Jan 26 11:39:44 crc kubenswrapper[4867]: I0126 11:39:44.574977 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="64ab41b4-2174-46ff-bb95-6c661105a1ec" path="/var/lib/kubelet/pods/64ab41b4-2174-46ff-bb95-6c661105a1ec/volumes" Jan 26 11:39:45 crc kubenswrapper[4867]: I0126 11:39:45.018908 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"dc6a54c4-4229-4157-a5a0-a2089d6a7131","Type":"ContainerStarted","Data":"dce8414dbf48c33b49a05850b3522862dd0d83e9ccc9f10a457c08b7f267109a"} Jan 26 11:39:45 crc kubenswrapper[4867]: I0126 11:39:45.018955 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" 
event={"ID":"dc6a54c4-4229-4157-a5a0-a2089d6a7131","Type":"ContainerStarted","Data":"138e6035e519fd2de467016a303f7380e20e7b91f11b01d8489fd3f9651cb067"} Jan 26 11:39:45 crc kubenswrapper[4867]: I0126 11:39:45.024173 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"28c6ba40-9005-44eb-ba98-a34bbd586d3c","Type":"ContainerStarted","Data":"8b3fa6e01a432cee38c177ef736b1a2c05228f369d103a702faf32b148996b57"} Jan 26 11:39:45 crc kubenswrapper[4867]: I0126 11:39:45.062003 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=2.061980351 podStartE2EDuration="2.061980351s" podCreationTimestamp="2026-01-26 11:39:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 11:39:45.04087597 +0000 UTC m=+1334.739450900" watchObservedRunningTime="2026-01-26 11:39:45.061980351 +0000 UTC m=+1334.760555281" Jan 26 11:39:46 crc kubenswrapper[4867]: I0126 11:39:46.034819 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"28c6ba40-9005-44eb-ba98-a34bbd586d3c","Type":"ContainerStarted","Data":"ff768346f797cd880593c2a3e29b59c296c2bac2e125d4e76a6e0ae095c16cd4"} Jan 26 11:39:47 crc kubenswrapper[4867]: I0126 11:39:47.042816 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 26 11:39:47 crc kubenswrapper[4867]: I0126 11:39:47.067801 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.866767206 podStartE2EDuration="7.067775474s" podCreationTimestamp="2026-01-26 11:39:40 +0000 UTC" firstStartedPulling="2026-01-26 11:39:40.838342801 +0000 UTC m=+1330.536917711" lastFinishedPulling="2026-01-26 11:39:45.039351069 +0000 UTC m=+1334.737925979" observedRunningTime="2026-01-26 11:39:47.061409179 +0000 UTC m=+1336.759984089" 
watchObservedRunningTime="2026-01-26 11:39:47.067775474 +0000 UTC m=+1336.766350384" Jan 26 11:39:47 crc kubenswrapper[4867]: I0126 11:39:47.250538 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Jan 26 11:39:47 crc kubenswrapper[4867]: I0126 11:39:47.251779 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Jan 26 11:39:47 crc kubenswrapper[4867]: I0126 11:39:47.255916 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Jan 26 11:39:47 crc kubenswrapper[4867]: I0126 11:39:47.257506 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Jan 26 11:39:48 crc kubenswrapper[4867]: I0126 11:39:48.050588 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Jan 26 11:39:48 crc kubenswrapper[4867]: I0126 11:39:48.053867 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Jan 26 11:39:48 crc kubenswrapper[4867]: I0126 11:39:48.213616 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-cd5cbd7b9-9fj8m"] Jan 26 11:39:48 crc kubenswrapper[4867]: I0126 11:39:48.216282 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-cd5cbd7b9-9fj8m" Jan 26 11:39:48 crc kubenswrapper[4867]: I0126 11:39:48.258167 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-cd5cbd7b9-9fj8m"] Jan 26 11:39:48 crc kubenswrapper[4867]: I0126 11:39:48.315964 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/facba8bd-34c0-43a2-a31b-cc7a6ff17ba2-config\") pod \"dnsmasq-dns-cd5cbd7b9-9fj8m\" (UID: \"facba8bd-34c0-43a2-a31b-cc7a6ff17ba2\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-9fj8m" Jan 26 11:39:48 crc kubenswrapper[4867]: I0126 11:39:48.316103 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/facba8bd-34c0-43a2-a31b-cc7a6ff17ba2-ovsdbserver-sb\") pod \"dnsmasq-dns-cd5cbd7b9-9fj8m\" (UID: \"facba8bd-34c0-43a2-a31b-cc7a6ff17ba2\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-9fj8m" Jan 26 11:39:48 crc kubenswrapper[4867]: I0126 11:39:48.316199 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/facba8bd-34c0-43a2-a31b-cc7a6ff17ba2-ovsdbserver-nb\") pod \"dnsmasq-dns-cd5cbd7b9-9fj8m\" (UID: \"facba8bd-34c0-43a2-a31b-cc7a6ff17ba2\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-9fj8m" Jan 26 11:39:48 crc kubenswrapper[4867]: I0126 11:39:48.316292 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g9v47\" (UniqueName: \"kubernetes.io/projected/facba8bd-34c0-43a2-a31b-cc7a6ff17ba2-kube-api-access-g9v47\") pod \"dnsmasq-dns-cd5cbd7b9-9fj8m\" (UID: \"facba8bd-34c0-43a2-a31b-cc7a6ff17ba2\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-9fj8m" Jan 26 11:39:48 crc kubenswrapper[4867]: I0126 11:39:48.316346 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/facba8bd-34c0-43a2-a31b-cc7a6ff17ba2-dns-swift-storage-0\") pod \"dnsmasq-dns-cd5cbd7b9-9fj8m\" (UID: \"facba8bd-34c0-43a2-a31b-cc7a6ff17ba2\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-9fj8m" Jan 26 11:39:48 crc kubenswrapper[4867]: I0126 11:39:48.316423 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/facba8bd-34c0-43a2-a31b-cc7a6ff17ba2-dns-svc\") pod \"dnsmasq-dns-cd5cbd7b9-9fj8m\" (UID: \"facba8bd-34c0-43a2-a31b-cc7a6ff17ba2\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-9fj8m" Jan 26 11:39:48 crc kubenswrapper[4867]: I0126 11:39:48.418712 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/facba8bd-34c0-43a2-a31b-cc7a6ff17ba2-config\") pod \"dnsmasq-dns-cd5cbd7b9-9fj8m\" (UID: \"facba8bd-34c0-43a2-a31b-cc7a6ff17ba2\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-9fj8m" Jan 26 11:39:48 crc kubenswrapper[4867]: I0126 11:39:48.418792 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/facba8bd-34c0-43a2-a31b-cc7a6ff17ba2-ovsdbserver-sb\") pod \"dnsmasq-dns-cd5cbd7b9-9fj8m\" (UID: \"facba8bd-34c0-43a2-a31b-cc7a6ff17ba2\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-9fj8m" Jan 26 11:39:48 crc kubenswrapper[4867]: I0126 11:39:48.418917 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/facba8bd-34c0-43a2-a31b-cc7a6ff17ba2-ovsdbserver-nb\") pod \"dnsmasq-dns-cd5cbd7b9-9fj8m\" (UID: \"facba8bd-34c0-43a2-a31b-cc7a6ff17ba2\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-9fj8m" Jan 26 11:39:48 crc kubenswrapper[4867]: I0126 11:39:48.419628 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/facba8bd-34c0-43a2-a31b-cc7a6ff17ba2-config\") pod \"dnsmasq-dns-cd5cbd7b9-9fj8m\" (UID: \"facba8bd-34c0-43a2-a31b-cc7a6ff17ba2\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-9fj8m" Jan 26 11:39:48 crc kubenswrapper[4867]: I0126 11:39:48.419919 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/facba8bd-34c0-43a2-a31b-cc7a6ff17ba2-ovsdbserver-sb\") pod \"dnsmasq-dns-cd5cbd7b9-9fj8m\" (UID: \"facba8bd-34c0-43a2-a31b-cc7a6ff17ba2\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-9fj8m" Jan 26 11:39:48 crc kubenswrapper[4867]: I0126 11:39:48.420302 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/facba8bd-34c0-43a2-a31b-cc7a6ff17ba2-ovsdbserver-nb\") pod \"dnsmasq-dns-cd5cbd7b9-9fj8m\" (UID: \"facba8bd-34c0-43a2-a31b-cc7a6ff17ba2\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-9fj8m" Jan 26 11:39:48 crc kubenswrapper[4867]: I0126 11:39:48.420802 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g9v47\" (UniqueName: \"kubernetes.io/projected/facba8bd-34c0-43a2-a31b-cc7a6ff17ba2-kube-api-access-g9v47\") pod \"dnsmasq-dns-cd5cbd7b9-9fj8m\" (UID: \"facba8bd-34c0-43a2-a31b-cc7a6ff17ba2\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-9fj8m" Jan 26 11:39:48 crc kubenswrapper[4867]: I0126 11:39:48.420884 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/facba8bd-34c0-43a2-a31b-cc7a6ff17ba2-dns-swift-storage-0\") pod \"dnsmasq-dns-cd5cbd7b9-9fj8m\" (UID: \"facba8bd-34c0-43a2-a31b-cc7a6ff17ba2\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-9fj8m" Jan 26 11:39:48 crc kubenswrapper[4867]: I0126 11:39:48.420994 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: 
\"kubernetes.io/configmap/facba8bd-34c0-43a2-a31b-cc7a6ff17ba2-dns-svc\") pod \"dnsmasq-dns-cd5cbd7b9-9fj8m\" (UID: \"facba8bd-34c0-43a2-a31b-cc7a6ff17ba2\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-9fj8m" Jan 26 11:39:48 crc kubenswrapper[4867]: I0126 11:39:48.421521 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/facba8bd-34c0-43a2-a31b-cc7a6ff17ba2-dns-swift-storage-0\") pod \"dnsmasq-dns-cd5cbd7b9-9fj8m\" (UID: \"facba8bd-34c0-43a2-a31b-cc7a6ff17ba2\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-9fj8m" Jan 26 11:39:48 crc kubenswrapper[4867]: I0126 11:39:48.422073 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/facba8bd-34c0-43a2-a31b-cc7a6ff17ba2-dns-svc\") pod \"dnsmasq-dns-cd5cbd7b9-9fj8m\" (UID: \"facba8bd-34c0-43a2-a31b-cc7a6ff17ba2\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-9fj8m" Jan 26 11:39:48 crc kubenswrapper[4867]: I0126 11:39:48.444907 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g9v47\" (UniqueName: \"kubernetes.io/projected/facba8bd-34c0-43a2-a31b-cc7a6ff17ba2-kube-api-access-g9v47\") pod \"dnsmasq-dns-cd5cbd7b9-9fj8m\" (UID: \"facba8bd-34c0-43a2-a31b-cc7a6ff17ba2\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-9fj8m" Jan 26 11:39:48 crc kubenswrapper[4867]: I0126 11:39:48.540402 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-cd5cbd7b9-9fj8m" Jan 26 11:39:48 crc kubenswrapper[4867]: I0126 11:39:48.689171 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Jan 26 11:39:49 crc kubenswrapper[4867]: I0126 11:39:49.039182 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-cd5cbd7b9-9fj8m"] Jan 26 11:39:49 crc kubenswrapper[4867]: I0126 11:39:49.049352 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ironic-conductor-0" Jan 26 11:39:50 crc kubenswrapper[4867]: I0126 11:39:50.072837 4867 generic.go:334] "Generic (PLEG): container finished" podID="facba8bd-34c0-43a2-a31b-cc7a6ff17ba2" containerID="e1aa66cbdbed113aac3ee11e81502b1681778ef1245941eda30ff1f4dc969a11" exitCode=0 Jan 26 11:39:50 crc kubenswrapper[4867]: I0126 11:39:50.072875 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-cd5cbd7b9-9fj8m" event={"ID":"facba8bd-34c0-43a2-a31b-cc7a6ff17ba2","Type":"ContainerDied","Data":"e1aa66cbdbed113aac3ee11e81502b1681778ef1245941eda30ff1f4dc969a11"} Jan 26 11:39:50 crc kubenswrapper[4867]: I0126 11:39:50.073303 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-cd5cbd7b9-9fj8m" event={"ID":"facba8bd-34c0-43a2-a31b-cc7a6ff17ba2","Type":"ContainerStarted","Data":"b8cc1b1814b3ed619be467cb4dce5359abb91c05c72f45e55d32dc2ff28c5067"} Jan 26 11:39:50 crc kubenswrapper[4867]: I0126 11:39:50.792674 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 26 11:39:51 crc kubenswrapper[4867]: I0126 11:39:51.084678 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-cd5cbd7b9-9fj8m" event={"ID":"facba8bd-34c0-43a2-a31b-cc7a6ff17ba2","Type":"ContainerStarted","Data":"d6dfe280a2c64b8104d9c60b11f1444270a5c49bcbda75f5e3b9c4ecdc1323cc"} Jan 26 11:39:51 crc kubenswrapper[4867]: I0126 11:39:51.084834 4867 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="ebbccea9-6788-4c17-b9f9-776d3e41b6f5" containerName="nova-api-log" containerID="cri-o://b64f7a88cb71338ee06263c343e81ca1e1375e968dd3a89921436735fea6c32f" gracePeriod=30 Jan 26 11:39:51 crc kubenswrapper[4867]: I0126 11:39:51.084898 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="ebbccea9-6788-4c17-b9f9-776d3e41b6f5" containerName="nova-api-api" containerID="cri-o://6a05bee3cff38ae0acf1cf2764c7014efaf5f1a99c328e8923b64607ae6f8f3f" gracePeriod=30 Jan 26 11:39:51 crc kubenswrapper[4867]: I0126 11:39:51.084958 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-cd5cbd7b9-9fj8m" Jan 26 11:39:51 crc kubenswrapper[4867]: I0126 11:39:51.113767 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-cd5cbd7b9-9fj8m" podStartSLOduration=3.113751746 podStartE2EDuration="3.113751746s" podCreationTimestamp="2026-01-26 11:39:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 11:39:51.103871925 +0000 UTC m=+1340.802446835" watchObservedRunningTime="2026-01-26 11:39:51.113751746 +0000 UTC m=+1340.812326656" Jan 26 11:39:51 crc kubenswrapper[4867]: I0126 11:39:51.477161 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 26 11:39:51 crc kubenswrapper[4867]: I0126 11:39:51.477528 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="28c6ba40-9005-44eb-ba98-a34bbd586d3c" containerName="ceilometer-central-agent" containerID="cri-o://5741a90caa8978651ab1c442225976ab5e3a51cec859cba5e76a70dfed46eb84" gracePeriod=30 Jan 26 11:39:51 crc kubenswrapper[4867]: I0126 11:39:51.477622 4867 kuberuntime_container.go:808] "Killing container with a grace 
period" pod="openstack/ceilometer-0" podUID="28c6ba40-9005-44eb-ba98-a34bbd586d3c" containerName="proxy-httpd" containerID="cri-o://ff768346f797cd880593c2a3e29b59c296c2bac2e125d4e76a6e0ae095c16cd4" gracePeriod=30 Jan 26 11:39:51 crc kubenswrapper[4867]: I0126 11:39:51.477630 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="28c6ba40-9005-44eb-ba98-a34bbd586d3c" containerName="sg-core" containerID="cri-o://8b3fa6e01a432cee38c177ef736b1a2c05228f369d103a702faf32b148996b57" gracePeriod=30 Jan 26 11:39:51 crc kubenswrapper[4867]: I0126 11:39:51.477653 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="28c6ba40-9005-44eb-ba98-a34bbd586d3c" containerName="ceilometer-notification-agent" containerID="cri-o://5e09fd946b9eb04b4937f6f5d7c6696afc8d3b1d5920398e3c1753a42eb1f18e" gracePeriod=30 Jan 26 11:39:52 crc kubenswrapper[4867]: I0126 11:39:52.098062 4867 generic.go:334] "Generic (PLEG): container finished" podID="ebbccea9-6788-4c17-b9f9-776d3e41b6f5" containerID="b64f7a88cb71338ee06263c343e81ca1e1375e968dd3a89921436735fea6c32f" exitCode=143 Jan 26 11:39:52 crc kubenswrapper[4867]: I0126 11:39:52.098153 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"ebbccea9-6788-4c17-b9f9-776d3e41b6f5","Type":"ContainerDied","Data":"b64f7a88cb71338ee06263c343e81ca1e1375e968dd3a89921436735fea6c32f"} Jan 26 11:39:52 crc kubenswrapper[4867]: I0126 11:39:52.102115 4867 generic.go:334] "Generic (PLEG): container finished" podID="28c6ba40-9005-44eb-ba98-a34bbd586d3c" containerID="ff768346f797cd880593c2a3e29b59c296c2bac2e125d4e76a6e0ae095c16cd4" exitCode=0 Jan 26 11:39:52 crc kubenswrapper[4867]: I0126 11:39:52.102165 4867 generic.go:334] "Generic (PLEG): container finished" podID="28c6ba40-9005-44eb-ba98-a34bbd586d3c" containerID="8b3fa6e01a432cee38c177ef736b1a2c05228f369d103a702faf32b148996b57" exitCode=2 Jan 26 11:39:52 crc 
kubenswrapper[4867]: I0126 11:39:52.102176 4867 generic.go:334] "Generic (PLEG): container finished" podID="28c6ba40-9005-44eb-ba98-a34bbd586d3c" containerID="5e09fd946b9eb04b4937f6f5d7c6696afc8d3b1d5920398e3c1753a42eb1f18e" exitCode=0 Jan 26 11:39:52 crc kubenswrapper[4867]: I0126 11:39:52.102198 4867 generic.go:334] "Generic (PLEG): container finished" podID="28c6ba40-9005-44eb-ba98-a34bbd586d3c" containerID="5741a90caa8978651ab1c442225976ab5e3a51cec859cba5e76a70dfed46eb84" exitCode=0 Jan 26 11:39:52 crc kubenswrapper[4867]: I0126 11:39:52.102169 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"28c6ba40-9005-44eb-ba98-a34bbd586d3c","Type":"ContainerDied","Data":"ff768346f797cd880593c2a3e29b59c296c2bac2e125d4e76a6e0ae095c16cd4"} Jan 26 11:39:52 crc kubenswrapper[4867]: I0126 11:39:52.102274 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"28c6ba40-9005-44eb-ba98-a34bbd586d3c","Type":"ContainerDied","Data":"8b3fa6e01a432cee38c177ef736b1a2c05228f369d103a702faf32b148996b57"} Jan 26 11:39:52 crc kubenswrapper[4867]: I0126 11:39:52.102295 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"28c6ba40-9005-44eb-ba98-a34bbd586d3c","Type":"ContainerDied","Data":"5e09fd946b9eb04b4937f6f5d7c6696afc8d3b1d5920398e3c1753a42eb1f18e"} Jan 26 11:39:52 crc kubenswrapper[4867]: I0126 11:39:52.102305 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"28c6ba40-9005-44eb-ba98-a34bbd586d3c","Type":"ContainerDied","Data":"5741a90caa8978651ab1c442225976ab5e3a51cec859cba5e76a70dfed46eb84"} Jan 26 11:39:52 crc kubenswrapper[4867]: I0126 11:39:52.324406 4867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 26 11:39:52 crc kubenswrapper[4867]: I0126 11:39:52.525424 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/28c6ba40-9005-44eb-ba98-a34bbd586d3c-ceilometer-tls-certs\") pod \"28c6ba40-9005-44eb-ba98-a34bbd586d3c\" (UID: \"28c6ba40-9005-44eb-ba98-a34bbd586d3c\") " Jan 26 11:39:52 crc kubenswrapper[4867]: I0126 11:39:52.525484 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/28c6ba40-9005-44eb-ba98-a34bbd586d3c-log-httpd\") pod \"28c6ba40-9005-44eb-ba98-a34bbd586d3c\" (UID: \"28c6ba40-9005-44eb-ba98-a34bbd586d3c\") " Jan 26 11:39:52 crc kubenswrapper[4867]: I0126 11:39:52.525530 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/28c6ba40-9005-44eb-ba98-a34bbd586d3c-run-httpd\") pod \"28c6ba40-9005-44eb-ba98-a34bbd586d3c\" (UID: \"28c6ba40-9005-44eb-ba98-a34bbd586d3c\") " Jan 26 11:39:52 crc kubenswrapper[4867]: I0126 11:39:52.525904 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/28c6ba40-9005-44eb-ba98-a34bbd586d3c-config-data\") pod \"28c6ba40-9005-44eb-ba98-a34bbd586d3c\" (UID: \"28c6ba40-9005-44eb-ba98-a34bbd586d3c\") " Jan 26 11:39:52 crc kubenswrapper[4867]: I0126 11:39:52.526091 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/28c6ba40-9005-44eb-ba98-a34bbd586d3c-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "28c6ba40-9005-44eb-ba98-a34bbd586d3c" (UID: "28c6ba40-9005-44eb-ba98-a34bbd586d3c"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 11:39:52 crc kubenswrapper[4867]: I0126 11:39:52.526100 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/28c6ba40-9005-44eb-ba98-a34bbd586d3c-scripts\") pod \"28c6ba40-9005-44eb-ba98-a34bbd586d3c\" (UID: \"28c6ba40-9005-44eb-ba98-a34bbd586d3c\") " Jan 26 11:39:52 crc kubenswrapper[4867]: I0126 11:39:52.526142 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/28c6ba40-9005-44eb-ba98-a34bbd586d3c-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "28c6ba40-9005-44eb-ba98-a34bbd586d3c" (UID: "28c6ba40-9005-44eb-ba98-a34bbd586d3c"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 11:39:52 crc kubenswrapper[4867]: I0126 11:39:52.526154 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rxh44\" (UniqueName: \"kubernetes.io/projected/28c6ba40-9005-44eb-ba98-a34bbd586d3c-kube-api-access-rxh44\") pod \"28c6ba40-9005-44eb-ba98-a34bbd586d3c\" (UID: \"28c6ba40-9005-44eb-ba98-a34bbd586d3c\") " Jan 26 11:39:52 crc kubenswrapper[4867]: I0126 11:39:52.526325 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/28c6ba40-9005-44eb-ba98-a34bbd586d3c-sg-core-conf-yaml\") pod \"28c6ba40-9005-44eb-ba98-a34bbd586d3c\" (UID: \"28c6ba40-9005-44eb-ba98-a34bbd586d3c\") " Jan 26 11:39:52 crc kubenswrapper[4867]: I0126 11:39:52.526412 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/28c6ba40-9005-44eb-ba98-a34bbd586d3c-combined-ca-bundle\") pod \"28c6ba40-9005-44eb-ba98-a34bbd586d3c\" (UID: \"28c6ba40-9005-44eb-ba98-a34bbd586d3c\") " Jan 26 11:39:52 crc kubenswrapper[4867]: I0126 11:39:52.527651 4867 reconciler_common.go:293] 
"Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/28c6ba40-9005-44eb-ba98-a34bbd586d3c-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 26 11:39:52 crc kubenswrapper[4867]: I0126 11:39:52.527692 4867 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/28c6ba40-9005-44eb-ba98-a34bbd586d3c-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 26 11:39:52 crc kubenswrapper[4867]: I0126 11:39:52.532289 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/28c6ba40-9005-44eb-ba98-a34bbd586d3c-kube-api-access-rxh44" (OuterVolumeSpecName: "kube-api-access-rxh44") pod "28c6ba40-9005-44eb-ba98-a34bbd586d3c" (UID: "28c6ba40-9005-44eb-ba98-a34bbd586d3c"). InnerVolumeSpecName "kube-api-access-rxh44". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 11:39:52 crc kubenswrapper[4867]: I0126 11:39:52.556181 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/28c6ba40-9005-44eb-ba98-a34bbd586d3c-scripts" (OuterVolumeSpecName: "scripts") pod "28c6ba40-9005-44eb-ba98-a34bbd586d3c" (UID: "28c6ba40-9005-44eb-ba98-a34bbd586d3c"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 11:39:52 crc kubenswrapper[4867]: I0126 11:39:52.580798 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/28c6ba40-9005-44eb-ba98-a34bbd586d3c-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "28c6ba40-9005-44eb-ba98-a34bbd586d3c" (UID: "28c6ba40-9005-44eb-ba98-a34bbd586d3c"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 11:39:52 crc kubenswrapper[4867]: I0126 11:39:52.629842 4867 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/28c6ba40-9005-44eb-ba98-a34bbd586d3c-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 26 11:39:52 crc kubenswrapper[4867]: I0126 11:39:52.629872 4867 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/28c6ba40-9005-44eb-ba98-a34bbd586d3c-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 11:39:52 crc kubenswrapper[4867]: I0126 11:39:52.629883 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rxh44\" (UniqueName: \"kubernetes.io/projected/28c6ba40-9005-44eb-ba98-a34bbd586d3c-kube-api-access-rxh44\") on node \"crc\" DevicePath \"\"" Jan 26 11:39:52 crc kubenswrapper[4867]: I0126 11:39:52.639596 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/28c6ba40-9005-44eb-ba98-a34bbd586d3c-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "28c6ba40-9005-44eb-ba98-a34bbd586d3c" (UID: "28c6ba40-9005-44eb-ba98-a34bbd586d3c"). InnerVolumeSpecName "ceilometer-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 11:39:52 crc kubenswrapper[4867]: I0126 11:39:52.654881 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/28c6ba40-9005-44eb-ba98-a34bbd586d3c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "28c6ba40-9005-44eb-ba98-a34bbd586d3c" (UID: "28c6ba40-9005-44eb-ba98-a34bbd586d3c"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 11:39:52 crc kubenswrapper[4867]: I0126 11:39:52.674741 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/28c6ba40-9005-44eb-ba98-a34bbd586d3c-config-data" (OuterVolumeSpecName: "config-data") pod "28c6ba40-9005-44eb-ba98-a34bbd586d3c" (UID: "28c6ba40-9005-44eb-ba98-a34bbd586d3c"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 11:39:52 crc kubenswrapper[4867]: I0126 11:39:52.731692 4867 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/28c6ba40-9005-44eb-ba98-a34bbd586d3c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 11:39:52 crc kubenswrapper[4867]: I0126 11:39:52.731867 4867 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/28c6ba40-9005-44eb-ba98-a34bbd586d3c-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 26 11:39:52 crc kubenswrapper[4867]: I0126 11:39:52.731950 4867 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/28c6ba40-9005-44eb-ba98-a34bbd586d3c-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 11:39:53 crc kubenswrapper[4867]: I0126 11:39:53.113973 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"28c6ba40-9005-44eb-ba98-a34bbd586d3c","Type":"ContainerDied","Data":"9e059a0ea4e8b5ab70fff195daecc2b2f34f1301b78d759f185fde7d5e3a9dd1"} Jan 26 11:39:53 crc kubenswrapper[4867]: I0126 11:39:53.114269 4867 scope.go:117] "RemoveContainer" containerID="ff768346f797cd880593c2a3e29b59c296c2bac2e125d4e76a6e0ae095c16cd4" Jan 26 11:39:53 crc kubenswrapper[4867]: I0126 11:39:53.114079 4867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 26 11:39:53 crc kubenswrapper[4867]: I0126 11:39:53.135273 4867 scope.go:117] "RemoveContainer" containerID="8b3fa6e01a432cee38c177ef736b1a2c05228f369d103a702faf32b148996b57" Jan 26 11:39:53 crc kubenswrapper[4867]: I0126 11:39:53.149150 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 26 11:39:53 crc kubenswrapper[4867]: I0126 11:39:53.154117 4867 scope.go:117] "RemoveContainer" containerID="5e09fd946b9eb04b4937f6f5d7c6696afc8d3b1d5920398e3c1753a42eb1f18e" Jan 26 11:39:53 crc kubenswrapper[4867]: I0126 11:39:53.160107 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 26 11:39:53 crc kubenswrapper[4867]: I0126 11:39:53.188361 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 26 11:39:53 crc kubenswrapper[4867]: I0126 11:39:53.188566 4867 scope.go:117] "RemoveContainer" containerID="5741a90caa8978651ab1c442225976ab5e3a51cec859cba5e76a70dfed46eb84" Jan 26 11:39:53 crc kubenswrapper[4867]: E0126 11:39:53.188864 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="28c6ba40-9005-44eb-ba98-a34bbd586d3c" containerName="proxy-httpd" Jan 26 11:39:53 crc kubenswrapper[4867]: I0126 11:39:53.188884 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="28c6ba40-9005-44eb-ba98-a34bbd586d3c" containerName="proxy-httpd" Jan 26 11:39:53 crc kubenswrapper[4867]: E0126 11:39:53.188919 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="28c6ba40-9005-44eb-ba98-a34bbd586d3c" containerName="ceilometer-central-agent" Jan 26 11:39:53 crc kubenswrapper[4867]: I0126 11:39:53.188931 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="28c6ba40-9005-44eb-ba98-a34bbd586d3c" containerName="ceilometer-central-agent" Jan 26 11:39:53 crc kubenswrapper[4867]: E0126 11:39:53.188970 4867 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="28c6ba40-9005-44eb-ba98-a34bbd586d3c" containerName="sg-core" Jan 26 11:39:53 crc kubenswrapper[4867]: I0126 11:39:53.188979 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="28c6ba40-9005-44eb-ba98-a34bbd586d3c" containerName="sg-core" Jan 26 11:39:53 crc kubenswrapper[4867]: E0126 11:39:53.188995 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="28c6ba40-9005-44eb-ba98-a34bbd586d3c" containerName="ceilometer-notification-agent" Jan 26 11:39:53 crc kubenswrapper[4867]: I0126 11:39:53.189003 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="28c6ba40-9005-44eb-ba98-a34bbd586d3c" containerName="ceilometer-notification-agent" Jan 26 11:39:53 crc kubenswrapper[4867]: I0126 11:39:53.189206 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="28c6ba40-9005-44eb-ba98-a34bbd586d3c" containerName="ceilometer-central-agent" Jan 26 11:39:53 crc kubenswrapper[4867]: I0126 11:39:53.189252 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="28c6ba40-9005-44eb-ba98-a34bbd586d3c" containerName="sg-core" Jan 26 11:39:53 crc kubenswrapper[4867]: I0126 11:39:53.189271 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="28c6ba40-9005-44eb-ba98-a34bbd586d3c" containerName="proxy-httpd" Jan 26 11:39:53 crc kubenswrapper[4867]: I0126 11:39:53.189280 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="28c6ba40-9005-44eb-ba98-a34bbd586d3c" containerName="ceilometer-notification-agent" Jan 26 11:39:53 crc kubenswrapper[4867]: I0126 11:39:53.191295 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 26 11:39:53 crc kubenswrapper[4867]: I0126 11:39:53.196211 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 26 11:39:53 crc kubenswrapper[4867]: I0126 11:39:53.200054 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 26 11:39:53 crc kubenswrapper[4867]: I0126 11:39:53.200279 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Jan 26 11:39:53 crc kubenswrapper[4867]: I0126 11:39:53.201592 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 26 11:39:53 crc kubenswrapper[4867]: I0126 11:39:53.240415 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c9913e77-804b-402a-9f2b-dd14c46c1cac-config-data\") pod \"ceilometer-0\" (UID: \"c9913e77-804b-402a-9f2b-dd14c46c1cac\") " pod="openstack/ceilometer-0" Jan 26 11:39:53 crc kubenswrapper[4867]: I0126 11:39:53.240484 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/c9913e77-804b-402a-9f2b-dd14c46c1cac-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"c9913e77-804b-402a-9f2b-dd14c46c1cac\") " pod="openstack/ceilometer-0" Jan 26 11:39:53 crc kubenswrapper[4867]: I0126 11:39:53.240527 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c9913e77-804b-402a-9f2b-dd14c46c1cac-run-httpd\") pod \"ceilometer-0\" (UID: \"c9913e77-804b-402a-9f2b-dd14c46c1cac\") " pod="openstack/ceilometer-0" Jan 26 11:39:53 crc kubenswrapper[4867]: I0126 11:39:53.240545 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" 
(UniqueName: \"kubernetes.io/empty-dir/c9913e77-804b-402a-9f2b-dd14c46c1cac-log-httpd\") pod \"ceilometer-0\" (UID: \"c9913e77-804b-402a-9f2b-dd14c46c1cac\") " pod="openstack/ceilometer-0" Jan 26 11:39:53 crc kubenswrapper[4867]: I0126 11:39:53.240561 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/c9913e77-804b-402a-9f2b-dd14c46c1cac-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"c9913e77-804b-402a-9f2b-dd14c46c1cac\") " pod="openstack/ceilometer-0" Jan 26 11:39:53 crc kubenswrapper[4867]: I0126 11:39:53.240583 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c9913e77-804b-402a-9f2b-dd14c46c1cac-scripts\") pod \"ceilometer-0\" (UID: \"c9913e77-804b-402a-9f2b-dd14c46c1cac\") " pod="openstack/ceilometer-0" Jan 26 11:39:53 crc kubenswrapper[4867]: I0126 11:39:53.240603 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c9913e77-804b-402a-9f2b-dd14c46c1cac-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"c9913e77-804b-402a-9f2b-dd14c46c1cac\") " pod="openstack/ceilometer-0" Jan 26 11:39:53 crc kubenswrapper[4867]: I0126 11:39:53.240708 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f598h\" (UniqueName: \"kubernetes.io/projected/c9913e77-804b-402a-9f2b-dd14c46c1cac-kube-api-access-f598h\") pod \"ceilometer-0\" (UID: \"c9913e77-804b-402a-9f2b-dd14c46c1cac\") " pod="openstack/ceilometer-0" Jan 26 11:39:53 crc kubenswrapper[4867]: I0126 11:39:53.332487 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 26 11:39:53 crc kubenswrapper[4867]: E0126 11:39:53.334120 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted 
volumes=[ceilometer-tls-certs combined-ca-bundle config-data kube-api-access-f598h log-httpd run-httpd scripts sg-core-conf-yaml], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openstack/ceilometer-0" podUID="c9913e77-804b-402a-9f2b-dd14c46c1cac" Jan 26 11:39:53 crc kubenswrapper[4867]: I0126 11:39:53.349939 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/c9913e77-804b-402a-9f2b-dd14c46c1cac-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"c9913e77-804b-402a-9f2b-dd14c46c1cac\") " pod="openstack/ceilometer-0" Jan 26 11:39:53 crc kubenswrapper[4867]: I0126 11:39:53.350465 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c9913e77-804b-402a-9f2b-dd14c46c1cac-run-httpd\") pod \"ceilometer-0\" (UID: \"c9913e77-804b-402a-9f2b-dd14c46c1cac\") " pod="openstack/ceilometer-0" Jan 26 11:39:53 crc kubenswrapper[4867]: I0126 11:39:53.350503 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c9913e77-804b-402a-9f2b-dd14c46c1cac-log-httpd\") pod \"ceilometer-0\" (UID: \"c9913e77-804b-402a-9f2b-dd14c46c1cac\") " pod="openstack/ceilometer-0" Jan 26 11:39:53 crc kubenswrapper[4867]: I0126 11:39:53.350531 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/c9913e77-804b-402a-9f2b-dd14c46c1cac-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"c9913e77-804b-402a-9f2b-dd14c46c1cac\") " pod="openstack/ceilometer-0" Jan 26 11:39:53 crc kubenswrapper[4867]: I0126 11:39:53.350580 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c9913e77-804b-402a-9f2b-dd14c46c1cac-scripts\") pod \"ceilometer-0\" (UID: 
\"c9913e77-804b-402a-9f2b-dd14c46c1cac\") " pod="openstack/ceilometer-0" Jan 26 11:39:53 crc kubenswrapper[4867]: I0126 11:39:53.350612 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c9913e77-804b-402a-9f2b-dd14c46c1cac-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"c9913e77-804b-402a-9f2b-dd14c46c1cac\") " pod="openstack/ceilometer-0" Jan 26 11:39:53 crc kubenswrapper[4867]: I0126 11:39:53.350819 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f598h\" (UniqueName: \"kubernetes.io/projected/c9913e77-804b-402a-9f2b-dd14c46c1cac-kube-api-access-f598h\") pod \"ceilometer-0\" (UID: \"c9913e77-804b-402a-9f2b-dd14c46c1cac\") " pod="openstack/ceilometer-0" Jan 26 11:39:53 crc kubenswrapper[4867]: I0126 11:39:53.350971 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c9913e77-804b-402a-9f2b-dd14c46c1cac-config-data\") pod \"ceilometer-0\" (UID: \"c9913e77-804b-402a-9f2b-dd14c46c1cac\") " pod="openstack/ceilometer-0" Jan 26 11:39:53 crc kubenswrapper[4867]: I0126 11:39:53.351066 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c9913e77-804b-402a-9f2b-dd14c46c1cac-log-httpd\") pod \"ceilometer-0\" (UID: \"c9913e77-804b-402a-9f2b-dd14c46c1cac\") " pod="openstack/ceilometer-0" Jan 26 11:39:53 crc kubenswrapper[4867]: I0126 11:39:53.351303 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c9913e77-804b-402a-9f2b-dd14c46c1cac-run-httpd\") pod \"ceilometer-0\" (UID: \"c9913e77-804b-402a-9f2b-dd14c46c1cac\") " pod="openstack/ceilometer-0" Jan 26 11:39:53 crc kubenswrapper[4867]: I0126 11:39:53.355343 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" 
(UniqueName: \"kubernetes.io/secret/c9913e77-804b-402a-9f2b-dd14c46c1cac-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"c9913e77-804b-402a-9f2b-dd14c46c1cac\") " pod="openstack/ceilometer-0" Jan 26 11:39:53 crc kubenswrapper[4867]: I0126 11:39:53.355582 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/c9913e77-804b-402a-9f2b-dd14c46c1cac-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"c9913e77-804b-402a-9f2b-dd14c46c1cac\") " pod="openstack/ceilometer-0" Jan 26 11:39:53 crc kubenswrapper[4867]: I0126 11:39:53.355998 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c9913e77-804b-402a-9f2b-dd14c46c1cac-config-data\") pod \"ceilometer-0\" (UID: \"c9913e77-804b-402a-9f2b-dd14c46c1cac\") " pod="openstack/ceilometer-0" Jan 26 11:39:53 crc kubenswrapper[4867]: I0126 11:39:53.357612 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/c9913e77-804b-402a-9f2b-dd14c46c1cac-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"c9913e77-804b-402a-9f2b-dd14c46c1cac\") " pod="openstack/ceilometer-0" Jan 26 11:39:53 crc kubenswrapper[4867]: I0126 11:39:53.357627 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c9913e77-804b-402a-9f2b-dd14c46c1cac-scripts\") pod \"ceilometer-0\" (UID: \"c9913e77-804b-402a-9f2b-dd14c46c1cac\") " pod="openstack/ceilometer-0" Jan 26 11:39:53 crc kubenswrapper[4867]: I0126 11:39:53.367404 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f598h\" (UniqueName: \"kubernetes.io/projected/c9913e77-804b-402a-9f2b-dd14c46c1cac-kube-api-access-f598h\") pod \"ceilometer-0\" (UID: \"c9913e77-804b-402a-9f2b-dd14c46c1cac\") " pod="openstack/ceilometer-0" Jan 26 11:39:53 crc kubenswrapper[4867]: I0126 
11:39:53.690130 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-cell1-novncproxy-0" Jan 26 11:39:53 crc kubenswrapper[4867]: I0126 11:39:53.709666 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-cell1-novncproxy-0" Jan 26 11:39:54 crc kubenswrapper[4867]: I0126 11:39:54.125897 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 26 11:39:54 crc kubenswrapper[4867]: I0126 11:39:54.137826 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 26 11:39:54 crc kubenswrapper[4867]: I0126 11:39:54.151782 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-novncproxy-0" Jan 26 11:39:54 crc kubenswrapper[4867]: I0126 11:39:54.165029 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c9913e77-804b-402a-9f2b-dd14c46c1cac-run-httpd\") pod \"c9913e77-804b-402a-9f2b-dd14c46c1cac\" (UID: \"c9913e77-804b-402a-9f2b-dd14c46c1cac\") " Jan 26 11:39:54 crc kubenswrapper[4867]: I0126 11:39:54.165073 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/c9913e77-804b-402a-9f2b-dd14c46c1cac-sg-core-conf-yaml\") pod \"c9913e77-804b-402a-9f2b-dd14c46c1cac\" (UID: \"c9913e77-804b-402a-9f2b-dd14c46c1cac\") " Jan 26 11:39:54 crc kubenswrapper[4867]: I0126 11:39:54.165132 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c9913e77-804b-402a-9f2b-dd14c46c1cac-scripts\") pod \"c9913e77-804b-402a-9f2b-dd14c46c1cac\" (UID: \"c9913e77-804b-402a-9f2b-dd14c46c1cac\") " Jan 26 11:39:54 crc kubenswrapper[4867]: I0126 11:39:54.165166 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/c9913e77-804b-402a-9f2b-dd14c46c1cac-ceilometer-tls-certs\") pod \"c9913e77-804b-402a-9f2b-dd14c46c1cac\" (UID: \"c9913e77-804b-402a-9f2b-dd14c46c1cac\") " Jan 26 11:39:54 crc kubenswrapper[4867]: I0126 11:39:54.165184 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c9913e77-804b-402a-9f2b-dd14c46c1cac-log-httpd\") pod \"c9913e77-804b-402a-9f2b-dd14c46c1cac\" (UID: \"c9913e77-804b-402a-9f2b-dd14c46c1cac\") " Jan 26 11:39:54 crc kubenswrapper[4867]: I0126 11:39:54.165206 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f598h\" (UniqueName: \"kubernetes.io/projected/c9913e77-804b-402a-9f2b-dd14c46c1cac-kube-api-access-f598h\") pod \"c9913e77-804b-402a-9f2b-dd14c46c1cac\" (UID: \"c9913e77-804b-402a-9f2b-dd14c46c1cac\") " Jan 26 11:39:54 crc kubenswrapper[4867]: I0126 11:39:54.165261 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c9913e77-804b-402a-9f2b-dd14c46c1cac-config-data\") pod \"c9913e77-804b-402a-9f2b-dd14c46c1cac\" (UID: \"c9913e77-804b-402a-9f2b-dd14c46c1cac\") " Jan 26 11:39:54 crc kubenswrapper[4867]: I0126 11:39:54.165324 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c9913e77-804b-402a-9f2b-dd14c46c1cac-combined-ca-bundle\") pod \"c9913e77-804b-402a-9f2b-dd14c46c1cac\" (UID: \"c9913e77-804b-402a-9f2b-dd14c46c1cac\") " Jan 26 11:39:54 crc kubenswrapper[4867]: I0126 11:39:54.165460 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c9913e77-804b-402a-9f2b-dd14c46c1cac-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "c9913e77-804b-402a-9f2b-dd14c46c1cac" (UID: "c9913e77-804b-402a-9f2b-dd14c46c1cac"). 
InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 11:39:54 crc kubenswrapper[4867]: I0126 11:39:54.166585 4867 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c9913e77-804b-402a-9f2b-dd14c46c1cac-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 26 11:39:54 crc kubenswrapper[4867]: I0126 11:39:54.167009 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c9913e77-804b-402a-9f2b-dd14c46c1cac-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "c9913e77-804b-402a-9f2b-dd14c46c1cac" (UID: "c9913e77-804b-402a-9f2b-dd14c46c1cac"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 11:39:54 crc kubenswrapper[4867]: I0126 11:39:54.173628 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c9913e77-804b-402a-9f2b-dd14c46c1cac-kube-api-access-f598h" (OuterVolumeSpecName: "kube-api-access-f598h") pod "c9913e77-804b-402a-9f2b-dd14c46c1cac" (UID: "c9913e77-804b-402a-9f2b-dd14c46c1cac"). InnerVolumeSpecName "kube-api-access-f598h". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 11:39:54 crc kubenswrapper[4867]: I0126 11:39:54.174099 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c9913e77-804b-402a-9f2b-dd14c46c1cac-config-data" (OuterVolumeSpecName: "config-data") pod "c9913e77-804b-402a-9f2b-dd14c46c1cac" (UID: "c9913e77-804b-402a-9f2b-dd14c46c1cac"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 11:39:54 crc kubenswrapper[4867]: I0126 11:39:54.187630 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c9913e77-804b-402a-9f2b-dd14c46c1cac-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "c9913e77-804b-402a-9f2b-dd14c46c1cac" (UID: "c9913e77-804b-402a-9f2b-dd14c46c1cac"). InnerVolumeSpecName "ceilometer-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 11:39:54 crc kubenswrapper[4867]: I0126 11:39:54.189054 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c9913e77-804b-402a-9f2b-dd14c46c1cac-scripts" (OuterVolumeSpecName: "scripts") pod "c9913e77-804b-402a-9f2b-dd14c46c1cac" (UID: "c9913e77-804b-402a-9f2b-dd14c46c1cac"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 11:39:54 crc kubenswrapper[4867]: I0126 11:39:54.189162 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c9913e77-804b-402a-9f2b-dd14c46c1cac-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "c9913e77-804b-402a-9f2b-dd14c46c1cac" (UID: "c9913e77-804b-402a-9f2b-dd14c46c1cac"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 11:39:54 crc kubenswrapper[4867]: I0126 11:39:54.195955 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c9913e77-804b-402a-9f2b-dd14c46c1cac-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c9913e77-804b-402a-9f2b-dd14c46c1cac" (UID: "c9913e77-804b-402a-9f2b-dd14c46c1cac"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 11:39:54 crc kubenswrapper[4867]: I0126 11:39:54.267412 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f598h\" (UniqueName: \"kubernetes.io/projected/c9913e77-804b-402a-9f2b-dd14c46c1cac-kube-api-access-f598h\") on node \"crc\" DevicePath \"\"" Jan 26 11:39:54 crc kubenswrapper[4867]: I0126 11:39:54.267447 4867 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c9913e77-804b-402a-9f2b-dd14c46c1cac-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 11:39:54 crc kubenswrapper[4867]: I0126 11:39:54.267462 4867 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c9913e77-804b-402a-9f2b-dd14c46c1cac-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 11:39:54 crc kubenswrapper[4867]: I0126 11:39:54.267474 4867 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/c9913e77-804b-402a-9f2b-dd14c46c1cac-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 26 11:39:54 crc kubenswrapper[4867]: I0126 11:39:54.267484 4867 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c9913e77-804b-402a-9f2b-dd14c46c1cac-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 11:39:54 crc kubenswrapper[4867]: I0126 11:39:54.267495 4867 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/c9913e77-804b-402a-9f2b-dd14c46c1cac-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 26 11:39:54 crc kubenswrapper[4867]: I0126 11:39:54.267505 4867 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c9913e77-804b-402a-9f2b-dd14c46c1cac-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 26 11:39:54 crc kubenswrapper[4867]: I0126 11:39:54.302455 4867 
kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-cell-mapping-f7x8z"] Jan 26 11:39:54 crc kubenswrapper[4867]: I0126 11:39:54.306051 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-f7x8z" Jan 26 11:39:54 crc kubenswrapper[4867]: I0126 11:39:54.309413 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-scripts" Jan 26 11:39:54 crc kubenswrapper[4867]: I0126 11:39:54.309645 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-config-data" Jan 26 11:39:54 crc kubenswrapper[4867]: I0126 11:39:54.320037 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-f7x8z"] Jan 26 11:39:54 crc kubenswrapper[4867]: I0126 11:39:54.475713 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pnqmm\" (UniqueName: \"kubernetes.io/projected/c4a7d7f8-2972-4e37-a039-f6cfd3d0fbaa-kube-api-access-pnqmm\") pod \"nova-cell1-cell-mapping-f7x8z\" (UID: \"c4a7d7f8-2972-4e37-a039-f6cfd3d0fbaa\") " pod="openstack/nova-cell1-cell-mapping-f7x8z" Jan 26 11:39:54 crc kubenswrapper[4867]: I0126 11:39:54.475897 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c4a7d7f8-2972-4e37-a039-f6cfd3d0fbaa-scripts\") pod \"nova-cell1-cell-mapping-f7x8z\" (UID: \"c4a7d7f8-2972-4e37-a039-f6cfd3d0fbaa\") " pod="openstack/nova-cell1-cell-mapping-f7x8z" Jan 26 11:39:54 crc kubenswrapper[4867]: I0126 11:39:54.475973 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c4a7d7f8-2972-4e37-a039-f6cfd3d0fbaa-config-data\") pod \"nova-cell1-cell-mapping-f7x8z\" (UID: \"c4a7d7f8-2972-4e37-a039-f6cfd3d0fbaa\") " 
pod="openstack/nova-cell1-cell-mapping-f7x8z" Jan 26 11:39:54 crc kubenswrapper[4867]: I0126 11:39:54.476040 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c4a7d7f8-2972-4e37-a039-f6cfd3d0fbaa-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-f7x8z\" (UID: \"c4a7d7f8-2972-4e37-a039-f6cfd3d0fbaa\") " pod="openstack/nova-cell1-cell-mapping-f7x8z" Jan 26 11:39:54 crc kubenswrapper[4867]: I0126 11:39:54.576079 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="28c6ba40-9005-44eb-ba98-a34bbd586d3c" path="/var/lib/kubelet/pods/28c6ba40-9005-44eb-ba98-a34bbd586d3c/volumes" Jan 26 11:39:54 crc kubenswrapper[4867]: I0126 11:39:54.578111 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c4a7d7f8-2972-4e37-a039-f6cfd3d0fbaa-scripts\") pod \"nova-cell1-cell-mapping-f7x8z\" (UID: \"c4a7d7f8-2972-4e37-a039-f6cfd3d0fbaa\") " pod="openstack/nova-cell1-cell-mapping-f7x8z" Jan 26 11:39:54 crc kubenswrapper[4867]: I0126 11:39:54.578315 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c4a7d7f8-2972-4e37-a039-f6cfd3d0fbaa-config-data\") pod \"nova-cell1-cell-mapping-f7x8z\" (UID: \"c4a7d7f8-2972-4e37-a039-f6cfd3d0fbaa\") " pod="openstack/nova-cell1-cell-mapping-f7x8z" Jan 26 11:39:54 crc kubenswrapper[4867]: I0126 11:39:54.578451 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c4a7d7f8-2972-4e37-a039-f6cfd3d0fbaa-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-f7x8z\" (UID: \"c4a7d7f8-2972-4e37-a039-f6cfd3d0fbaa\") " pod="openstack/nova-cell1-cell-mapping-f7x8z" Jan 26 11:39:54 crc kubenswrapper[4867]: I0126 11:39:54.578591 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"kube-api-access-pnqmm\" (UniqueName: \"kubernetes.io/projected/c4a7d7f8-2972-4e37-a039-f6cfd3d0fbaa-kube-api-access-pnqmm\") pod \"nova-cell1-cell-mapping-f7x8z\" (UID: \"c4a7d7f8-2972-4e37-a039-f6cfd3d0fbaa\") " pod="openstack/nova-cell1-cell-mapping-f7x8z" Jan 26 11:39:54 crc kubenswrapper[4867]: I0126 11:39:54.583846 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c4a7d7f8-2972-4e37-a039-f6cfd3d0fbaa-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-f7x8z\" (UID: \"c4a7d7f8-2972-4e37-a039-f6cfd3d0fbaa\") " pod="openstack/nova-cell1-cell-mapping-f7x8z" Jan 26 11:39:54 crc kubenswrapper[4867]: I0126 11:39:54.583852 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c4a7d7f8-2972-4e37-a039-f6cfd3d0fbaa-scripts\") pod \"nova-cell1-cell-mapping-f7x8z\" (UID: \"c4a7d7f8-2972-4e37-a039-f6cfd3d0fbaa\") " pod="openstack/nova-cell1-cell-mapping-f7x8z" Jan 26 11:39:54 crc kubenswrapper[4867]: I0126 11:39:54.593779 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c4a7d7f8-2972-4e37-a039-f6cfd3d0fbaa-config-data\") pod \"nova-cell1-cell-mapping-f7x8z\" (UID: \"c4a7d7f8-2972-4e37-a039-f6cfd3d0fbaa\") " pod="openstack/nova-cell1-cell-mapping-f7x8z" Jan 26 11:39:54 crc kubenswrapper[4867]: I0126 11:39:54.597611 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pnqmm\" (UniqueName: \"kubernetes.io/projected/c4a7d7f8-2972-4e37-a039-f6cfd3d0fbaa-kube-api-access-pnqmm\") pod \"nova-cell1-cell-mapping-f7x8z\" (UID: \"c4a7d7f8-2972-4e37-a039-f6cfd3d0fbaa\") " pod="openstack/nova-cell1-cell-mapping-f7x8z" Jan 26 11:39:54 crc kubenswrapper[4867]: I0126 11:39:54.663311 4867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 26 11:39:54 crc kubenswrapper[4867]: I0126 11:39:54.680611 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4tdhv\" (UniqueName: \"kubernetes.io/projected/ebbccea9-6788-4c17-b9f9-776d3e41b6f5-kube-api-access-4tdhv\") pod \"ebbccea9-6788-4c17-b9f9-776d3e41b6f5\" (UID: \"ebbccea9-6788-4c17-b9f9-776d3e41b6f5\") " Jan 26 11:39:54 crc kubenswrapper[4867]: I0126 11:39:54.680749 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ebbccea9-6788-4c17-b9f9-776d3e41b6f5-config-data\") pod \"ebbccea9-6788-4c17-b9f9-776d3e41b6f5\" (UID: \"ebbccea9-6788-4c17-b9f9-776d3e41b6f5\") " Jan 26 11:39:54 crc kubenswrapper[4867]: I0126 11:39:54.680782 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ebbccea9-6788-4c17-b9f9-776d3e41b6f5-combined-ca-bundle\") pod \"ebbccea9-6788-4c17-b9f9-776d3e41b6f5\" (UID: \"ebbccea9-6788-4c17-b9f9-776d3e41b6f5\") " Jan 26 11:39:54 crc kubenswrapper[4867]: I0126 11:39:54.680836 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ebbccea9-6788-4c17-b9f9-776d3e41b6f5-logs\") pod \"ebbccea9-6788-4c17-b9f9-776d3e41b6f5\" (UID: \"ebbccea9-6788-4c17-b9f9-776d3e41b6f5\") " Jan 26 11:39:54 crc kubenswrapper[4867]: I0126 11:39:54.681719 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ebbccea9-6788-4c17-b9f9-776d3e41b6f5-logs" (OuterVolumeSpecName: "logs") pod "ebbccea9-6788-4c17-b9f9-776d3e41b6f5" (UID: "ebbccea9-6788-4c17-b9f9-776d3e41b6f5"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 11:39:54 crc kubenswrapper[4867]: I0126 11:39:54.686247 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ebbccea9-6788-4c17-b9f9-776d3e41b6f5-kube-api-access-4tdhv" (OuterVolumeSpecName: "kube-api-access-4tdhv") pod "ebbccea9-6788-4c17-b9f9-776d3e41b6f5" (UID: "ebbccea9-6788-4c17-b9f9-776d3e41b6f5"). InnerVolumeSpecName "kube-api-access-4tdhv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 11:39:54 crc kubenswrapper[4867]: I0126 11:39:54.716548 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-f7x8z" Jan 26 11:39:54 crc kubenswrapper[4867]: I0126 11:39:54.728169 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ebbccea9-6788-4c17-b9f9-776d3e41b6f5-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ebbccea9-6788-4c17-b9f9-776d3e41b6f5" (UID: "ebbccea9-6788-4c17-b9f9-776d3e41b6f5"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 11:39:54 crc kubenswrapper[4867]: I0126 11:39:54.761482 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ebbccea9-6788-4c17-b9f9-776d3e41b6f5-config-data" (OuterVolumeSpecName: "config-data") pod "ebbccea9-6788-4c17-b9f9-776d3e41b6f5" (UID: "ebbccea9-6788-4c17-b9f9-776d3e41b6f5"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 11:39:54 crc kubenswrapper[4867]: I0126 11:39:54.783024 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4tdhv\" (UniqueName: \"kubernetes.io/projected/ebbccea9-6788-4c17-b9f9-776d3e41b6f5-kube-api-access-4tdhv\") on node \"crc\" DevicePath \"\"" Jan 26 11:39:54 crc kubenswrapper[4867]: I0126 11:39:54.783058 4867 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ebbccea9-6788-4c17-b9f9-776d3e41b6f5-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 11:39:54 crc kubenswrapper[4867]: I0126 11:39:54.783067 4867 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ebbccea9-6788-4c17-b9f9-776d3e41b6f5-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 11:39:54 crc kubenswrapper[4867]: I0126 11:39:54.783077 4867 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ebbccea9-6788-4c17-b9f9-776d3e41b6f5-logs\") on node \"crc\" DevicePath \"\"" Jan 26 11:39:55 crc kubenswrapper[4867]: I0126 11:39:55.135021 4867 generic.go:334] "Generic (PLEG): container finished" podID="ebbccea9-6788-4c17-b9f9-776d3e41b6f5" containerID="6a05bee3cff38ae0acf1cf2764c7014efaf5f1a99c328e8923b64607ae6f8f3f" exitCode=0 Jan 26 11:39:55 crc kubenswrapper[4867]: I0126 11:39:55.135060 4867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 26 11:39:55 crc kubenswrapper[4867]: I0126 11:39:55.135079 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"ebbccea9-6788-4c17-b9f9-776d3e41b6f5","Type":"ContainerDied","Data":"6a05bee3cff38ae0acf1cf2764c7014efaf5f1a99c328e8923b64607ae6f8f3f"} Jan 26 11:39:55 crc kubenswrapper[4867]: I0126 11:39:55.136577 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"ebbccea9-6788-4c17-b9f9-776d3e41b6f5","Type":"ContainerDied","Data":"19b24ea3a758bfef4ee0b1374001bcfe686c8a1eabeeae923b0873c346954690"} Jan 26 11:39:55 crc kubenswrapper[4867]: I0126 11:39:55.136614 4867 scope.go:117] "RemoveContainer" containerID="6a05bee3cff38ae0acf1cf2764c7014efaf5f1a99c328e8923b64607ae6f8f3f" Jan 26 11:39:55 crc kubenswrapper[4867]: I0126 11:39:55.136644 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 26 11:39:55 crc kubenswrapper[4867]: I0126 11:39:55.166402 4867 scope.go:117] "RemoveContainer" containerID="b64f7a88cb71338ee06263c343e81ca1e1375e968dd3a89921436735fea6c32f" Jan 26 11:39:55 crc kubenswrapper[4867]: I0126 11:39:55.214508 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 26 11:39:55 crc kubenswrapper[4867]: I0126 11:39:55.215375 4867 scope.go:117] "RemoveContainer" containerID="6a05bee3cff38ae0acf1cf2764c7014efaf5f1a99c328e8923b64607ae6f8f3f" Jan 26 11:39:55 crc kubenswrapper[4867]: E0126 11:39:55.215751 4867 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6a05bee3cff38ae0acf1cf2764c7014efaf5f1a99c328e8923b64607ae6f8f3f\": container with ID starting with 6a05bee3cff38ae0acf1cf2764c7014efaf5f1a99c328e8923b64607ae6f8f3f not found: ID does not exist" containerID="6a05bee3cff38ae0acf1cf2764c7014efaf5f1a99c328e8923b64607ae6f8f3f" Jan 26 11:39:55 crc kubenswrapper[4867]: 
I0126 11:39:55.215781 4867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6a05bee3cff38ae0acf1cf2764c7014efaf5f1a99c328e8923b64607ae6f8f3f"} err="failed to get container status \"6a05bee3cff38ae0acf1cf2764c7014efaf5f1a99c328e8923b64607ae6f8f3f\": rpc error: code = NotFound desc = could not find container \"6a05bee3cff38ae0acf1cf2764c7014efaf5f1a99c328e8923b64607ae6f8f3f\": container with ID starting with 6a05bee3cff38ae0acf1cf2764c7014efaf5f1a99c328e8923b64607ae6f8f3f not found: ID does not exist" Jan 26 11:39:55 crc kubenswrapper[4867]: I0126 11:39:55.215801 4867 scope.go:117] "RemoveContainer" containerID="b64f7a88cb71338ee06263c343e81ca1e1375e968dd3a89921436735fea6c32f" Jan 26 11:39:55 crc kubenswrapper[4867]: E0126 11:39:55.217017 4867 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b64f7a88cb71338ee06263c343e81ca1e1375e968dd3a89921436735fea6c32f\": container with ID starting with b64f7a88cb71338ee06263c343e81ca1e1375e968dd3a89921436735fea6c32f not found: ID does not exist" containerID="b64f7a88cb71338ee06263c343e81ca1e1375e968dd3a89921436735fea6c32f" Jan 26 11:39:55 crc kubenswrapper[4867]: I0126 11:39:55.217036 4867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b64f7a88cb71338ee06263c343e81ca1e1375e968dd3a89921436735fea6c32f"} err="failed to get container status \"b64f7a88cb71338ee06263c343e81ca1e1375e968dd3a89921436735fea6c32f\": rpc error: code = NotFound desc = could not find container \"b64f7a88cb71338ee06263c343e81ca1e1375e968dd3a89921436735fea6c32f\": container with ID starting with b64f7a88cb71338ee06263c343e81ca1e1375e968dd3a89921436735fea6c32f not found: ID does not exist" Jan 26 11:39:55 crc kubenswrapper[4867]: W0126 11:39:55.217561 4867 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc4a7d7f8_2972_4e37_a039_f6cfd3d0fbaa.slice/crio-50d53f26c65801f149a6352cf1ed2f7d25ec7839727c2f0f3555725c4806fa28 WatchSource:0}: Error finding container 50d53f26c65801f149a6352cf1ed2f7d25ec7839727c2f0f3555725c4806fa28: Status 404 returned error can't find the container with id 50d53f26c65801f149a6352cf1ed2f7d25ec7839727c2f0f3555725c4806fa28 Jan 26 11:39:55 crc kubenswrapper[4867]: I0126 11:39:55.258419 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-f7x8z"] Jan 26 11:39:55 crc kubenswrapper[4867]: I0126 11:39:55.270292 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 26 11:39:55 crc kubenswrapper[4867]: I0126 11:39:55.281266 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 26 11:39:55 crc kubenswrapper[4867]: I0126 11:39:55.292678 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Jan 26 11:39:55 crc kubenswrapper[4867]: I0126 11:39:55.304259 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 26 11:39:55 crc kubenswrapper[4867]: E0126 11:39:55.304654 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ebbccea9-6788-4c17-b9f9-776d3e41b6f5" containerName="nova-api-log" Jan 26 11:39:55 crc kubenswrapper[4867]: I0126 11:39:55.304671 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="ebbccea9-6788-4c17-b9f9-776d3e41b6f5" containerName="nova-api-log" Jan 26 11:39:55 crc kubenswrapper[4867]: E0126 11:39:55.304708 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ebbccea9-6788-4c17-b9f9-776d3e41b6f5" containerName="nova-api-api" Jan 26 11:39:55 crc kubenswrapper[4867]: I0126 11:39:55.304714 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="ebbccea9-6788-4c17-b9f9-776d3e41b6f5" containerName="nova-api-api" Jan 26 11:39:55 crc kubenswrapper[4867]: I0126 
11:39:55.304887 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="ebbccea9-6788-4c17-b9f9-776d3e41b6f5" containerName="nova-api-api" Jan 26 11:39:55 crc kubenswrapper[4867]: I0126 11:39:55.304906 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="ebbccea9-6788-4c17-b9f9-776d3e41b6f5" containerName="nova-api-log" Jan 26 11:39:55 crc kubenswrapper[4867]: I0126 11:39:55.306549 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 26 11:39:55 crc kubenswrapper[4867]: I0126 11:39:55.308632 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 26 11:39:55 crc kubenswrapper[4867]: I0126 11:39:55.308782 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Jan 26 11:39:55 crc kubenswrapper[4867]: I0126 11:39:55.309008 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 26 11:39:55 crc kubenswrapper[4867]: I0126 11:39:55.314194 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Jan 26 11:39:55 crc kubenswrapper[4867]: I0126 11:39:55.316068 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 26 11:39:55 crc kubenswrapper[4867]: I0126 11:39:55.318911 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Jan 26 11:39:55 crc kubenswrapper[4867]: I0126 11:39:55.318933 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Jan 26 11:39:55 crc kubenswrapper[4867]: I0126 11:39:55.318976 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Jan 26 11:39:55 crc kubenswrapper[4867]: I0126 11:39:55.326712 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 26 11:39:55 crc kubenswrapper[4867]: I0126 11:39:55.338064 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 26 11:39:55 crc kubenswrapper[4867]: I0126 11:39:55.396791 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e086a220-6ef2-4a71-8639-f75783c634e6-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"e086a220-6ef2-4a71-8639-f75783c634e6\") " pod="openstack/ceilometer-0" Jan 26 11:39:55 crc kubenswrapper[4867]: I0126 11:39:55.396824 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e086a220-6ef2-4a71-8639-f75783c634e6-run-httpd\") pod \"ceilometer-0\" (UID: \"e086a220-6ef2-4a71-8639-f75783c634e6\") " pod="openstack/ceilometer-0" Jan 26 11:39:55 crc kubenswrapper[4867]: I0126 11:39:55.396917 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/e086a220-6ef2-4a71-8639-f75783c634e6-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"e086a220-6ef2-4a71-8639-f75783c634e6\") " pod="openstack/ceilometer-0" Jan 26 11:39:55 crc 
kubenswrapper[4867]: I0126 11:39:55.397002 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/fe61ad3e-159e-42e1-87cb-a548d9de9b27-internal-tls-certs\") pod \"nova-api-0\" (UID: \"fe61ad3e-159e-42e1-87cb-a548d9de9b27\") " pod="openstack/nova-api-0" Jan 26 11:39:55 crc kubenswrapper[4867]: I0126 11:39:55.397102 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qx8gw\" (UniqueName: \"kubernetes.io/projected/e086a220-6ef2-4a71-8639-f75783c634e6-kube-api-access-qx8gw\") pod \"ceilometer-0\" (UID: \"e086a220-6ef2-4a71-8639-f75783c634e6\") " pod="openstack/ceilometer-0" Jan 26 11:39:55 crc kubenswrapper[4867]: I0126 11:39:55.397155 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fe61ad3e-159e-42e1-87cb-a548d9de9b27-logs\") pod \"nova-api-0\" (UID: \"fe61ad3e-159e-42e1-87cb-a548d9de9b27\") " pod="openstack/nova-api-0" Jan 26 11:39:55 crc kubenswrapper[4867]: I0126 11:39:55.397233 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jxkx5\" (UniqueName: \"kubernetes.io/projected/fe61ad3e-159e-42e1-87cb-a548d9de9b27-kube-api-access-jxkx5\") pod \"nova-api-0\" (UID: \"fe61ad3e-159e-42e1-87cb-a548d9de9b27\") " pod="openstack/nova-api-0" Jan 26 11:39:55 crc kubenswrapper[4867]: I0126 11:39:55.397279 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e086a220-6ef2-4a71-8639-f75783c634e6-config-data\") pod \"ceilometer-0\" (UID: \"e086a220-6ef2-4a71-8639-f75783c634e6\") " pod="openstack/ceilometer-0" Jan 26 11:39:55 crc kubenswrapper[4867]: I0126 11:39:55.397336 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e086a220-6ef2-4a71-8639-f75783c634e6-scripts\") pod \"ceilometer-0\" (UID: \"e086a220-6ef2-4a71-8639-f75783c634e6\") " pod="openstack/ceilometer-0" Jan 26 11:39:55 crc kubenswrapper[4867]: I0126 11:39:55.397425 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fe61ad3e-159e-42e1-87cb-a548d9de9b27-config-data\") pod \"nova-api-0\" (UID: \"fe61ad3e-159e-42e1-87cb-a548d9de9b27\") " pod="openstack/nova-api-0" Jan 26 11:39:55 crc kubenswrapper[4867]: I0126 11:39:55.397455 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/e086a220-6ef2-4a71-8639-f75783c634e6-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"e086a220-6ef2-4a71-8639-f75783c634e6\") " pod="openstack/ceilometer-0" Jan 26 11:39:55 crc kubenswrapper[4867]: I0126 11:39:55.397481 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/fe61ad3e-159e-42e1-87cb-a548d9de9b27-public-tls-certs\") pod \"nova-api-0\" (UID: \"fe61ad3e-159e-42e1-87cb-a548d9de9b27\") " pod="openstack/nova-api-0" Jan 26 11:39:55 crc kubenswrapper[4867]: I0126 11:39:55.397553 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fe61ad3e-159e-42e1-87cb-a548d9de9b27-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"fe61ad3e-159e-42e1-87cb-a548d9de9b27\") " pod="openstack/nova-api-0" Jan 26 11:39:55 crc kubenswrapper[4867]: I0126 11:39:55.397578 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e086a220-6ef2-4a71-8639-f75783c634e6-log-httpd\") pod \"ceilometer-0\" (UID: 
\"e086a220-6ef2-4a71-8639-f75783c634e6\") " pod="openstack/ceilometer-0" Jan 26 11:39:55 crc kubenswrapper[4867]: I0126 11:39:55.498446 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e086a220-6ef2-4a71-8639-f75783c634e6-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"e086a220-6ef2-4a71-8639-f75783c634e6\") " pod="openstack/ceilometer-0" Jan 26 11:39:55 crc kubenswrapper[4867]: I0126 11:39:55.498491 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e086a220-6ef2-4a71-8639-f75783c634e6-run-httpd\") pod \"ceilometer-0\" (UID: \"e086a220-6ef2-4a71-8639-f75783c634e6\") " pod="openstack/ceilometer-0" Jan 26 11:39:55 crc kubenswrapper[4867]: I0126 11:39:55.498532 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/e086a220-6ef2-4a71-8639-f75783c634e6-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"e086a220-6ef2-4a71-8639-f75783c634e6\") " pod="openstack/ceilometer-0" Jan 26 11:39:55 crc kubenswrapper[4867]: I0126 11:39:55.498576 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/fe61ad3e-159e-42e1-87cb-a548d9de9b27-internal-tls-certs\") pod \"nova-api-0\" (UID: \"fe61ad3e-159e-42e1-87cb-a548d9de9b27\") " pod="openstack/nova-api-0" Jan 26 11:39:55 crc kubenswrapper[4867]: I0126 11:39:55.498622 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qx8gw\" (UniqueName: \"kubernetes.io/projected/e086a220-6ef2-4a71-8639-f75783c634e6-kube-api-access-qx8gw\") pod \"ceilometer-0\" (UID: \"e086a220-6ef2-4a71-8639-f75783c634e6\") " pod="openstack/ceilometer-0" Jan 26 11:39:55 crc kubenswrapper[4867]: I0126 11:39:55.498654 4867 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fe61ad3e-159e-42e1-87cb-a548d9de9b27-logs\") pod \"nova-api-0\" (UID: \"fe61ad3e-159e-42e1-87cb-a548d9de9b27\") " pod="openstack/nova-api-0" Jan 26 11:39:55 crc kubenswrapper[4867]: I0126 11:39:55.498686 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jxkx5\" (UniqueName: \"kubernetes.io/projected/fe61ad3e-159e-42e1-87cb-a548d9de9b27-kube-api-access-jxkx5\") pod \"nova-api-0\" (UID: \"fe61ad3e-159e-42e1-87cb-a548d9de9b27\") " pod="openstack/nova-api-0" Jan 26 11:39:55 crc kubenswrapper[4867]: I0126 11:39:55.498713 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e086a220-6ef2-4a71-8639-f75783c634e6-config-data\") pod \"ceilometer-0\" (UID: \"e086a220-6ef2-4a71-8639-f75783c634e6\") " pod="openstack/ceilometer-0" Jan 26 11:39:55 crc kubenswrapper[4867]: I0126 11:39:55.498759 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e086a220-6ef2-4a71-8639-f75783c634e6-scripts\") pod \"ceilometer-0\" (UID: \"e086a220-6ef2-4a71-8639-f75783c634e6\") " pod="openstack/ceilometer-0" Jan 26 11:39:55 crc kubenswrapper[4867]: I0126 11:39:55.498796 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fe61ad3e-159e-42e1-87cb-a548d9de9b27-config-data\") pod \"nova-api-0\" (UID: \"fe61ad3e-159e-42e1-87cb-a548d9de9b27\") " pod="openstack/nova-api-0" Jan 26 11:39:55 crc kubenswrapper[4867]: I0126 11:39:55.498820 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/e086a220-6ef2-4a71-8639-f75783c634e6-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"e086a220-6ef2-4a71-8639-f75783c634e6\") " pod="openstack/ceilometer-0" 
Jan 26 11:39:55 crc kubenswrapper[4867]: I0126 11:39:55.498845 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/fe61ad3e-159e-42e1-87cb-a548d9de9b27-public-tls-certs\") pod \"nova-api-0\" (UID: \"fe61ad3e-159e-42e1-87cb-a548d9de9b27\") " pod="openstack/nova-api-0" Jan 26 11:39:55 crc kubenswrapper[4867]: I0126 11:39:55.498889 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fe61ad3e-159e-42e1-87cb-a548d9de9b27-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"fe61ad3e-159e-42e1-87cb-a548d9de9b27\") " pod="openstack/nova-api-0" Jan 26 11:39:55 crc kubenswrapper[4867]: I0126 11:39:55.498912 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e086a220-6ef2-4a71-8639-f75783c634e6-log-httpd\") pod \"ceilometer-0\" (UID: \"e086a220-6ef2-4a71-8639-f75783c634e6\") " pod="openstack/ceilometer-0" Jan 26 11:39:55 crc kubenswrapper[4867]: I0126 11:39:55.498986 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e086a220-6ef2-4a71-8639-f75783c634e6-run-httpd\") pod \"ceilometer-0\" (UID: \"e086a220-6ef2-4a71-8639-f75783c634e6\") " pod="openstack/ceilometer-0" Jan 26 11:39:55 crc kubenswrapper[4867]: I0126 11:39:55.499324 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e086a220-6ef2-4a71-8639-f75783c634e6-log-httpd\") pod \"ceilometer-0\" (UID: \"e086a220-6ef2-4a71-8639-f75783c634e6\") " pod="openstack/ceilometer-0" Jan 26 11:39:55 crc kubenswrapper[4867]: I0126 11:39:55.499634 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fe61ad3e-159e-42e1-87cb-a548d9de9b27-logs\") pod \"nova-api-0\" (UID: 
\"fe61ad3e-159e-42e1-87cb-a548d9de9b27\") " pod="openstack/nova-api-0" Jan 26 11:39:55 crc kubenswrapper[4867]: I0126 11:39:55.503763 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/e086a220-6ef2-4a71-8639-f75783c634e6-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"e086a220-6ef2-4a71-8639-f75783c634e6\") " pod="openstack/ceilometer-0" Jan 26 11:39:55 crc kubenswrapper[4867]: I0126 11:39:55.504146 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/fe61ad3e-159e-42e1-87cb-a548d9de9b27-internal-tls-certs\") pod \"nova-api-0\" (UID: \"fe61ad3e-159e-42e1-87cb-a548d9de9b27\") " pod="openstack/nova-api-0" Jan 26 11:39:55 crc kubenswrapper[4867]: I0126 11:39:55.504644 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e086a220-6ef2-4a71-8639-f75783c634e6-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"e086a220-6ef2-4a71-8639-f75783c634e6\") " pod="openstack/ceilometer-0" Jan 26 11:39:55 crc kubenswrapper[4867]: I0126 11:39:55.505824 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fe61ad3e-159e-42e1-87cb-a548d9de9b27-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"fe61ad3e-159e-42e1-87cb-a548d9de9b27\") " pod="openstack/nova-api-0" Jan 26 11:39:55 crc kubenswrapper[4867]: I0126 11:39:55.507544 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fe61ad3e-159e-42e1-87cb-a548d9de9b27-config-data\") pod \"nova-api-0\" (UID: \"fe61ad3e-159e-42e1-87cb-a548d9de9b27\") " pod="openstack/nova-api-0" Jan 26 11:39:55 crc kubenswrapper[4867]: I0126 11:39:55.507848 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/e086a220-6ef2-4a71-8639-f75783c634e6-config-data\") pod \"ceilometer-0\" (UID: \"e086a220-6ef2-4a71-8639-f75783c634e6\") " pod="openstack/ceilometer-0" Jan 26 11:39:55 crc kubenswrapper[4867]: I0126 11:39:55.508854 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/e086a220-6ef2-4a71-8639-f75783c634e6-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"e086a220-6ef2-4a71-8639-f75783c634e6\") " pod="openstack/ceilometer-0" Jan 26 11:39:55 crc kubenswrapper[4867]: I0126 11:39:55.514850 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/fe61ad3e-159e-42e1-87cb-a548d9de9b27-public-tls-certs\") pod \"nova-api-0\" (UID: \"fe61ad3e-159e-42e1-87cb-a548d9de9b27\") " pod="openstack/nova-api-0" Jan 26 11:39:55 crc kubenswrapper[4867]: I0126 11:39:55.517687 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e086a220-6ef2-4a71-8639-f75783c634e6-scripts\") pod \"ceilometer-0\" (UID: \"e086a220-6ef2-4a71-8639-f75783c634e6\") " pod="openstack/ceilometer-0" Jan 26 11:39:55 crc kubenswrapper[4867]: I0126 11:39:55.519911 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jxkx5\" (UniqueName: \"kubernetes.io/projected/fe61ad3e-159e-42e1-87cb-a548d9de9b27-kube-api-access-jxkx5\") pod \"nova-api-0\" (UID: \"fe61ad3e-159e-42e1-87cb-a548d9de9b27\") " pod="openstack/nova-api-0" Jan 26 11:39:55 crc kubenswrapper[4867]: I0126 11:39:55.520364 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qx8gw\" (UniqueName: \"kubernetes.io/projected/e086a220-6ef2-4a71-8639-f75783c634e6-kube-api-access-qx8gw\") pod \"ceilometer-0\" (UID: \"e086a220-6ef2-4a71-8639-f75783c634e6\") " pod="openstack/ceilometer-0" Jan 26 11:39:55 crc kubenswrapper[4867]: I0126 11:39:55.722013 
4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 26 11:39:55 crc kubenswrapper[4867]: I0126 11:39:55.726603 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 26 11:39:56 crc kubenswrapper[4867]: I0126 11:39:56.147328 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-f7x8z" event={"ID":"c4a7d7f8-2972-4e37-a039-f6cfd3d0fbaa","Type":"ContainerStarted","Data":"ce1da23e7255f27d9a6dcc9e3c5f233195dd856b1293f845768b80b723fafe8c"} Jan 26 11:39:56 crc kubenswrapper[4867]: I0126 11:39:56.147645 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-f7x8z" event={"ID":"c4a7d7f8-2972-4e37-a039-f6cfd3d0fbaa","Type":"ContainerStarted","Data":"50d53f26c65801f149a6352cf1ed2f7d25ec7839727c2f0f3555725c4806fa28"} Jan 26 11:39:56 crc kubenswrapper[4867]: I0126 11:39:56.167336 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-cell-mapping-f7x8z" podStartSLOduration=2.167318649 podStartE2EDuration="2.167318649s" podCreationTimestamp="2026-01-26 11:39:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 11:39:56.163393022 +0000 UTC m=+1345.861967932" watchObservedRunningTime="2026-01-26 11:39:56.167318649 +0000 UTC m=+1345.865893559" Jan 26 11:39:56 crc kubenswrapper[4867]: I0126 11:39:56.287575 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 26 11:39:56 crc kubenswrapper[4867]: I0126 11:39:56.295681 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 26 11:39:56 crc kubenswrapper[4867]: I0126 11:39:56.579920 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c9913e77-804b-402a-9f2b-dd14c46c1cac" 
path="/var/lib/kubelet/pods/c9913e77-804b-402a-9f2b-dd14c46c1cac/volumes" Jan 26 11:39:56 crc kubenswrapper[4867]: I0126 11:39:56.580712 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ebbccea9-6788-4c17-b9f9-776d3e41b6f5" path="/var/lib/kubelet/pods/ebbccea9-6788-4c17-b9f9-776d3e41b6f5/volumes" Jan 26 11:39:57 crc kubenswrapper[4867]: I0126 11:39:57.167917 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"fe61ad3e-159e-42e1-87cb-a548d9de9b27","Type":"ContainerStarted","Data":"acb383a2ed707fa3b6a4f45ac94faeff129df90c84908224b12173bbcf25b4cb"} Jan 26 11:39:57 crc kubenswrapper[4867]: I0126 11:39:57.168202 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"fe61ad3e-159e-42e1-87cb-a548d9de9b27","Type":"ContainerStarted","Data":"1798544d5e103e595d2049bc72328e1a1001eee9d9c1df72ec31b922460597e2"} Jan 26 11:39:57 crc kubenswrapper[4867]: I0126 11:39:57.168265 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"fe61ad3e-159e-42e1-87cb-a548d9de9b27","Type":"ContainerStarted","Data":"1f09b212dba3b495e54b81a49349902137210b9123bae4ea4019f8cbf7866394"} Jan 26 11:39:57 crc kubenswrapper[4867]: I0126 11:39:57.172363 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e086a220-6ef2-4a71-8639-f75783c634e6","Type":"ContainerStarted","Data":"c2e906bbea23f17dfc58d36b782acf1e777ee9f9d0a3eef2f1554fffaf8a5f3a"} Jan 26 11:39:57 crc kubenswrapper[4867]: I0126 11:39:57.172405 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e086a220-6ef2-4a71-8639-f75783c634e6","Type":"ContainerStarted","Data":"37d76048c4e01e03c45be91caf1136431f7e6f2abd0252a04a6d520faf3633c6"} Jan 26 11:39:57 crc kubenswrapper[4867]: I0126 11:39:57.200735 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" 
podStartSLOduration=2.20071543 podStartE2EDuration="2.20071543s" podCreationTimestamp="2026-01-26 11:39:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 11:39:57.19089236 +0000 UTC m=+1346.889467270" watchObservedRunningTime="2026-01-26 11:39:57.20071543 +0000 UTC m=+1346.899290340" Jan 26 11:39:58 crc kubenswrapper[4867]: I0126 11:39:58.195905 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e086a220-6ef2-4a71-8639-f75783c634e6","Type":"ContainerStarted","Data":"77d6c5f7d1af52790156ce594f29cc25cf59173c713e08b9175f32e65edf5cf8"} Jan 26 11:39:58 crc kubenswrapper[4867]: I0126 11:39:58.543410 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-cd5cbd7b9-9fj8m" Jan 26 11:39:58 crc kubenswrapper[4867]: I0126 11:39:58.646503 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-bccf8f775-9s9w5"] Jan 26 11:39:58 crc kubenswrapper[4867]: I0126 11:39:58.646763 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-bccf8f775-9s9w5" podUID="c33aec01-bab9-4160-9f96-a290d8c67e54" containerName="dnsmasq-dns" containerID="cri-o://5950f4325f46b7dfe43a0cf86c40c65704caaafa5dd9b340333c80fa44b9bbc1" gracePeriod=10 Jan 26 11:39:59 crc kubenswrapper[4867]: I0126 11:39:59.205379 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e086a220-6ef2-4a71-8639-f75783c634e6","Type":"ContainerStarted","Data":"e1df0554d487d52ff35d25f236716a5ed1ad2acdbb29498d8e8492900a63bccc"} Jan 26 11:39:59 crc kubenswrapper[4867]: I0126 11:39:59.207118 4867 generic.go:334] "Generic (PLEG): container finished" podID="c33aec01-bab9-4160-9f96-a290d8c67e54" containerID="5950f4325f46b7dfe43a0cf86c40c65704caaafa5dd9b340333c80fa44b9bbc1" exitCode=0 Jan 26 11:39:59 crc kubenswrapper[4867]: I0126 
11:39:59.207166 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-bccf8f775-9s9w5" event={"ID":"c33aec01-bab9-4160-9f96-a290d8c67e54","Type":"ContainerDied","Data":"5950f4325f46b7dfe43a0cf86c40c65704caaafa5dd9b340333c80fa44b9bbc1"} Jan 26 11:39:59 crc kubenswrapper[4867]: I0126 11:39:59.207200 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-bccf8f775-9s9w5" event={"ID":"c33aec01-bab9-4160-9f96-a290d8c67e54","Type":"ContainerDied","Data":"5af51dd48506048e98e4d6b0d223b7e568f6cab053d15dedfe693dea291f9659"} Jan 26 11:39:59 crc kubenswrapper[4867]: I0126 11:39:59.207238 4867 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5af51dd48506048e98e4d6b0d223b7e568f6cab053d15dedfe693dea291f9659" Jan 26 11:39:59 crc kubenswrapper[4867]: I0126 11:39:59.221187 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-bccf8f775-9s9w5" Jan 26 11:39:59 crc kubenswrapper[4867]: I0126 11:39:59.383130 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c33aec01-bab9-4160-9f96-a290d8c67e54-config\") pod \"c33aec01-bab9-4160-9f96-a290d8c67e54\" (UID: \"c33aec01-bab9-4160-9f96-a290d8c67e54\") " Jan 26 11:39:59 crc kubenswrapper[4867]: I0126 11:39:59.383186 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c33aec01-bab9-4160-9f96-a290d8c67e54-ovsdbserver-nb\") pod \"c33aec01-bab9-4160-9f96-a290d8c67e54\" (UID: \"c33aec01-bab9-4160-9f96-a290d8c67e54\") " Jan 26 11:39:59 crc kubenswrapper[4867]: I0126 11:39:59.383330 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c33aec01-bab9-4160-9f96-a290d8c67e54-dns-svc\") pod \"c33aec01-bab9-4160-9f96-a290d8c67e54\" (UID: 
\"c33aec01-bab9-4160-9f96-a290d8c67e54\") " Jan 26 11:39:59 crc kubenswrapper[4867]: I0126 11:39:59.383364 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4n8f9\" (UniqueName: \"kubernetes.io/projected/c33aec01-bab9-4160-9f96-a290d8c67e54-kube-api-access-4n8f9\") pod \"c33aec01-bab9-4160-9f96-a290d8c67e54\" (UID: \"c33aec01-bab9-4160-9f96-a290d8c67e54\") " Jan 26 11:39:59 crc kubenswrapper[4867]: I0126 11:39:59.383383 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c33aec01-bab9-4160-9f96-a290d8c67e54-ovsdbserver-sb\") pod \"c33aec01-bab9-4160-9f96-a290d8c67e54\" (UID: \"c33aec01-bab9-4160-9f96-a290d8c67e54\") " Jan 26 11:39:59 crc kubenswrapper[4867]: I0126 11:39:59.383568 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/c33aec01-bab9-4160-9f96-a290d8c67e54-dns-swift-storage-0\") pod \"c33aec01-bab9-4160-9f96-a290d8c67e54\" (UID: \"c33aec01-bab9-4160-9f96-a290d8c67e54\") " Jan 26 11:39:59 crc kubenswrapper[4867]: I0126 11:39:59.388477 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c33aec01-bab9-4160-9f96-a290d8c67e54-kube-api-access-4n8f9" (OuterVolumeSpecName: "kube-api-access-4n8f9") pod "c33aec01-bab9-4160-9f96-a290d8c67e54" (UID: "c33aec01-bab9-4160-9f96-a290d8c67e54"). InnerVolumeSpecName "kube-api-access-4n8f9". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 11:39:59 crc kubenswrapper[4867]: I0126 11:39:59.444568 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c33aec01-bab9-4160-9f96-a290d8c67e54-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "c33aec01-bab9-4160-9f96-a290d8c67e54" (UID: "c33aec01-bab9-4160-9f96-a290d8c67e54"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 11:39:59 crc kubenswrapper[4867]: I0126 11:39:59.449846 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c33aec01-bab9-4160-9f96-a290d8c67e54-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "c33aec01-bab9-4160-9f96-a290d8c67e54" (UID: "c33aec01-bab9-4160-9f96-a290d8c67e54"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 11:39:59 crc kubenswrapper[4867]: I0126 11:39:59.451088 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c33aec01-bab9-4160-9f96-a290d8c67e54-config" (OuterVolumeSpecName: "config") pod "c33aec01-bab9-4160-9f96-a290d8c67e54" (UID: "c33aec01-bab9-4160-9f96-a290d8c67e54"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 11:39:59 crc kubenswrapper[4867]: I0126 11:39:59.455436 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c33aec01-bab9-4160-9f96-a290d8c67e54-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "c33aec01-bab9-4160-9f96-a290d8c67e54" (UID: "c33aec01-bab9-4160-9f96-a290d8c67e54"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 11:39:59 crc kubenswrapper[4867]: I0126 11:39:59.458195 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c33aec01-bab9-4160-9f96-a290d8c67e54-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "c33aec01-bab9-4160-9f96-a290d8c67e54" (UID: "c33aec01-bab9-4160-9f96-a290d8c67e54"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 11:39:59 crc kubenswrapper[4867]: I0126 11:39:59.485207 4867 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/c33aec01-bab9-4160-9f96-a290d8c67e54-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 26 11:39:59 crc kubenswrapper[4867]: I0126 11:39:59.485295 4867 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c33aec01-bab9-4160-9f96-a290d8c67e54-config\") on node \"crc\" DevicePath \"\"" Jan 26 11:39:59 crc kubenswrapper[4867]: I0126 11:39:59.485304 4867 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c33aec01-bab9-4160-9f96-a290d8c67e54-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 26 11:39:59 crc kubenswrapper[4867]: I0126 11:39:59.485336 4867 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c33aec01-bab9-4160-9f96-a290d8c67e54-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 26 11:39:59 crc kubenswrapper[4867]: I0126 11:39:59.485344 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4n8f9\" (UniqueName: \"kubernetes.io/projected/c33aec01-bab9-4160-9f96-a290d8c67e54-kube-api-access-4n8f9\") on node \"crc\" DevicePath \"\"" Jan 26 11:39:59 crc kubenswrapper[4867]: I0126 11:39:59.485353 4867 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c33aec01-bab9-4160-9f96-a290d8c67e54-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 26 11:40:00 crc kubenswrapper[4867]: I0126 11:40:00.223042 4867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-bccf8f775-9s9w5" Jan 26 11:40:00 crc kubenswrapper[4867]: I0126 11:40:00.272357 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-bccf8f775-9s9w5"] Jan 26 11:40:00 crc kubenswrapper[4867]: I0126 11:40:00.283621 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-bccf8f775-9s9w5"] Jan 26 11:40:00 crc kubenswrapper[4867]: E0126 11:40:00.289946 4867 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc33aec01_bab9_4160_9f96_a290d8c67e54.slice\": RecentStats: unable to find data in memory cache]" Jan 26 11:40:00 crc kubenswrapper[4867]: I0126 11:40:00.593160 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c33aec01-bab9-4160-9f96-a290d8c67e54" path="/var/lib/kubelet/pods/c33aec01-bab9-4160-9f96-a290d8c67e54/volumes" Jan 26 11:40:02 crc kubenswrapper[4867]: I0126 11:40:02.243273 4867 generic.go:334] "Generic (PLEG): container finished" podID="c4a7d7f8-2972-4e37-a039-f6cfd3d0fbaa" containerID="ce1da23e7255f27d9a6dcc9e3c5f233195dd856b1293f845768b80b723fafe8c" exitCode=0 Jan 26 11:40:02 crc kubenswrapper[4867]: I0126 11:40:02.243382 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-f7x8z" event={"ID":"c4a7d7f8-2972-4e37-a039-f6cfd3d0fbaa","Type":"ContainerDied","Data":"ce1da23e7255f27d9a6dcc9e3c5f233195dd856b1293f845768b80b723fafe8c"} Jan 26 11:40:03 crc kubenswrapper[4867]: I0126 11:40:03.255321 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e086a220-6ef2-4a71-8639-f75783c634e6","Type":"ContainerStarted","Data":"84b17c2f4ac4303f18387b9e468648f7d534a47c00a2116d00e314890a9e7e24"} Jan 26 11:40:03 crc kubenswrapper[4867]: I0126 11:40:03.287330 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack/ceilometer-0" podStartSLOduration=2.588731995 podStartE2EDuration="8.287304794s" podCreationTimestamp="2026-01-26 11:39:55 +0000 UTC" firstStartedPulling="2026-01-26 11:39:56.299505896 +0000 UTC m=+1345.998080806" lastFinishedPulling="2026-01-26 11:40:01.998078695 +0000 UTC m=+1351.696653605" observedRunningTime="2026-01-26 11:40:03.280606169 +0000 UTC m=+1352.979181079" watchObservedRunningTime="2026-01-26 11:40:03.287304794 +0000 UTC m=+1352.985879704" Jan 26 11:40:03 crc kubenswrapper[4867]: I0126 11:40:03.619145 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-f7x8z" Jan 26 11:40:03 crc kubenswrapper[4867]: I0126 11:40:03.778014 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c4a7d7f8-2972-4e37-a039-f6cfd3d0fbaa-config-data\") pod \"c4a7d7f8-2972-4e37-a039-f6cfd3d0fbaa\" (UID: \"c4a7d7f8-2972-4e37-a039-f6cfd3d0fbaa\") " Jan 26 11:40:03 crc kubenswrapper[4867]: I0126 11:40:03.778178 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pnqmm\" (UniqueName: \"kubernetes.io/projected/c4a7d7f8-2972-4e37-a039-f6cfd3d0fbaa-kube-api-access-pnqmm\") pod \"c4a7d7f8-2972-4e37-a039-f6cfd3d0fbaa\" (UID: \"c4a7d7f8-2972-4e37-a039-f6cfd3d0fbaa\") " Jan 26 11:40:03 crc kubenswrapper[4867]: I0126 11:40:03.778250 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c4a7d7f8-2972-4e37-a039-f6cfd3d0fbaa-combined-ca-bundle\") pod \"c4a7d7f8-2972-4e37-a039-f6cfd3d0fbaa\" (UID: \"c4a7d7f8-2972-4e37-a039-f6cfd3d0fbaa\") " Jan 26 11:40:03 crc kubenswrapper[4867]: I0126 11:40:03.778333 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c4a7d7f8-2972-4e37-a039-f6cfd3d0fbaa-scripts\") pod 
\"c4a7d7f8-2972-4e37-a039-f6cfd3d0fbaa\" (UID: \"c4a7d7f8-2972-4e37-a039-f6cfd3d0fbaa\") " Jan 26 11:40:03 crc kubenswrapper[4867]: I0126 11:40:03.785115 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c4a7d7f8-2972-4e37-a039-f6cfd3d0fbaa-scripts" (OuterVolumeSpecName: "scripts") pod "c4a7d7f8-2972-4e37-a039-f6cfd3d0fbaa" (UID: "c4a7d7f8-2972-4e37-a039-f6cfd3d0fbaa"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 11:40:03 crc kubenswrapper[4867]: I0126 11:40:03.795583 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c4a7d7f8-2972-4e37-a039-f6cfd3d0fbaa-kube-api-access-pnqmm" (OuterVolumeSpecName: "kube-api-access-pnqmm") pod "c4a7d7f8-2972-4e37-a039-f6cfd3d0fbaa" (UID: "c4a7d7f8-2972-4e37-a039-f6cfd3d0fbaa"). InnerVolumeSpecName "kube-api-access-pnqmm". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 11:40:03 crc kubenswrapper[4867]: I0126 11:40:03.813470 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c4a7d7f8-2972-4e37-a039-f6cfd3d0fbaa-config-data" (OuterVolumeSpecName: "config-data") pod "c4a7d7f8-2972-4e37-a039-f6cfd3d0fbaa" (UID: "c4a7d7f8-2972-4e37-a039-f6cfd3d0fbaa"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 11:40:03 crc kubenswrapper[4867]: I0126 11:40:03.833973 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c4a7d7f8-2972-4e37-a039-f6cfd3d0fbaa-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c4a7d7f8-2972-4e37-a039-f6cfd3d0fbaa" (UID: "c4a7d7f8-2972-4e37-a039-f6cfd3d0fbaa"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 11:40:03 crc kubenswrapper[4867]: I0126 11:40:03.880435 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pnqmm\" (UniqueName: \"kubernetes.io/projected/c4a7d7f8-2972-4e37-a039-f6cfd3d0fbaa-kube-api-access-pnqmm\") on node \"crc\" DevicePath \"\"" Jan 26 11:40:03 crc kubenswrapper[4867]: I0126 11:40:03.880470 4867 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c4a7d7f8-2972-4e37-a039-f6cfd3d0fbaa-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 11:40:03 crc kubenswrapper[4867]: I0126 11:40:03.880480 4867 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c4a7d7f8-2972-4e37-a039-f6cfd3d0fbaa-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 11:40:03 crc kubenswrapper[4867]: I0126 11:40:03.880488 4867 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c4a7d7f8-2972-4e37-a039-f6cfd3d0fbaa-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 11:40:04 crc kubenswrapper[4867]: I0126 11:40:04.266770 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-f7x8z" event={"ID":"c4a7d7f8-2972-4e37-a039-f6cfd3d0fbaa","Type":"ContainerDied","Data":"50d53f26c65801f149a6352cf1ed2f7d25ec7839727c2f0f3555725c4806fa28"} Jan 26 11:40:04 crc kubenswrapper[4867]: I0126 11:40:04.267063 4867 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="50d53f26c65801f149a6352cf1ed2f7d25ec7839727c2f0f3555725c4806fa28" Jan 26 11:40:04 crc kubenswrapper[4867]: I0126 11:40:04.267093 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 26 11:40:04 crc kubenswrapper[4867]: I0126 11:40:04.266805 4867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-f7x8z" Jan 26 11:40:04 crc kubenswrapper[4867]: I0126 11:40:04.456670 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 26 11:40:04 crc kubenswrapper[4867]: I0126 11:40:04.457035 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="fe61ad3e-159e-42e1-87cb-a548d9de9b27" containerName="nova-api-api" containerID="cri-o://acb383a2ed707fa3b6a4f45ac94faeff129df90c84908224b12173bbcf25b4cb" gracePeriod=30 Jan 26 11:40:04 crc kubenswrapper[4867]: I0126 11:40:04.456940 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="fe61ad3e-159e-42e1-87cb-a548d9de9b27" containerName="nova-api-log" containerID="cri-o://1798544d5e103e595d2049bc72328e1a1001eee9d9c1df72ec31b922460597e2" gracePeriod=30 Jan 26 11:40:04 crc kubenswrapper[4867]: I0126 11:40:04.482781 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Jan 26 11:40:04 crc kubenswrapper[4867]: I0126 11:40:04.482996 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="ab1d2459-a54a-4e06-a4e7-ff675b803bca" containerName="nova-scheduler-scheduler" containerID="cri-o://c3a8d188db5c5d65fc2dab522bda7fbf3015f1a39c0d63a844d0586cc2b11253" gracePeriod=30 Jan 26 11:40:04 crc kubenswrapper[4867]: I0126 11:40:04.498472 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 26 11:40:04 crc kubenswrapper[4867]: I0126 11:40:04.499140 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="5e4436b8-df58-4b0b-9713-75976c443930" containerName="nova-metadata-metadata" containerID="cri-o://e96574ea6be1a924b821331b274e4053554cf5b00b23b4254b451859b33e7fdb" gracePeriod=30 Jan 26 11:40:04 crc kubenswrapper[4867]: I0126 11:40:04.499588 4867 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="5e4436b8-df58-4b0b-9713-75976c443930" containerName="nova-metadata-log" containerID="cri-o://c6d747726bdb20530b3089ace5657ccb8ee82c93314c4b33fe037174ac02f13e" gracePeriod=30 Jan 26 11:40:05 crc kubenswrapper[4867]: I0126 11:40:05.116037 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 26 11:40:05 crc kubenswrapper[4867]: I0126 11:40:05.205760 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jxkx5\" (UniqueName: \"kubernetes.io/projected/fe61ad3e-159e-42e1-87cb-a548d9de9b27-kube-api-access-jxkx5\") pod \"fe61ad3e-159e-42e1-87cb-a548d9de9b27\" (UID: \"fe61ad3e-159e-42e1-87cb-a548d9de9b27\") " Jan 26 11:40:05 crc kubenswrapper[4867]: I0126 11:40:05.205915 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fe61ad3e-159e-42e1-87cb-a548d9de9b27-combined-ca-bundle\") pod \"fe61ad3e-159e-42e1-87cb-a548d9de9b27\" (UID: \"fe61ad3e-159e-42e1-87cb-a548d9de9b27\") " Jan 26 11:40:05 crc kubenswrapper[4867]: I0126 11:40:05.206075 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/fe61ad3e-159e-42e1-87cb-a548d9de9b27-public-tls-certs\") pod \"fe61ad3e-159e-42e1-87cb-a548d9de9b27\" (UID: \"fe61ad3e-159e-42e1-87cb-a548d9de9b27\") " Jan 26 11:40:05 crc kubenswrapper[4867]: I0126 11:40:05.206811 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fe61ad3e-159e-42e1-87cb-a548d9de9b27-logs\") pod \"fe61ad3e-159e-42e1-87cb-a548d9de9b27\" (UID: \"fe61ad3e-159e-42e1-87cb-a548d9de9b27\") " Jan 26 11:40:05 crc kubenswrapper[4867]: I0126 11:40:05.207118 4867 operation_generator.go:803] UnmountVolume.TearDown 
succeeded for volume "kubernetes.io/empty-dir/fe61ad3e-159e-42e1-87cb-a548d9de9b27-logs" (OuterVolumeSpecName: "logs") pod "fe61ad3e-159e-42e1-87cb-a548d9de9b27" (UID: "fe61ad3e-159e-42e1-87cb-a548d9de9b27"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 11:40:05 crc kubenswrapper[4867]: I0126 11:40:05.207191 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fe61ad3e-159e-42e1-87cb-a548d9de9b27-config-data\") pod \"fe61ad3e-159e-42e1-87cb-a548d9de9b27\" (UID: \"fe61ad3e-159e-42e1-87cb-a548d9de9b27\") " Jan 26 11:40:05 crc kubenswrapper[4867]: I0126 11:40:05.207534 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/fe61ad3e-159e-42e1-87cb-a548d9de9b27-internal-tls-certs\") pod \"fe61ad3e-159e-42e1-87cb-a548d9de9b27\" (UID: \"fe61ad3e-159e-42e1-87cb-a548d9de9b27\") " Jan 26 11:40:05 crc kubenswrapper[4867]: I0126 11:40:05.208346 4867 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fe61ad3e-159e-42e1-87cb-a548d9de9b27-logs\") on node \"crc\" DevicePath \"\"" Jan 26 11:40:05 crc kubenswrapper[4867]: I0126 11:40:05.242557 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fe61ad3e-159e-42e1-87cb-a548d9de9b27-kube-api-access-jxkx5" (OuterVolumeSpecName: "kube-api-access-jxkx5") pod "fe61ad3e-159e-42e1-87cb-a548d9de9b27" (UID: "fe61ad3e-159e-42e1-87cb-a548d9de9b27"). InnerVolumeSpecName "kube-api-access-jxkx5". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 11:40:05 crc kubenswrapper[4867]: I0126 11:40:05.259725 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fe61ad3e-159e-42e1-87cb-a548d9de9b27-config-data" (OuterVolumeSpecName: "config-data") pod "fe61ad3e-159e-42e1-87cb-a548d9de9b27" (UID: "fe61ad3e-159e-42e1-87cb-a548d9de9b27"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 11:40:05 crc kubenswrapper[4867]: I0126 11:40:05.289732 4867 generic.go:334] "Generic (PLEG): container finished" podID="fe61ad3e-159e-42e1-87cb-a548d9de9b27" containerID="acb383a2ed707fa3b6a4f45ac94faeff129df90c84908224b12173bbcf25b4cb" exitCode=0 Jan 26 11:40:05 crc kubenswrapper[4867]: I0126 11:40:05.289777 4867 generic.go:334] "Generic (PLEG): container finished" podID="fe61ad3e-159e-42e1-87cb-a548d9de9b27" containerID="1798544d5e103e595d2049bc72328e1a1001eee9d9c1df72ec31b922460597e2" exitCode=143 Jan 26 11:40:05 crc kubenswrapper[4867]: I0126 11:40:05.289826 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"fe61ad3e-159e-42e1-87cb-a548d9de9b27","Type":"ContainerDied","Data":"acb383a2ed707fa3b6a4f45ac94faeff129df90c84908224b12173bbcf25b4cb"} Jan 26 11:40:05 crc kubenswrapper[4867]: I0126 11:40:05.289857 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"fe61ad3e-159e-42e1-87cb-a548d9de9b27","Type":"ContainerDied","Data":"1798544d5e103e595d2049bc72328e1a1001eee9d9c1df72ec31b922460597e2"} Jan 26 11:40:05 crc kubenswrapper[4867]: I0126 11:40:05.289871 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"fe61ad3e-159e-42e1-87cb-a548d9de9b27","Type":"ContainerDied","Data":"1f09b212dba3b495e54b81a49349902137210b9123bae4ea4019f8cbf7866394"} Jan 26 11:40:05 crc kubenswrapper[4867]: I0126 11:40:05.289891 4867 scope.go:117] "RemoveContainer" 
containerID="acb383a2ed707fa3b6a4f45ac94faeff129df90c84908224b12173bbcf25b4cb" Jan 26 11:40:05 crc kubenswrapper[4867]: I0126 11:40:05.289928 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 26 11:40:05 crc kubenswrapper[4867]: I0126 11:40:05.299385 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fe61ad3e-159e-42e1-87cb-a548d9de9b27-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "fe61ad3e-159e-42e1-87cb-a548d9de9b27" (UID: "fe61ad3e-159e-42e1-87cb-a548d9de9b27"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 11:40:05 crc kubenswrapper[4867]: I0126 11:40:05.302596 4867 generic.go:334] "Generic (PLEG): container finished" podID="5e4436b8-df58-4b0b-9713-75976c443930" containerID="c6d747726bdb20530b3089ace5657ccb8ee82c93314c4b33fe037174ac02f13e" exitCode=143 Jan 26 11:40:05 crc kubenswrapper[4867]: I0126 11:40:05.303159 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"5e4436b8-df58-4b0b-9713-75976c443930","Type":"ContainerDied","Data":"c6d747726bdb20530b3089ace5657ccb8ee82c93314c4b33fe037174ac02f13e"} Jan 26 11:40:05 crc kubenswrapper[4867]: I0126 11:40:05.321589 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fe61ad3e-159e-42e1-87cb-a548d9de9b27-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "fe61ad3e-159e-42e1-87cb-a548d9de9b27" (UID: "fe61ad3e-159e-42e1-87cb-a548d9de9b27"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 11:40:05 crc kubenswrapper[4867]: I0126 11:40:05.325769 4867 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fe61ad3e-159e-42e1-87cb-a548d9de9b27-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 11:40:05 crc kubenswrapper[4867]: I0126 11:40:05.325828 4867 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fe61ad3e-159e-42e1-87cb-a548d9de9b27-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 11:40:05 crc kubenswrapper[4867]: I0126 11:40:05.325843 4867 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/fe61ad3e-159e-42e1-87cb-a548d9de9b27-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 26 11:40:05 crc kubenswrapper[4867]: I0126 11:40:05.325856 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jxkx5\" (UniqueName: \"kubernetes.io/projected/fe61ad3e-159e-42e1-87cb-a548d9de9b27-kube-api-access-jxkx5\") on node \"crc\" DevicePath \"\"" Jan 26 11:40:05 crc kubenswrapper[4867]: I0126 11:40:05.346374 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fe61ad3e-159e-42e1-87cb-a548d9de9b27-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "fe61ad3e-159e-42e1-87cb-a548d9de9b27" (UID: "fe61ad3e-159e-42e1-87cb-a548d9de9b27"). InnerVolumeSpecName "public-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 11:40:05 crc kubenswrapper[4867]: I0126 11:40:05.354621 4867 scope.go:117] "RemoveContainer" containerID="1798544d5e103e595d2049bc72328e1a1001eee9d9c1df72ec31b922460597e2" Jan 26 11:40:05 crc kubenswrapper[4867]: I0126 11:40:05.400929 4867 scope.go:117] "RemoveContainer" containerID="acb383a2ed707fa3b6a4f45ac94faeff129df90c84908224b12173bbcf25b4cb" Jan 26 11:40:05 crc kubenswrapper[4867]: E0126 11:40:05.401429 4867 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"acb383a2ed707fa3b6a4f45ac94faeff129df90c84908224b12173bbcf25b4cb\": container with ID starting with acb383a2ed707fa3b6a4f45ac94faeff129df90c84908224b12173bbcf25b4cb not found: ID does not exist" containerID="acb383a2ed707fa3b6a4f45ac94faeff129df90c84908224b12173bbcf25b4cb" Jan 26 11:40:05 crc kubenswrapper[4867]: I0126 11:40:05.401490 4867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"acb383a2ed707fa3b6a4f45ac94faeff129df90c84908224b12173bbcf25b4cb"} err="failed to get container status \"acb383a2ed707fa3b6a4f45ac94faeff129df90c84908224b12173bbcf25b4cb\": rpc error: code = NotFound desc = could not find container \"acb383a2ed707fa3b6a4f45ac94faeff129df90c84908224b12173bbcf25b4cb\": container with ID starting with acb383a2ed707fa3b6a4f45ac94faeff129df90c84908224b12173bbcf25b4cb not found: ID does not exist" Jan 26 11:40:05 crc kubenswrapper[4867]: I0126 11:40:05.401523 4867 scope.go:117] "RemoveContainer" containerID="1798544d5e103e595d2049bc72328e1a1001eee9d9c1df72ec31b922460597e2" Jan 26 11:40:05 crc kubenswrapper[4867]: E0126 11:40:05.406251 4867 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1798544d5e103e595d2049bc72328e1a1001eee9d9c1df72ec31b922460597e2\": container with ID starting with 
1798544d5e103e595d2049bc72328e1a1001eee9d9c1df72ec31b922460597e2 not found: ID does not exist" containerID="1798544d5e103e595d2049bc72328e1a1001eee9d9c1df72ec31b922460597e2" Jan 26 11:40:05 crc kubenswrapper[4867]: I0126 11:40:05.406486 4867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1798544d5e103e595d2049bc72328e1a1001eee9d9c1df72ec31b922460597e2"} err="failed to get container status \"1798544d5e103e595d2049bc72328e1a1001eee9d9c1df72ec31b922460597e2\": rpc error: code = NotFound desc = could not find container \"1798544d5e103e595d2049bc72328e1a1001eee9d9c1df72ec31b922460597e2\": container with ID starting with 1798544d5e103e595d2049bc72328e1a1001eee9d9c1df72ec31b922460597e2 not found: ID does not exist" Jan 26 11:40:05 crc kubenswrapper[4867]: I0126 11:40:05.406574 4867 scope.go:117] "RemoveContainer" containerID="acb383a2ed707fa3b6a4f45ac94faeff129df90c84908224b12173bbcf25b4cb" Jan 26 11:40:05 crc kubenswrapper[4867]: I0126 11:40:05.407177 4867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"acb383a2ed707fa3b6a4f45ac94faeff129df90c84908224b12173bbcf25b4cb"} err="failed to get container status \"acb383a2ed707fa3b6a4f45ac94faeff129df90c84908224b12173bbcf25b4cb\": rpc error: code = NotFound desc = could not find container \"acb383a2ed707fa3b6a4f45ac94faeff129df90c84908224b12173bbcf25b4cb\": container with ID starting with acb383a2ed707fa3b6a4f45ac94faeff129df90c84908224b12173bbcf25b4cb not found: ID does not exist" Jan 26 11:40:05 crc kubenswrapper[4867]: I0126 11:40:05.407245 4867 scope.go:117] "RemoveContainer" containerID="1798544d5e103e595d2049bc72328e1a1001eee9d9c1df72ec31b922460597e2" Jan 26 11:40:05 crc kubenswrapper[4867]: I0126 11:40:05.407664 4867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1798544d5e103e595d2049bc72328e1a1001eee9d9c1df72ec31b922460597e2"} err="failed to get container status 
\"1798544d5e103e595d2049bc72328e1a1001eee9d9c1df72ec31b922460597e2\": rpc error: code = NotFound desc = could not find container \"1798544d5e103e595d2049bc72328e1a1001eee9d9c1df72ec31b922460597e2\": container with ID starting with 1798544d5e103e595d2049bc72328e1a1001eee9d9c1df72ec31b922460597e2 not found: ID does not exist" Jan 26 11:40:05 crc kubenswrapper[4867]: I0126 11:40:05.428083 4867 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/fe61ad3e-159e-42e1-87cb-a548d9de9b27-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 26 11:40:05 crc kubenswrapper[4867]: I0126 11:40:05.670216 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 26 11:40:05 crc kubenswrapper[4867]: I0126 11:40:05.680272 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Jan 26 11:40:05 crc kubenswrapper[4867]: I0126 11:40:05.701652 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Jan 26 11:40:05 crc kubenswrapper[4867]: E0126 11:40:05.702186 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c4a7d7f8-2972-4e37-a039-f6cfd3d0fbaa" containerName="nova-manage" Jan 26 11:40:05 crc kubenswrapper[4867]: I0126 11:40:05.702206 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="c4a7d7f8-2972-4e37-a039-f6cfd3d0fbaa" containerName="nova-manage" Jan 26 11:40:05 crc kubenswrapper[4867]: E0126 11:40:05.702249 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fe61ad3e-159e-42e1-87cb-a548d9de9b27" containerName="nova-api-log" Jan 26 11:40:05 crc kubenswrapper[4867]: I0126 11:40:05.702259 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="fe61ad3e-159e-42e1-87cb-a548d9de9b27" containerName="nova-api-log" Jan 26 11:40:05 crc kubenswrapper[4867]: E0126 11:40:05.702280 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fe61ad3e-159e-42e1-87cb-a548d9de9b27" 
containerName="nova-api-api" Jan 26 11:40:05 crc kubenswrapper[4867]: I0126 11:40:05.702288 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="fe61ad3e-159e-42e1-87cb-a548d9de9b27" containerName="nova-api-api" Jan 26 11:40:05 crc kubenswrapper[4867]: E0126 11:40:05.702305 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c33aec01-bab9-4160-9f96-a290d8c67e54" containerName="init" Jan 26 11:40:05 crc kubenswrapper[4867]: I0126 11:40:05.702312 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="c33aec01-bab9-4160-9f96-a290d8c67e54" containerName="init" Jan 26 11:40:05 crc kubenswrapper[4867]: E0126 11:40:05.702325 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c33aec01-bab9-4160-9f96-a290d8c67e54" containerName="dnsmasq-dns" Jan 26 11:40:05 crc kubenswrapper[4867]: I0126 11:40:05.702332 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="c33aec01-bab9-4160-9f96-a290d8c67e54" containerName="dnsmasq-dns" Jan 26 11:40:05 crc kubenswrapper[4867]: I0126 11:40:05.702545 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="c4a7d7f8-2972-4e37-a039-f6cfd3d0fbaa" containerName="nova-manage" Jan 26 11:40:05 crc kubenswrapper[4867]: I0126 11:40:05.702558 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="fe61ad3e-159e-42e1-87cb-a548d9de9b27" containerName="nova-api-api" Jan 26 11:40:05 crc kubenswrapper[4867]: I0126 11:40:05.702575 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="c33aec01-bab9-4160-9f96-a290d8c67e54" containerName="dnsmasq-dns" Jan 26 11:40:05 crc kubenswrapper[4867]: I0126 11:40:05.702587 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="fe61ad3e-159e-42e1-87cb-a548d9de9b27" containerName="nova-api-log" Jan 26 11:40:05 crc kubenswrapper[4867]: I0126 11:40:05.703795 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 26 11:40:05 crc kubenswrapper[4867]: I0126 11:40:05.705420 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Jan 26 11:40:05 crc kubenswrapper[4867]: I0126 11:40:05.707181 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Jan 26 11:40:05 crc kubenswrapper[4867]: I0126 11:40:05.708178 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Jan 26 11:40:05 crc kubenswrapper[4867]: I0126 11:40:05.713932 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 26 11:40:05 crc kubenswrapper[4867]: I0126 11:40:05.783070 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/738787a7-6f5f-48f1-8c43-ce02e88eb732-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"738787a7-6f5f-48f1-8c43-ce02e88eb732\") " pod="openstack/nova-api-0" Jan 26 11:40:05 crc kubenswrapper[4867]: I0126 11:40:05.783237 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/738787a7-6f5f-48f1-8c43-ce02e88eb732-internal-tls-certs\") pod \"nova-api-0\" (UID: \"738787a7-6f5f-48f1-8c43-ce02e88eb732\") " pod="openstack/nova-api-0" Jan 26 11:40:05 crc kubenswrapper[4867]: I0126 11:40:05.783293 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/738787a7-6f5f-48f1-8c43-ce02e88eb732-public-tls-certs\") pod \"nova-api-0\" (UID: \"738787a7-6f5f-48f1-8c43-ce02e88eb732\") " pod="openstack/nova-api-0" Jan 26 11:40:05 crc kubenswrapper[4867]: I0126 11:40:05.783345 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-26glr\" (UniqueName: \"kubernetes.io/projected/738787a7-6f5f-48f1-8c43-ce02e88eb732-kube-api-access-26glr\") pod \"nova-api-0\" (UID: \"738787a7-6f5f-48f1-8c43-ce02e88eb732\") " pod="openstack/nova-api-0" Jan 26 11:40:05 crc kubenswrapper[4867]: I0126 11:40:05.783420 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/738787a7-6f5f-48f1-8c43-ce02e88eb732-logs\") pod \"nova-api-0\" (UID: \"738787a7-6f5f-48f1-8c43-ce02e88eb732\") " pod="openstack/nova-api-0" Jan 26 11:40:05 crc kubenswrapper[4867]: I0126 11:40:05.783473 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/738787a7-6f5f-48f1-8c43-ce02e88eb732-config-data\") pod \"nova-api-0\" (UID: \"738787a7-6f5f-48f1-8c43-ce02e88eb732\") " pod="openstack/nova-api-0" Jan 26 11:40:05 crc kubenswrapper[4867]: I0126 11:40:05.886484 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/738787a7-6f5f-48f1-8c43-ce02e88eb732-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"738787a7-6f5f-48f1-8c43-ce02e88eb732\") " pod="openstack/nova-api-0" Jan 26 11:40:05 crc kubenswrapper[4867]: I0126 11:40:05.886554 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/738787a7-6f5f-48f1-8c43-ce02e88eb732-internal-tls-certs\") pod \"nova-api-0\" (UID: \"738787a7-6f5f-48f1-8c43-ce02e88eb732\") " pod="openstack/nova-api-0" Jan 26 11:40:05 crc kubenswrapper[4867]: I0126 11:40:05.886573 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/738787a7-6f5f-48f1-8c43-ce02e88eb732-public-tls-certs\") pod \"nova-api-0\" (UID: \"738787a7-6f5f-48f1-8c43-ce02e88eb732\") " 
pod="openstack/nova-api-0" Jan 26 11:40:05 crc kubenswrapper[4867]: I0126 11:40:05.886603 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-26glr\" (UniqueName: \"kubernetes.io/projected/738787a7-6f5f-48f1-8c43-ce02e88eb732-kube-api-access-26glr\") pod \"nova-api-0\" (UID: \"738787a7-6f5f-48f1-8c43-ce02e88eb732\") " pod="openstack/nova-api-0" Jan 26 11:40:05 crc kubenswrapper[4867]: I0126 11:40:05.886633 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/738787a7-6f5f-48f1-8c43-ce02e88eb732-logs\") pod \"nova-api-0\" (UID: \"738787a7-6f5f-48f1-8c43-ce02e88eb732\") " pod="openstack/nova-api-0" Jan 26 11:40:05 crc kubenswrapper[4867]: I0126 11:40:05.886657 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/738787a7-6f5f-48f1-8c43-ce02e88eb732-config-data\") pod \"nova-api-0\" (UID: \"738787a7-6f5f-48f1-8c43-ce02e88eb732\") " pod="openstack/nova-api-0" Jan 26 11:40:05 crc kubenswrapper[4867]: I0126 11:40:05.887734 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/738787a7-6f5f-48f1-8c43-ce02e88eb732-logs\") pod \"nova-api-0\" (UID: \"738787a7-6f5f-48f1-8c43-ce02e88eb732\") " pod="openstack/nova-api-0" Jan 26 11:40:05 crc kubenswrapper[4867]: I0126 11:40:05.893069 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/738787a7-6f5f-48f1-8c43-ce02e88eb732-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"738787a7-6f5f-48f1-8c43-ce02e88eb732\") " pod="openstack/nova-api-0" Jan 26 11:40:05 crc kubenswrapper[4867]: I0126 11:40:05.893110 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/738787a7-6f5f-48f1-8c43-ce02e88eb732-config-data\") pod 
\"nova-api-0\" (UID: \"738787a7-6f5f-48f1-8c43-ce02e88eb732\") " pod="openstack/nova-api-0" Jan 26 11:40:05 crc kubenswrapper[4867]: I0126 11:40:05.893314 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/738787a7-6f5f-48f1-8c43-ce02e88eb732-public-tls-certs\") pod \"nova-api-0\" (UID: \"738787a7-6f5f-48f1-8c43-ce02e88eb732\") " pod="openstack/nova-api-0" Jan 26 11:40:05 crc kubenswrapper[4867]: I0126 11:40:05.898975 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/738787a7-6f5f-48f1-8c43-ce02e88eb732-internal-tls-certs\") pod \"nova-api-0\" (UID: \"738787a7-6f5f-48f1-8c43-ce02e88eb732\") " pod="openstack/nova-api-0" Jan 26 11:40:05 crc kubenswrapper[4867]: I0126 11:40:05.902654 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-26glr\" (UniqueName: \"kubernetes.io/projected/738787a7-6f5f-48f1-8c43-ce02e88eb732-kube-api-access-26glr\") pod \"nova-api-0\" (UID: \"738787a7-6f5f-48f1-8c43-ce02e88eb732\") " pod="openstack/nova-api-0" Jan 26 11:40:06 crc kubenswrapper[4867]: I0126 11:40:06.024526 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 26 11:40:06 crc kubenswrapper[4867]: I0126 11:40:06.293763 4867 patch_prober.go:28] interesting pod/machine-config-daemon-g6cth container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 11:40:06 crc kubenswrapper[4867]: I0126 11:40:06.294086 4867 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-g6cth" podUID="115cad9f-057f-4e63-b408-8fa7a358a191" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 11:40:06 crc kubenswrapper[4867]: I0126 11:40:06.314852 4867 generic.go:334] "Generic (PLEG): container finished" podID="ab1d2459-a54a-4e06-a4e7-ff675b803bca" containerID="c3a8d188db5c5d65fc2dab522bda7fbf3015f1a39c0d63a844d0586cc2b11253" exitCode=0 Jan 26 11:40:06 crc kubenswrapper[4867]: I0126 11:40:06.314905 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"ab1d2459-a54a-4e06-a4e7-ff675b803bca","Type":"ContainerDied","Data":"c3a8d188db5c5d65fc2dab522bda7fbf3015f1a39c0d63a844d0586cc2b11253"} Jan 26 11:40:06 crc kubenswrapper[4867]: I0126 11:40:06.470897 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 26 11:40:06 crc kubenswrapper[4867]: W0126 11:40:06.475432 4867 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod738787a7_6f5f_48f1_8c43_ce02e88eb732.slice/crio-10b9789f3edc9dc54f129b8791f6b0a79a36e71ea4984ff0328b51093deb8470 WatchSource:0}: Error finding container 10b9789f3edc9dc54f129b8791f6b0a79a36e71ea4984ff0328b51093deb8470: Status 404 returned error can't find the container with id 
10b9789f3edc9dc54f129b8791f6b0a79a36e71ea4984ff0328b51093deb8470 Jan 26 11:40:06 crc kubenswrapper[4867]: I0126 11:40:06.490434 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 26 11:40:06 crc kubenswrapper[4867]: I0126 11:40:06.503126 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ab1d2459-a54a-4e06-a4e7-ff675b803bca-config-data\") pod \"ab1d2459-a54a-4e06-a4e7-ff675b803bca\" (UID: \"ab1d2459-a54a-4e06-a4e7-ff675b803bca\") " Jan 26 11:40:06 crc kubenswrapper[4867]: I0126 11:40:06.503321 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ab1d2459-a54a-4e06-a4e7-ff675b803bca-combined-ca-bundle\") pod \"ab1d2459-a54a-4e06-a4e7-ff675b803bca\" (UID: \"ab1d2459-a54a-4e06-a4e7-ff675b803bca\") " Jan 26 11:40:06 crc kubenswrapper[4867]: I0126 11:40:06.503377 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8fwl6\" (UniqueName: \"kubernetes.io/projected/ab1d2459-a54a-4e06-a4e7-ff675b803bca-kube-api-access-8fwl6\") pod \"ab1d2459-a54a-4e06-a4e7-ff675b803bca\" (UID: \"ab1d2459-a54a-4e06-a4e7-ff675b803bca\") " Jan 26 11:40:06 crc kubenswrapper[4867]: I0126 11:40:06.512839 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ab1d2459-a54a-4e06-a4e7-ff675b803bca-kube-api-access-8fwl6" (OuterVolumeSpecName: "kube-api-access-8fwl6") pod "ab1d2459-a54a-4e06-a4e7-ff675b803bca" (UID: "ab1d2459-a54a-4e06-a4e7-ff675b803bca"). InnerVolumeSpecName "kube-api-access-8fwl6". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 11:40:06 crc kubenswrapper[4867]: I0126 11:40:06.545124 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ab1d2459-a54a-4e06-a4e7-ff675b803bca-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ab1d2459-a54a-4e06-a4e7-ff675b803bca" (UID: "ab1d2459-a54a-4e06-a4e7-ff675b803bca"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 11:40:06 crc kubenswrapper[4867]: I0126 11:40:06.558018 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ab1d2459-a54a-4e06-a4e7-ff675b803bca-config-data" (OuterVolumeSpecName: "config-data") pod "ab1d2459-a54a-4e06-a4e7-ff675b803bca" (UID: "ab1d2459-a54a-4e06-a4e7-ff675b803bca"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 11:40:06 crc kubenswrapper[4867]: I0126 11:40:06.588587 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fe61ad3e-159e-42e1-87cb-a548d9de9b27" path="/var/lib/kubelet/pods/fe61ad3e-159e-42e1-87cb-a548d9de9b27/volumes" Jan 26 11:40:06 crc kubenswrapper[4867]: I0126 11:40:06.607482 4867 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ab1d2459-a54a-4e06-a4e7-ff675b803bca-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 11:40:06 crc kubenswrapper[4867]: I0126 11:40:06.607540 4867 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ab1d2459-a54a-4e06-a4e7-ff675b803bca-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 11:40:06 crc kubenswrapper[4867]: I0126 11:40:06.607554 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8fwl6\" (UniqueName: \"kubernetes.io/projected/ab1d2459-a54a-4e06-a4e7-ff675b803bca-kube-api-access-8fwl6\") on node \"crc\" DevicePath \"\"" Jan 
26 11:40:07 crc kubenswrapper[4867]: I0126 11:40:07.336727 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"738787a7-6f5f-48f1-8c43-ce02e88eb732","Type":"ContainerStarted","Data":"34f1d21f67963f29297a737ab1dbb3a135220cc5d67c2ddb27cd51d0cbd6f084"} Jan 26 11:40:07 crc kubenswrapper[4867]: I0126 11:40:07.338173 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"738787a7-6f5f-48f1-8c43-ce02e88eb732","Type":"ContainerStarted","Data":"c8b922836f0d1b92c4c000bf877a54178ca368f38ab310fc3e8b14ffa714800d"} Jan 26 11:40:07 crc kubenswrapper[4867]: I0126 11:40:07.338262 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"738787a7-6f5f-48f1-8c43-ce02e88eb732","Type":"ContainerStarted","Data":"10b9789f3edc9dc54f129b8791f6b0a79a36e71ea4984ff0328b51093deb8470"} Jan 26 11:40:07 crc kubenswrapper[4867]: I0126 11:40:07.341246 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"ab1d2459-a54a-4e06-a4e7-ff675b803bca","Type":"ContainerDied","Data":"705c861d9f414811a3249df9540ff8151ec558745e0c477c5f08eee4edba3a95"} Jan 26 11:40:07 crc kubenswrapper[4867]: I0126 11:40:07.341319 4867 scope.go:117] "RemoveContainer" containerID="c3a8d188db5c5d65fc2dab522bda7fbf3015f1a39c0d63a844d0586cc2b11253" Jan 26 11:40:07 crc kubenswrapper[4867]: I0126 11:40:07.341522 4867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Jan 26 11:40:07 crc kubenswrapper[4867]: I0126 11:40:07.367394 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.367375004 podStartE2EDuration="2.367375004s" podCreationTimestamp="2026-01-26 11:40:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 11:40:07.364773102 +0000 UTC m=+1357.063348012" watchObservedRunningTime="2026-01-26 11:40:07.367375004 +0000 UTC m=+1357.065949914" Jan 26 11:40:07 crc kubenswrapper[4867]: I0126 11:40:07.400062 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Jan 26 11:40:07 crc kubenswrapper[4867]: I0126 11:40:07.410130 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Jan 26 11:40:07 crc kubenswrapper[4867]: I0126 11:40:07.425515 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Jan 26 11:40:07 crc kubenswrapper[4867]: E0126 11:40:07.426147 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ab1d2459-a54a-4e06-a4e7-ff675b803bca" containerName="nova-scheduler-scheduler" Jan 26 11:40:07 crc kubenswrapper[4867]: I0126 11:40:07.426174 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="ab1d2459-a54a-4e06-a4e7-ff675b803bca" containerName="nova-scheduler-scheduler" Jan 26 11:40:07 crc kubenswrapper[4867]: I0126 11:40:07.426934 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="ab1d2459-a54a-4e06-a4e7-ff675b803bca" containerName="nova-scheduler-scheduler" Jan 26 11:40:07 crc kubenswrapper[4867]: I0126 11:40:07.427759 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Jan 26 11:40:07 crc kubenswrapper[4867]: I0126 11:40:07.430620 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Jan 26 11:40:07 crc kubenswrapper[4867]: I0126 11:40:07.438722 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 26 11:40:07 crc kubenswrapper[4867]: I0126 11:40:07.533495 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nncq6\" (UniqueName: \"kubernetes.io/projected/b092f3d9-f7f5-49d0-98d6-f0e7aff2d64a-kube-api-access-nncq6\") pod \"nova-scheduler-0\" (UID: \"b092f3d9-f7f5-49d0-98d6-f0e7aff2d64a\") " pod="openstack/nova-scheduler-0" Jan 26 11:40:07 crc kubenswrapper[4867]: I0126 11:40:07.533561 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b092f3d9-f7f5-49d0-98d6-f0e7aff2d64a-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"b092f3d9-f7f5-49d0-98d6-f0e7aff2d64a\") " pod="openstack/nova-scheduler-0" Jan 26 11:40:07 crc kubenswrapper[4867]: I0126 11:40:07.533649 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b092f3d9-f7f5-49d0-98d6-f0e7aff2d64a-config-data\") pod \"nova-scheduler-0\" (UID: \"b092f3d9-f7f5-49d0-98d6-f0e7aff2d64a\") " pod="openstack/nova-scheduler-0" Jan 26 11:40:07 crc kubenswrapper[4867]: I0126 11:40:07.637672 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nncq6\" (UniqueName: \"kubernetes.io/projected/b092f3d9-f7f5-49d0-98d6-f0e7aff2d64a-kube-api-access-nncq6\") pod \"nova-scheduler-0\" (UID: \"b092f3d9-f7f5-49d0-98d6-f0e7aff2d64a\") " pod="openstack/nova-scheduler-0" Jan 26 11:40:07 crc kubenswrapper[4867]: I0126 11:40:07.637724 4867 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b092f3d9-f7f5-49d0-98d6-f0e7aff2d64a-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"b092f3d9-f7f5-49d0-98d6-f0e7aff2d64a\") " pod="openstack/nova-scheduler-0" Jan 26 11:40:07 crc kubenswrapper[4867]: I0126 11:40:07.637793 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b092f3d9-f7f5-49d0-98d6-f0e7aff2d64a-config-data\") pod \"nova-scheduler-0\" (UID: \"b092f3d9-f7f5-49d0-98d6-f0e7aff2d64a\") " pod="openstack/nova-scheduler-0" Jan 26 11:40:07 crc kubenswrapper[4867]: I0126 11:40:07.642646 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b092f3d9-f7f5-49d0-98d6-f0e7aff2d64a-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"b092f3d9-f7f5-49d0-98d6-f0e7aff2d64a\") " pod="openstack/nova-scheduler-0" Jan 26 11:40:07 crc kubenswrapper[4867]: I0126 11:40:07.644889 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b092f3d9-f7f5-49d0-98d6-f0e7aff2d64a-config-data\") pod \"nova-scheduler-0\" (UID: \"b092f3d9-f7f5-49d0-98d6-f0e7aff2d64a\") " pod="openstack/nova-scheduler-0" Jan 26 11:40:07 crc kubenswrapper[4867]: I0126 11:40:07.656811 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nncq6\" (UniqueName: \"kubernetes.io/projected/b092f3d9-f7f5-49d0-98d6-f0e7aff2d64a-kube-api-access-nncq6\") pod \"nova-scheduler-0\" (UID: \"b092f3d9-f7f5-49d0-98d6-f0e7aff2d64a\") " pod="openstack/nova-scheduler-0" Jan 26 11:40:07 crc kubenswrapper[4867]: I0126 11:40:07.668496 4867 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="5e4436b8-df58-4b0b-9713-75976c443930" containerName="nova-metadata-log" probeResult="failure" 
output="Get \"https://10.217.0.201:8775/\": read tcp 10.217.0.2:51762->10.217.0.201:8775: read: connection reset by peer" Jan 26 11:40:07 crc kubenswrapper[4867]: I0126 11:40:07.668485 4867 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="5e4436b8-df58-4b0b-9713-75976c443930" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.201:8775/\": read tcp 10.217.0.2:51748->10.217.0.201:8775: read: connection reset by peer" Jan 26 11:40:07 crc kubenswrapper[4867]: I0126 11:40:07.750506 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 26 11:40:08 crc kubenswrapper[4867]: I0126 11:40:08.092083 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 26 11:40:08 crc kubenswrapper[4867]: I0126 11:40:08.260348 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 26 11:40:08 crc kubenswrapper[4867]: I0126 11:40:08.261520 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/5e4436b8-df58-4b0b-9713-75976c443930-nova-metadata-tls-certs\") pod \"5e4436b8-df58-4b0b-9713-75976c443930\" (UID: \"5e4436b8-df58-4b0b-9713-75976c443930\") " Jan 26 11:40:08 crc kubenswrapper[4867]: I0126 11:40:08.261769 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5e4436b8-df58-4b0b-9713-75976c443930-logs\") pod \"5e4436b8-df58-4b0b-9713-75976c443930\" (UID: \"5e4436b8-df58-4b0b-9713-75976c443930\") " Jan 26 11:40:08 crc kubenswrapper[4867]: I0126 11:40:08.261940 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b2vd9\" (UniqueName: \"kubernetes.io/projected/5e4436b8-df58-4b0b-9713-75976c443930-kube-api-access-b2vd9\") pod 
\"5e4436b8-df58-4b0b-9713-75976c443930\" (UID: \"5e4436b8-df58-4b0b-9713-75976c443930\") " Jan 26 11:40:08 crc kubenswrapper[4867]: I0126 11:40:08.262109 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5e4436b8-df58-4b0b-9713-75976c443930-config-data\") pod \"5e4436b8-df58-4b0b-9713-75976c443930\" (UID: \"5e4436b8-df58-4b0b-9713-75976c443930\") " Jan 26 11:40:08 crc kubenswrapper[4867]: I0126 11:40:08.262320 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5e4436b8-df58-4b0b-9713-75976c443930-combined-ca-bundle\") pod \"5e4436b8-df58-4b0b-9713-75976c443930\" (UID: \"5e4436b8-df58-4b0b-9713-75976c443930\") " Jan 26 11:40:08 crc kubenswrapper[4867]: I0126 11:40:08.262503 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5e4436b8-df58-4b0b-9713-75976c443930-logs" (OuterVolumeSpecName: "logs") pod "5e4436b8-df58-4b0b-9713-75976c443930" (UID: "5e4436b8-df58-4b0b-9713-75976c443930"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 11:40:08 crc kubenswrapper[4867]: I0126 11:40:08.263107 4867 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5e4436b8-df58-4b0b-9713-75976c443930-logs\") on node \"crc\" DevicePath \"\"" Jan 26 11:40:08 crc kubenswrapper[4867]: I0126 11:40:08.267294 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5e4436b8-df58-4b0b-9713-75976c443930-kube-api-access-b2vd9" (OuterVolumeSpecName: "kube-api-access-b2vd9") pod "5e4436b8-df58-4b0b-9713-75976c443930" (UID: "5e4436b8-df58-4b0b-9713-75976c443930"). InnerVolumeSpecName "kube-api-access-b2vd9". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 11:40:08 crc kubenswrapper[4867]: I0126 11:40:08.291503 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5e4436b8-df58-4b0b-9713-75976c443930-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "5e4436b8-df58-4b0b-9713-75976c443930" (UID: "5e4436b8-df58-4b0b-9713-75976c443930"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 11:40:08 crc kubenswrapper[4867]: I0126 11:40:08.301594 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5e4436b8-df58-4b0b-9713-75976c443930-config-data" (OuterVolumeSpecName: "config-data") pod "5e4436b8-df58-4b0b-9713-75976c443930" (UID: "5e4436b8-df58-4b0b-9713-75976c443930"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 11:40:08 crc kubenswrapper[4867]: I0126 11:40:08.319024 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5e4436b8-df58-4b0b-9713-75976c443930-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "5e4436b8-df58-4b0b-9713-75976c443930" (UID: "5e4436b8-df58-4b0b-9713-75976c443930"). InnerVolumeSpecName "nova-metadata-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 11:40:08 crc kubenswrapper[4867]: I0126 11:40:08.354908 4867 generic.go:334] "Generic (PLEG): container finished" podID="5e4436b8-df58-4b0b-9713-75976c443930" containerID="e96574ea6be1a924b821331b274e4053554cf5b00b23b4254b451859b33e7fdb" exitCode=0 Jan 26 11:40:08 crc kubenswrapper[4867]: I0126 11:40:08.354984 4867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 26 11:40:08 crc kubenswrapper[4867]: I0126 11:40:08.354991 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"5e4436b8-df58-4b0b-9713-75976c443930","Type":"ContainerDied","Data":"e96574ea6be1a924b821331b274e4053554cf5b00b23b4254b451859b33e7fdb"} Jan 26 11:40:08 crc kubenswrapper[4867]: I0126 11:40:08.355101 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"5e4436b8-df58-4b0b-9713-75976c443930","Type":"ContainerDied","Data":"f5e96adbfdc91fee4eaae52f148789d573abdea3c6dae621c5097cf3ac3ad514"} Jan 26 11:40:08 crc kubenswrapper[4867]: I0126 11:40:08.355123 4867 scope.go:117] "RemoveContainer" containerID="e96574ea6be1a924b821331b274e4053554cf5b00b23b4254b451859b33e7fdb" Jan 26 11:40:08 crc kubenswrapper[4867]: I0126 11:40:08.357111 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"b092f3d9-f7f5-49d0-98d6-f0e7aff2d64a","Type":"ContainerStarted","Data":"97779c35ec8722762ba55122debc3720e640ee15709a6316f65df01266ba614a"} Jan 26 11:40:08 crc kubenswrapper[4867]: I0126 11:40:08.364867 4867 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5e4436b8-df58-4b0b-9713-75976c443930-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 11:40:08 crc kubenswrapper[4867]: I0126 11:40:08.364892 4867 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5e4436b8-df58-4b0b-9713-75976c443930-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 11:40:08 crc kubenswrapper[4867]: I0126 11:40:08.364902 4867 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/5e4436b8-df58-4b0b-9713-75976c443930-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 26 11:40:08 crc kubenswrapper[4867]: I0126 
11:40:08.364911 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b2vd9\" (UniqueName: \"kubernetes.io/projected/5e4436b8-df58-4b0b-9713-75976c443930-kube-api-access-b2vd9\") on node \"crc\" DevicePath \"\"" Jan 26 11:40:08 crc kubenswrapper[4867]: I0126 11:40:08.387691 4867 scope.go:117] "RemoveContainer" containerID="c6d747726bdb20530b3089ace5657ccb8ee82c93314c4b33fe037174ac02f13e" Jan 26 11:40:08 crc kubenswrapper[4867]: I0126 11:40:08.409487 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 26 11:40:08 crc kubenswrapper[4867]: I0126 11:40:08.424532 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Jan 26 11:40:08 crc kubenswrapper[4867]: I0126 11:40:08.427153 4867 scope.go:117] "RemoveContainer" containerID="e96574ea6be1a924b821331b274e4053554cf5b00b23b4254b451859b33e7fdb" Jan 26 11:40:08 crc kubenswrapper[4867]: E0126 11:40:08.428146 4867 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e96574ea6be1a924b821331b274e4053554cf5b00b23b4254b451859b33e7fdb\": container with ID starting with e96574ea6be1a924b821331b274e4053554cf5b00b23b4254b451859b33e7fdb not found: ID does not exist" containerID="e96574ea6be1a924b821331b274e4053554cf5b00b23b4254b451859b33e7fdb" Jan 26 11:40:08 crc kubenswrapper[4867]: I0126 11:40:08.428183 4867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e96574ea6be1a924b821331b274e4053554cf5b00b23b4254b451859b33e7fdb"} err="failed to get container status \"e96574ea6be1a924b821331b274e4053554cf5b00b23b4254b451859b33e7fdb\": rpc error: code = NotFound desc = could not find container \"e96574ea6be1a924b821331b274e4053554cf5b00b23b4254b451859b33e7fdb\": container with ID starting with e96574ea6be1a924b821331b274e4053554cf5b00b23b4254b451859b33e7fdb not found: ID does not exist" Jan 26 11:40:08 crc kubenswrapper[4867]: I0126 
11:40:08.428209 4867 scope.go:117] "RemoveContainer" containerID="c6d747726bdb20530b3089ace5657ccb8ee82c93314c4b33fe037174ac02f13e" Jan 26 11:40:08 crc kubenswrapper[4867]: E0126 11:40:08.428539 4867 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c6d747726bdb20530b3089ace5657ccb8ee82c93314c4b33fe037174ac02f13e\": container with ID starting with c6d747726bdb20530b3089ace5657ccb8ee82c93314c4b33fe037174ac02f13e not found: ID does not exist" containerID="c6d747726bdb20530b3089ace5657ccb8ee82c93314c4b33fe037174ac02f13e" Jan 26 11:40:08 crc kubenswrapper[4867]: I0126 11:40:08.428569 4867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c6d747726bdb20530b3089ace5657ccb8ee82c93314c4b33fe037174ac02f13e"} err="failed to get container status \"c6d747726bdb20530b3089ace5657ccb8ee82c93314c4b33fe037174ac02f13e\": rpc error: code = NotFound desc = could not find container \"c6d747726bdb20530b3089ace5657ccb8ee82c93314c4b33fe037174ac02f13e\": container with ID starting with c6d747726bdb20530b3089ace5657ccb8ee82c93314c4b33fe037174ac02f13e not found: ID does not exist" Jan 26 11:40:08 crc kubenswrapper[4867]: I0126 11:40:08.436778 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Jan 26 11:40:08 crc kubenswrapper[4867]: E0126 11:40:08.437237 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5e4436b8-df58-4b0b-9713-75976c443930" containerName="nova-metadata-log" Jan 26 11:40:08 crc kubenswrapper[4867]: I0126 11:40:08.437265 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="5e4436b8-df58-4b0b-9713-75976c443930" containerName="nova-metadata-log" Jan 26 11:40:08 crc kubenswrapper[4867]: E0126 11:40:08.437305 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5e4436b8-df58-4b0b-9713-75976c443930" containerName="nova-metadata-metadata" Jan 26 11:40:08 crc kubenswrapper[4867]: I0126 
11:40:08.437321 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="5e4436b8-df58-4b0b-9713-75976c443930" containerName="nova-metadata-metadata" Jan 26 11:40:08 crc kubenswrapper[4867]: I0126 11:40:08.437541 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="5e4436b8-df58-4b0b-9713-75976c443930" containerName="nova-metadata-metadata" Jan 26 11:40:08 crc kubenswrapper[4867]: I0126 11:40:08.437574 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="5e4436b8-df58-4b0b-9713-75976c443930" containerName="nova-metadata-log" Jan 26 11:40:08 crc kubenswrapper[4867]: I0126 11:40:08.438808 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 26 11:40:08 crc kubenswrapper[4867]: I0126 11:40:08.441116 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Jan 26 11:40:08 crc kubenswrapper[4867]: I0126 11:40:08.442360 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Jan 26 11:40:08 crc kubenswrapper[4867]: I0126 11:40:08.446449 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 26 11:40:08 crc kubenswrapper[4867]: I0126 11:40:08.567659 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/5a86e7e7-648c-4e32-8f5d-3fd90c24f1b7-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"5a86e7e7-648c-4e32-8f5d-3fd90c24f1b7\") " pod="openstack/nova-metadata-0" Jan 26 11:40:08 crc kubenswrapper[4867]: I0126 11:40:08.568060 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5a86e7e7-648c-4e32-8f5d-3fd90c24f1b7-config-data\") pod \"nova-metadata-0\" (UID: \"5a86e7e7-648c-4e32-8f5d-3fd90c24f1b7\") " 
pod="openstack/nova-metadata-0" Jan 26 11:40:08 crc kubenswrapper[4867]: I0126 11:40:08.568161 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zm9kh\" (UniqueName: \"kubernetes.io/projected/5a86e7e7-648c-4e32-8f5d-3fd90c24f1b7-kube-api-access-zm9kh\") pod \"nova-metadata-0\" (UID: \"5a86e7e7-648c-4e32-8f5d-3fd90c24f1b7\") " pod="openstack/nova-metadata-0" Jan 26 11:40:08 crc kubenswrapper[4867]: I0126 11:40:08.568265 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5a86e7e7-648c-4e32-8f5d-3fd90c24f1b7-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"5a86e7e7-648c-4e32-8f5d-3fd90c24f1b7\") " pod="openstack/nova-metadata-0" Jan 26 11:40:08 crc kubenswrapper[4867]: I0126 11:40:08.568339 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5a86e7e7-648c-4e32-8f5d-3fd90c24f1b7-logs\") pod \"nova-metadata-0\" (UID: \"5a86e7e7-648c-4e32-8f5d-3fd90c24f1b7\") " pod="openstack/nova-metadata-0" Jan 26 11:40:08 crc kubenswrapper[4867]: I0126 11:40:08.582022 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5e4436b8-df58-4b0b-9713-75976c443930" path="/var/lib/kubelet/pods/5e4436b8-df58-4b0b-9713-75976c443930/volumes" Jan 26 11:40:08 crc kubenswrapper[4867]: I0126 11:40:08.582842 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ab1d2459-a54a-4e06-a4e7-ff675b803bca" path="/var/lib/kubelet/pods/ab1d2459-a54a-4e06-a4e7-ff675b803bca/volumes" Jan 26 11:40:08 crc kubenswrapper[4867]: I0126 11:40:08.669570 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/5a86e7e7-648c-4e32-8f5d-3fd90c24f1b7-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: 
\"5a86e7e7-648c-4e32-8f5d-3fd90c24f1b7\") " pod="openstack/nova-metadata-0" Jan 26 11:40:08 crc kubenswrapper[4867]: I0126 11:40:08.670846 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5a86e7e7-648c-4e32-8f5d-3fd90c24f1b7-config-data\") pod \"nova-metadata-0\" (UID: \"5a86e7e7-648c-4e32-8f5d-3fd90c24f1b7\") " pod="openstack/nova-metadata-0" Jan 26 11:40:08 crc kubenswrapper[4867]: I0126 11:40:08.670885 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zm9kh\" (UniqueName: \"kubernetes.io/projected/5a86e7e7-648c-4e32-8f5d-3fd90c24f1b7-kube-api-access-zm9kh\") pod \"nova-metadata-0\" (UID: \"5a86e7e7-648c-4e32-8f5d-3fd90c24f1b7\") " pod="openstack/nova-metadata-0" Jan 26 11:40:08 crc kubenswrapper[4867]: I0126 11:40:08.670925 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5a86e7e7-648c-4e32-8f5d-3fd90c24f1b7-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"5a86e7e7-648c-4e32-8f5d-3fd90c24f1b7\") " pod="openstack/nova-metadata-0" Jan 26 11:40:08 crc kubenswrapper[4867]: I0126 11:40:08.670948 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5a86e7e7-648c-4e32-8f5d-3fd90c24f1b7-logs\") pod \"nova-metadata-0\" (UID: \"5a86e7e7-648c-4e32-8f5d-3fd90c24f1b7\") " pod="openstack/nova-metadata-0" Jan 26 11:40:08 crc kubenswrapper[4867]: I0126 11:40:08.671355 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5a86e7e7-648c-4e32-8f5d-3fd90c24f1b7-logs\") pod \"nova-metadata-0\" (UID: \"5a86e7e7-648c-4e32-8f5d-3fd90c24f1b7\") " pod="openstack/nova-metadata-0" Jan 26 11:40:08 crc kubenswrapper[4867]: I0126 11:40:08.674172 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5a86e7e7-648c-4e32-8f5d-3fd90c24f1b7-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"5a86e7e7-648c-4e32-8f5d-3fd90c24f1b7\") " pod="openstack/nova-metadata-0" Jan 26 11:40:08 crc kubenswrapper[4867]: I0126 11:40:08.674515 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5a86e7e7-648c-4e32-8f5d-3fd90c24f1b7-config-data\") pod \"nova-metadata-0\" (UID: \"5a86e7e7-648c-4e32-8f5d-3fd90c24f1b7\") " pod="openstack/nova-metadata-0" Jan 26 11:40:08 crc kubenswrapper[4867]: I0126 11:40:08.674807 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/5a86e7e7-648c-4e32-8f5d-3fd90c24f1b7-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"5a86e7e7-648c-4e32-8f5d-3fd90c24f1b7\") " pod="openstack/nova-metadata-0" Jan 26 11:40:08 crc kubenswrapper[4867]: I0126 11:40:08.696790 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zm9kh\" (UniqueName: \"kubernetes.io/projected/5a86e7e7-648c-4e32-8f5d-3fd90c24f1b7-kube-api-access-zm9kh\") pod \"nova-metadata-0\" (UID: \"5a86e7e7-648c-4e32-8f5d-3fd90c24f1b7\") " pod="openstack/nova-metadata-0" Jan 26 11:40:08 crc kubenswrapper[4867]: I0126 11:40:08.767116 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 26 11:40:09 crc kubenswrapper[4867]: I0126 11:40:09.255282 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 26 11:40:09 crc kubenswrapper[4867]: I0126 11:40:09.370395 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"b092f3d9-f7f5-49d0-98d6-f0e7aff2d64a","Type":"ContainerStarted","Data":"debdd630667130125a3a7116e62e1f007595e4b832841b2fafc6c90bb2c749b7"} Jan 26 11:40:09 crc kubenswrapper[4867]: I0126 11:40:09.371962 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"5a86e7e7-648c-4e32-8f5d-3fd90c24f1b7","Type":"ContainerStarted","Data":"12112b1b686fac6a419eaaf632f559011ce231d9b6315ed7fb3d5e3b40f3a179"} Jan 26 11:40:09 crc kubenswrapper[4867]: I0126 11:40:09.387281 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.387259404 podStartE2EDuration="2.387259404s" podCreationTimestamp="2026-01-26 11:40:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 11:40:09.384130288 +0000 UTC m=+1359.082705208" watchObservedRunningTime="2026-01-26 11:40:09.387259404 +0000 UTC m=+1359.085834334" Jan 26 11:40:10 crc kubenswrapper[4867]: I0126 11:40:10.383006 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"5a86e7e7-648c-4e32-8f5d-3fd90c24f1b7","Type":"ContainerStarted","Data":"5428264eb9f41805886ef5c4c2727bd8752a73eb5cf1758530f9d65e53e73291"} Jan 26 11:40:10 crc kubenswrapper[4867]: I0126 11:40:10.383351 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"5a86e7e7-648c-4e32-8f5d-3fd90c24f1b7","Type":"ContainerStarted","Data":"7883bb4b997c8927105ed4cf9132396cd9568a70f9c20896bbeded8a0cf97997"} Jan 26 11:40:10 crc 
kubenswrapper[4867]: I0126 11:40:10.410182 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.410159084 podStartE2EDuration="2.410159084s" podCreationTimestamp="2026-01-26 11:40:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 11:40:10.402791163 +0000 UTC m=+1360.101366073" watchObservedRunningTime="2026-01-26 11:40:10.410159084 +0000 UTC m=+1360.108734004" Jan 26 11:40:12 crc kubenswrapper[4867]: I0126 11:40:12.750804 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Jan 26 11:40:13 crc kubenswrapper[4867]: I0126 11:40:13.768621 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 26 11:40:13 crc kubenswrapper[4867]: I0126 11:40:13.768919 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 26 11:40:16 crc kubenswrapper[4867]: I0126 11:40:16.025827 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 26 11:40:16 crc kubenswrapper[4867]: I0126 11:40:16.026353 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 26 11:40:17 crc kubenswrapper[4867]: I0126 11:40:17.041535 4867 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="738787a7-6f5f-48f1-8c43-ce02e88eb732" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.217.0.213:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 26 11:40:17 crc kubenswrapper[4867]: I0126 11:40:17.041532 4867 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="738787a7-6f5f-48f1-8c43-ce02e88eb732" containerName="nova-api-api" probeResult="failure" output="Get 
\"https://10.217.0.213:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 26 11:40:17 crc kubenswrapper[4867]: I0126 11:40:17.751726 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Jan 26 11:40:17 crc kubenswrapper[4867]: I0126 11:40:17.786205 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Jan 26 11:40:18 crc kubenswrapper[4867]: I0126 11:40:18.382950 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-8l4mk"] Jan 26 11:40:18 crc kubenswrapper[4867]: I0126 11:40:18.385615 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-8l4mk" Jan 26 11:40:18 crc kubenswrapper[4867]: I0126 11:40:18.395794 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-8l4mk"] Jan 26 11:40:18 crc kubenswrapper[4867]: I0126 11:40:18.455128 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8vvww\" (UniqueName: \"kubernetes.io/projected/5dfe4147-4bed-46d7-83b3-71f65dd5e6c9-kube-api-access-8vvww\") pod \"redhat-operators-8l4mk\" (UID: \"5dfe4147-4bed-46d7-83b3-71f65dd5e6c9\") " pod="openshift-marketplace/redhat-operators-8l4mk" Jan 26 11:40:18 crc kubenswrapper[4867]: I0126 11:40:18.455187 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5dfe4147-4bed-46d7-83b3-71f65dd5e6c9-catalog-content\") pod \"redhat-operators-8l4mk\" (UID: \"5dfe4147-4bed-46d7-83b3-71f65dd5e6c9\") " pod="openshift-marketplace/redhat-operators-8l4mk" Jan 26 11:40:18 crc kubenswrapper[4867]: I0126 11:40:18.455303 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/5dfe4147-4bed-46d7-83b3-71f65dd5e6c9-utilities\") pod \"redhat-operators-8l4mk\" (UID: \"5dfe4147-4bed-46d7-83b3-71f65dd5e6c9\") " pod="openshift-marketplace/redhat-operators-8l4mk" Jan 26 11:40:18 crc kubenswrapper[4867]: I0126 11:40:18.493380 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Jan 26 11:40:18 crc kubenswrapper[4867]: I0126 11:40:18.557602 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8vvww\" (UniqueName: \"kubernetes.io/projected/5dfe4147-4bed-46d7-83b3-71f65dd5e6c9-kube-api-access-8vvww\") pod \"redhat-operators-8l4mk\" (UID: \"5dfe4147-4bed-46d7-83b3-71f65dd5e6c9\") " pod="openshift-marketplace/redhat-operators-8l4mk" Jan 26 11:40:18 crc kubenswrapper[4867]: I0126 11:40:18.557690 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5dfe4147-4bed-46d7-83b3-71f65dd5e6c9-catalog-content\") pod \"redhat-operators-8l4mk\" (UID: \"5dfe4147-4bed-46d7-83b3-71f65dd5e6c9\") " pod="openshift-marketplace/redhat-operators-8l4mk" Jan 26 11:40:18 crc kubenswrapper[4867]: I0126 11:40:18.558385 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5dfe4147-4bed-46d7-83b3-71f65dd5e6c9-catalog-content\") pod \"redhat-operators-8l4mk\" (UID: \"5dfe4147-4bed-46d7-83b3-71f65dd5e6c9\") " pod="openshift-marketplace/redhat-operators-8l4mk" Jan 26 11:40:18 crc kubenswrapper[4867]: I0126 11:40:18.558637 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5dfe4147-4bed-46d7-83b3-71f65dd5e6c9-utilities\") pod \"redhat-operators-8l4mk\" (UID: \"5dfe4147-4bed-46d7-83b3-71f65dd5e6c9\") " pod="openshift-marketplace/redhat-operators-8l4mk" Jan 26 11:40:18 crc kubenswrapper[4867]: I0126 11:40:18.558769 
4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5dfe4147-4bed-46d7-83b3-71f65dd5e6c9-utilities\") pod \"redhat-operators-8l4mk\" (UID: \"5dfe4147-4bed-46d7-83b3-71f65dd5e6c9\") " pod="openshift-marketplace/redhat-operators-8l4mk" Jan 26 11:40:18 crc kubenswrapper[4867]: I0126 11:40:18.579891 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8vvww\" (UniqueName: \"kubernetes.io/projected/5dfe4147-4bed-46d7-83b3-71f65dd5e6c9-kube-api-access-8vvww\") pod \"redhat-operators-8l4mk\" (UID: \"5dfe4147-4bed-46d7-83b3-71f65dd5e6c9\") " pod="openshift-marketplace/redhat-operators-8l4mk" Jan 26 11:40:18 crc kubenswrapper[4867]: I0126 11:40:18.716133 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-8l4mk" Jan 26 11:40:18 crc kubenswrapper[4867]: I0126 11:40:18.768619 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Jan 26 11:40:18 crc kubenswrapper[4867]: I0126 11:40:18.768664 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Jan 26 11:40:19 crc kubenswrapper[4867]: I0126 11:40:19.170190 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-8l4mk"] Jan 26 11:40:19 crc kubenswrapper[4867]: I0126 11:40:19.478568 4867 generic.go:334] "Generic (PLEG): container finished" podID="5dfe4147-4bed-46d7-83b3-71f65dd5e6c9" containerID="8f5fff400ff1311d79aa26d4ad427999cda55c9215a75c4d5660670cdf09962b" exitCode=0 Jan 26 11:40:19 crc kubenswrapper[4867]: I0126 11:40:19.478678 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8l4mk" event={"ID":"5dfe4147-4bed-46d7-83b3-71f65dd5e6c9","Type":"ContainerDied","Data":"8f5fff400ff1311d79aa26d4ad427999cda55c9215a75c4d5660670cdf09962b"} Jan 26 11:40:19 crc 
kubenswrapper[4867]: I0126 11:40:19.478748 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8l4mk" event={"ID":"5dfe4147-4bed-46d7-83b3-71f65dd5e6c9","Type":"ContainerStarted","Data":"0eac9ab38a1d9169a9a8a8f6753ec6254b9135dca5c3dd540b22909537b1198e"} Jan 26 11:40:19 crc kubenswrapper[4867]: I0126 11:40:19.783508 4867 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="5a86e7e7-648c-4e32-8f5d-3fd90c24f1b7" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.215:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 26 11:40:19 crc kubenswrapper[4867]: I0126 11:40:19.783550 4867 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="5a86e7e7-648c-4e32-8f5d-3fd90c24f1b7" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.215:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 26 11:40:20 crc kubenswrapper[4867]: I0126 11:40:20.498756 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8l4mk" event={"ID":"5dfe4147-4bed-46d7-83b3-71f65dd5e6c9","Type":"ContainerStarted","Data":"3fa995ef601388e8736a9d4aa20808b116cc7d1e49d81f0ea5e293c686cde4bc"} Jan 26 11:40:21 crc kubenswrapper[4867]: I0126 11:40:21.507597 4867 generic.go:334] "Generic (PLEG): container finished" podID="5dfe4147-4bed-46d7-83b3-71f65dd5e6c9" containerID="3fa995ef601388e8736a9d4aa20808b116cc7d1e49d81f0ea5e293c686cde4bc" exitCode=0 Jan 26 11:40:21 crc kubenswrapper[4867]: I0126 11:40:21.507660 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8l4mk" event={"ID":"5dfe4147-4bed-46d7-83b3-71f65dd5e6c9","Type":"ContainerDied","Data":"3fa995ef601388e8736a9d4aa20808b116cc7d1e49d81f0ea5e293c686cde4bc"} Jan 26 11:40:23 crc kubenswrapper[4867]: I0126 
11:40:23.537599 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8l4mk" event={"ID":"5dfe4147-4bed-46d7-83b3-71f65dd5e6c9","Type":"ContainerStarted","Data":"3c31506024059b21eb8d0fcc86edcf4fb85ca1086da41d7b7e27691b90e98d34"} Jan 26 11:40:24 crc kubenswrapper[4867]: I0126 11:40:24.576378 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-8l4mk" podStartSLOduration=3.74173029 podStartE2EDuration="6.576334264s" podCreationTimestamp="2026-01-26 11:40:18 +0000 UTC" firstStartedPulling="2026-01-26 11:40:19.480162003 +0000 UTC m=+1369.178736913" lastFinishedPulling="2026-01-26 11:40:22.314765987 +0000 UTC m=+1372.013340887" observedRunningTime="2026-01-26 11:40:24.563470943 +0000 UTC m=+1374.262045853" watchObservedRunningTime="2026-01-26 11:40:24.576334264 +0000 UTC m=+1374.274909194" Jan 26 11:40:25 crc kubenswrapper[4867]: I0126 11:40:25.739104 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Jan 26 11:40:26 crc kubenswrapper[4867]: I0126 11:40:26.034212 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Jan 26 11:40:26 crc kubenswrapper[4867]: I0126 11:40:26.035148 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Jan 26 11:40:26 crc kubenswrapper[4867]: I0126 11:40:26.036355 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Jan 26 11:40:26 crc kubenswrapper[4867]: I0126 11:40:26.043206 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Jan 26 11:40:26 crc kubenswrapper[4867]: I0126 11:40:26.581358 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Jan 26 11:40:26 crc kubenswrapper[4867]: I0126 11:40:26.586267 4867 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="ready" pod="openstack/nova-api-0" Jan 26 11:40:28 crc kubenswrapper[4867]: I0126 11:40:28.716867 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-8l4mk" Jan 26 11:40:28 crc kubenswrapper[4867]: I0126 11:40:28.717187 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-8l4mk" Jan 26 11:40:28 crc kubenswrapper[4867]: I0126 11:40:28.774446 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Jan 26 11:40:28 crc kubenswrapper[4867]: I0126 11:40:28.774507 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Jan 26 11:40:28 crc kubenswrapper[4867]: I0126 11:40:28.782054 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Jan 26 11:40:28 crc kubenswrapper[4867]: I0126 11:40:28.788610 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Jan 26 11:40:29 crc kubenswrapper[4867]: I0126 11:40:29.764068 4867 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-8l4mk" podUID="5dfe4147-4bed-46d7-83b3-71f65dd5e6c9" containerName="registry-server" probeResult="failure" output=< Jan 26 11:40:29 crc kubenswrapper[4867]: timeout: failed to connect service ":50051" within 1s Jan 26 11:40:29 crc kubenswrapper[4867]: > Jan 26 11:40:35 crc kubenswrapper[4867]: I0126 11:40:35.094590 4867 scope.go:117] "RemoveContainer" containerID="5104925c464c900f0a61c07dbae86814021cd266d041a72b5ebe48e27cc79358" Jan 26 11:40:35 crc kubenswrapper[4867]: I0126 11:40:35.118056 4867 scope.go:117] "RemoveContainer" containerID="f358ddb26cc3fc434a3670b35bddf4bd9a57bc5316363ebf0db4cb3923092a67" Jan 26 11:40:35 crc kubenswrapper[4867]: I0126 11:40:35.141125 4867 scope.go:117] "RemoveContainer" 
containerID="e508bdf6afaf27b048ea3afe7df0c754a885231a1f39c9c9ac29e52aaec7ca8d" Jan 26 11:40:36 crc kubenswrapper[4867]: I0126 11:40:36.294049 4867 patch_prober.go:28] interesting pod/machine-config-daemon-g6cth container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 11:40:36 crc kubenswrapper[4867]: I0126 11:40:36.294394 4867 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-g6cth" podUID="115cad9f-057f-4e63-b408-8fa7a358a191" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 11:40:36 crc kubenswrapper[4867]: I0126 11:40:36.294437 4867 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-g6cth" Jan 26 11:40:36 crc kubenswrapper[4867]: I0126 11:40:36.295079 4867 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"6bb9fd5acba776380a6fa3e3d00855cfc048bc467ccbd9a88cd7ca74eccbe67f"} pod="openshift-machine-config-operator/machine-config-daemon-g6cth" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 26 11:40:36 crc kubenswrapper[4867]: I0126 11:40:36.295122 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-g6cth" podUID="115cad9f-057f-4e63-b408-8fa7a358a191" containerName="machine-config-daemon" containerID="cri-o://6bb9fd5acba776380a6fa3e3d00855cfc048bc467ccbd9a88cd7ca74eccbe67f" gracePeriod=600 Jan 26 11:40:36 crc kubenswrapper[4867]: I0126 11:40:36.683211 4867 generic.go:334] "Generic (PLEG): container finished" 
podID="115cad9f-057f-4e63-b408-8fa7a358a191" containerID="6bb9fd5acba776380a6fa3e3d00855cfc048bc467ccbd9a88cd7ca74eccbe67f" exitCode=0 Jan 26 11:40:36 crc kubenswrapper[4867]: I0126 11:40:36.683267 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-g6cth" event={"ID":"115cad9f-057f-4e63-b408-8fa7a358a191","Type":"ContainerDied","Data":"6bb9fd5acba776380a6fa3e3d00855cfc048bc467ccbd9a88cd7ca74eccbe67f"} Jan 26 11:40:36 crc kubenswrapper[4867]: I0126 11:40:36.683331 4867 scope.go:117] "RemoveContainer" containerID="510e7b8815f2e10ccb07bd14d3cace2ddac464c7ed9719497ae9e906b65ef061" Jan 26 11:40:37 crc kubenswrapper[4867]: I0126 11:40:37.439162 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 26 11:40:37 crc kubenswrapper[4867]: I0126 11:40:37.709692 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-g6cth" event={"ID":"115cad9f-057f-4e63-b408-8fa7a358a191","Type":"ContainerStarted","Data":"7857dfd24884ee7b3544dfd9117125dc690c467738f6ed4ca3bec8ebae8c755a"} Jan 26 11:40:38 crc kubenswrapper[4867]: I0126 11:40:38.335814 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 26 11:40:38 crc kubenswrapper[4867]: I0126 11:40:38.792192 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-8l4mk" Jan 26 11:40:38 crc kubenswrapper[4867]: I0126 11:40:38.860947 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-8l4mk" Jan 26 11:40:39 crc kubenswrapper[4867]: I0126 11:40:39.034681 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-8l4mk"] Jan 26 11:40:40 crc kubenswrapper[4867]: I0126 11:40:40.756325 4867 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-marketplace/redhat-operators-8l4mk" podUID="5dfe4147-4bed-46d7-83b3-71f65dd5e6c9" containerName="registry-server" containerID="cri-o://3c31506024059b21eb8d0fcc86edcf4fb85ca1086da41d7b7e27691b90e98d34" gracePeriod=2 Jan 26 11:40:42 crc kubenswrapper[4867]: I0126 11:40:41.771441 4867 generic.go:334] "Generic (PLEG): container finished" podID="5dfe4147-4bed-46d7-83b3-71f65dd5e6c9" containerID="3c31506024059b21eb8d0fcc86edcf4fb85ca1086da41d7b7e27691b90e98d34" exitCode=0 Jan 26 11:40:42 crc kubenswrapper[4867]: I0126 11:40:41.771602 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8l4mk" event={"ID":"5dfe4147-4bed-46d7-83b3-71f65dd5e6c9","Type":"ContainerDied","Data":"3c31506024059b21eb8d0fcc86edcf4fb85ca1086da41d7b7e27691b90e98d34"} Jan 26 11:40:42 crc kubenswrapper[4867]: I0126 11:40:41.772115 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8l4mk" event={"ID":"5dfe4147-4bed-46d7-83b3-71f65dd5e6c9","Type":"ContainerDied","Data":"0eac9ab38a1d9169a9a8a8f6753ec6254b9135dca5c3dd540b22909537b1198e"} Jan 26 11:40:42 crc kubenswrapper[4867]: I0126 11:40:41.772136 4867 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0eac9ab38a1d9169a9a8a8f6753ec6254b9135dca5c3dd540b22909537b1198e" Jan 26 11:40:42 crc kubenswrapper[4867]: I0126 11:40:41.813729 4867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-8l4mk" Jan 26 11:40:42 crc kubenswrapper[4867]: I0126 11:40:41.961006 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5dfe4147-4bed-46d7-83b3-71f65dd5e6c9-utilities\") pod \"5dfe4147-4bed-46d7-83b3-71f65dd5e6c9\" (UID: \"5dfe4147-4bed-46d7-83b3-71f65dd5e6c9\") " Jan 26 11:40:42 crc kubenswrapper[4867]: I0126 11:40:41.961189 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5dfe4147-4bed-46d7-83b3-71f65dd5e6c9-catalog-content\") pod \"5dfe4147-4bed-46d7-83b3-71f65dd5e6c9\" (UID: \"5dfe4147-4bed-46d7-83b3-71f65dd5e6c9\") " Jan 26 11:40:42 crc kubenswrapper[4867]: I0126 11:40:41.961241 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8vvww\" (UniqueName: \"kubernetes.io/projected/5dfe4147-4bed-46d7-83b3-71f65dd5e6c9-kube-api-access-8vvww\") pod \"5dfe4147-4bed-46d7-83b3-71f65dd5e6c9\" (UID: \"5dfe4147-4bed-46d7-83b3-71f65dd5e6c9\") " Jan 26 11:40:42 crc kubenswrapper[4867]: I0126 11:40:41.961983 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5dfe4147-4bed-46d7-83b3-71f65dd5e6c9-utilities" (OuterVolumeSpecName: "utilities") pod "5dfe4147-4bed-46d7-83b3-71f65dd5e6c9" (UID: "5dfe4147-4bed-46d7-83b3-71f65dd5e6c9"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 11:40:42 crc kubenswrapper[4867]: I0126 11:40:41.980762 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5dfe4147-4bed-46d7-83b3-71f65dd5e6c9-kube-api-access-8vvww" (OuterVolumeSpecName: "kube-api-access-8vvww") pod "5dfe4147-4bed-46d7-83b3-71f65dd5e6c9" (UID: "5dfe4147-4bed-46d7-83b3-71f65dd5e6c9"). InnerVolumeSpecName "kube-api-access-8vvww". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 11:40:42 crc kubenswrapper[4867]: I0126 11:40:42.071463 4867 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5dfe4147-4bed-46d7-83b3-71f65dd5e6c9-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 11:40:42 crc kubenswrapper[4867]: I0126 11:40:42.071630 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8vvww\" (UniqueName: \"kubernetes.io/projected/5dfe4147-4bed-46d7-83b3-71f65dd5e6c9-kube-api-access-8vvww\") on node \"crc\" DevicePath \"\"" Jan 26 11:40:42 crc kubenswrapper[4867]: I0126 11:40:42.138891 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5dfe4147-4bed-46d7-83b3-71f65dd5e6c9-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5dfe4147-4bed-46d7-83b3-71f65dd5e6c9" (UID: "5dfe4147-4bed-46d7-83b3-71f65dd5e6c9"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 11:40:42 crc kubenswrapper[4867]: I0126 11:40:42.174103 4867 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5dfe4147-4bed-46d7-83b3-71f65dd5e6c9-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 11:40:42 crc kubenswrapper[4867]: I0126 11:40:42.350787 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-server-0" podUID="4d2bfda4-48fc-4d87-94ae-3b53adc90a3a" containerName="rabbitmq" containerID="cri-o://cdb1a2cffa8951dd7dbaede56a4870af51c7e47860e39482d1c2d0cd8e9f4606" gracePeriod=604796 Jan 26 11:40:42 crc kubenswrapper[4867]: I0126 11:40:42.464520 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-cell1-server-0" podUID="2e582495-d650-404c-9a13-d28ea98ecbc5" containerName="rabbitmq" containerID="cri-o://e533084a0585a61f55ac4afe544fd654512dc04984b3f63f54a5b924940e17b3" 
gracePeriod=604796 Jan 26 11:40:42 crc kubenswrapper[4867]: I0126 11:40:42.781084 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-8l4mk" Jan 26 11:40:42 crc kubenswrapper[4867]: I0126 11:40:42.816039 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-8l4mk"] Jan 26 11:40:42 crc kubenswrapper[4867]: I0126 11:40:42.828753 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-8l4mk"] Jan 26 11:40:44 crc kubenswrapper[4867]: I0126 11:40:44.581049 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5dfe4147-4bed-46d7-83b3-71f65dd5e6c9" path="/var/lib/kubelet/pods/5dfe4147-4bed-46d7-83b3-71f65dd5e6c9/volumes" Jan 26 11:40:47 crc kubenswrapper[4867]: I0126 11:40:47.145336 4867 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-0" podUID="4d2bfda4-48fc-4d87-94ae-3b53adc90a3a" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.99:5671: connect: connection refused" Jan 26 11:40:47 crc kubenswrapper[4867]: I0126 11:40:47.519528 4867 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-cell1-server-0" podUID="2e582495-d650-404c-9a13-d28ea98ecbc5" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.100:5671: connect: connection refused" Jan 26 11:40:49 crc kubenswrapper[4867]: I0126 11:40:49.295679 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 26 11:40:49 crc kubenswrapper[4867]: I0126 11:40:49.314880 4867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 26 11:40:49 crc kubenswrapper[4867]: I0126 11:40:49.331593 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/4d2bfda4-48fc-4d87-94ae-3b53adc90a3a-server-conf\") pod \"4d2bfda4-48fc-4d87-94ae-3b53adc90a3a\" (UID: \"4d2bfda4-48fc-4d87-94ae-3b53adc90a3a\") " Jan 26 11:40:49 crc kubenswrapper[4867]: I0126 11:40:49.331651 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/4d2bfda4-48fc-4d87-94ae-3b53adc90a3a-rabbitmq-erlang-cookie\") pod \"4d2bfda4-48fc-4d87-94ae-3b53adc90a3a\" (UID: \"4d2bfda4-48fc-4d87-94ae-3b53adc90a3a\") " Jan 26 11:40:49 crc kubenswrapper[4867]: I0126 11:40:49.331690 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/4d2bfda4-48fc-4d87-94ae-3b53adc90a3a-rabbitmq-plugins\") pod \"4d2bfda4-48fc-4d87-94ae-3b53adc90a3a\" (UID: \"4d2bfda4-48fc-4d87-94ae-3b53adc90a3a\") " Jan 26 11:40:49 crc kubenswrapper[4867]: I0126 11:40:49.331736 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/2e582495-d650-404c-9a13-d28ea98ecbc5-rabbitmq-plugins\") pod \"2e582495-d650-404c-9a13-d28ea98ecbc5\" (UID: \"2e582495-d650-404c-9a13-d28ea98ecbc5\") " Jan 26 11:40:49 crc kubenswrapper[4867]: I0126 11:40:49.331782 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s5m6m\" (UniqueName: \"kubernetes.io/projected/2e582495-d650-404c-9a13-d28ea98ecbc5-kube-api-access-s5m6m\") pod \"2e582495-d650-404c-9a13-d28ea98ecbc5\" (UID: \"2e582495-d650-404c-9a13-d28ea98ecbc5\") " Jan 26 11:40:49 crc kubenswrapper[4867]: I0126 11:40:49.332282 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for 
volume "kubernetes.io/empty-dir/4d2bfda4-48fc-4d87-94ae-3b53adc90a3a-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "4d2bfda4-48fc-4d87-94ae-3b53adc90a3a" (UID: "4d2bfda4-48fc-4d87-94ae-3b53adc90a3a"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 11:40:49 crc kubenswrapper[4867]: I0126 11:40:49.332647 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4d2bfda4-48fc-4d87-94ae-3b53adc90a3a-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "4d2bfda4-48fc-4d87-94ae-3b53adc90a3a" (UID: "4d2bfda4-48fc-4d87-94ae-3b53adc90a3a"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 11:40:49 crc kubenswrapper[4867]: I0126 11:40:49.332661 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/4d2bfda4-48fc-4d87-94ae-3b53adc90a3a-rabbitmq-confd\") pod \"4d2bfda4-48fc-4d87-94ae-3b53adc90a3a\" (UID: \"4d2bfda4-48fc-4d87-94ae-3b53adc90a3a\") " Jan 26 11:40:49 crc kubenswrapper[4867]: I0126 11:40:49.332724 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/4d2bfda4-48fc-4d87-94ae-3b53adc90a3a-erlang-cookie-secret\") pod \"4d2bfda4-48fc-4d87-94ae-3b53adc90a3a\" (UID: \"4d2bfda4-48fc-4d87-94ae-3b53adc90a3a\") " Jan 26 11:40:49 crc kubenswrapper[4867]: I0126 11:40:49.332790 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/4d2bfda4-48fc-4d87-94ae-3b53adc90a3a-plugins-conf\") pod \"4d2bfda4-48fc-4d87-94ae-3b53adc90a3a\" (UID: \"4d2bfda4-48fc-4d87-94ae-3b53adc90a3a\") " Jan 26 11:40:49 crc kubenswrapper[4867]: I0126 11:40:49.332895 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"persistence\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"2e582495-d650-404c-9a13-d28ea98ecbc5\" (UID: \"2e582495-d650-404c-9a13-d28ea98ecbc5\") " Jan 26 11:40:49 crc kubenswrapper[4867]: I0126 11:40:49.332915 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2e582495-d650-404c-9a13-d28ea98ecbc5-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "2e582495-d650-404c-9a13-d28ea98ecbc5" (UID: "2e582495-d650-404c-9a13-d28ea98ecbc5"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 11:40:49 crc kubenswrapper[4867]: I0126 11:40:49.332942 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/4d2bfda4-48fc-4d87-94ae-3b53adc90a3a-pod-info\") pod \"4d2bfda4-48fc-4d87-94ae-3b53adc90a3a\" (UID: \"4d2bfda4-48fc-4d87-94ae-3b53adc90a3a\") " Jan 26 11:40:49 crc kubenswrapper[4867]: I0126 11:40:49.332963 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"4d2bfda4-48fc-4d87-94ae-3b53adc90a3a\" (UID: \"4d2bfda4-48fc-4d87-94ae-3b53adc90a3a\") " Jan 26 11:40:49 crc kubenswrapper[4867]: I0126 11:40:49.333007 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/2e582495-d650-404c-9a13-d28ea98ecbc5-pod-info\") pod \"2e582495-d650-404c-9a13-d28ea98ecbc5\" (UID: \"2e582495-d650-404c-9a13-d28ea98ecbc5\") " Jan 26 11:40:49 crc kubenswrapper[4867]: I0126 11:40:49.333038 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k886p\" (UniqueName: \"kubernetes.io/projected/4d2bfda4-48fc-4d87-94ae-3b53adc90a3a-kube-api-access-k886p\") pod \"4d2bfda4-48fc-4d87-94ae-3b53adc90a3a\" (UID: 
\"4d2bfda4-48fc-4d87-94ae-3b53adc90a3a\") " Jan 26 11:40:49 crc kubenswrapper[4867]: I0126 11:40:49.333069 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/4d2bfda4-48fc-4d87-94ae-3b53adc90a3a-config-data\") pod \"4d2bfda4-48fc-4d87-94ae-3b53adc90a3a\" (UID: \"4d2bfda4-48fc-4d87-94ae-3b53adc90a3a\") " Jan 26 11:40:49 crc kubenswrapper[4867]: I0126 11:40:49.333089 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/2e582495-d650-404c-9a13-d28ea98ecbc5-rabbitmq-tls\") pod \"2e582495-d650-404c-9a13-d28ea98ecbc5\" (UID: \"2e582495-d650-404c-9a13-d28ea98ecbc5\") " Jan 26 11:40:49 crc kubenswrapper[4867]: I0126 11:40:49.333131 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/2e582495-d650-404c-9a13-d28ea98ecbc5-server-conf\") pod \"2e582495-d650-404c-9a13-d28ea98ecbc5\" (UID: \"2e582495-d650-404c-9a13-d28ea98ecbc5\") " Jan 26 11:40:49 crc kubenswrapper[4867]: I0126 11:40:49.333263 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/4d2bfda4-48fc-4d87-94ae-3b53adc90a3a-rabbitmq-tls\") pod \"4d2bfda4-48fc-4d87-94ae-3b53adc90a3a\" (UID: \"4d2bfda4-48fc-4d87-94ae-3b53adc90a3a\") " Jan 26 11:40:49 crc kubenswrapper[4867]: I0126 11:40:49.333308 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/2e582495-d650-404c-9a13-d28ea98ecbc5-rabbitmq-erlang-cookie\") pod \"2e582495-d650-404c-9a13-d28ea98ecbc5\" (UID: \"2e582495-d650-404c-9a13-d28ea98ecbc5\") " Jan 26 11:40:49 crc kubenswrapper[4867]: I0126 11:40:49.333401 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: 
\"kubernetes.io/configmap/2e582495-d650-404c-9a13-d28ea98ecbc5-plugins-conf\") pod \"2e582495-d650-404c-9a13-d28ea98ecbc5\" (UID: \"2e582495-d650-404c-9a13-d28ea98ecbc5\") " Jan 26 11:40:49 crc kubenswrapper[4867]: I0126 11:40:49.333432 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/2e582495-d650-404c-9a13-d28ea98ecbc5-config-data\") pod \"2e582495-d650-404c-9a13-d28ea98ecbc5\" (UID: \"2e582495-d650-404c-9a13-d28ea98ecbc5\") " Jan 26 11:40:49 crc kubenswrapper[4867]: I0126 11:40:49.333462 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/2e582495-d650-404c-9a13-d28ea98ecbc5-rabbitmq-confd\") pod \"2e582495-d650-404c-9a13-d28ea98ecbc5\" (UID: \"2e582495-d650-404c-9a13-d28ea98ecbc5\") " Jan 26 11:40:49 crc kubenswrapper[4867]: I0126 11:40:49.333489 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/2e582495-d650-404c-9a13-d28ea98ecbc5-erlang-cookie-secret\") pod \"2e582495-d650-404c-9a13-d28ea98ecbc5\" (UID: \"2e582495-d650-404c-9a13-d28ea98ecbc5\") " Jan 26 11:40:49 crc kubenswrapper[4867]: I0126 11:40:49.334617 4867 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/4d2bfda4-48fc-4d87-94ae-3b53adc90a3a-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Jan 26 11:40:49 crc kubenswrapper[4867]: I0126 11:40:49.334641 4867 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/4d2bfda4-48fc-4d87-94ae-3b53adc90a3a-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Jan 26 11:40:49 crc kubenswrapper[4867]: I0126 11:40:49.334652 4867 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: 
\"kubernetes.io/empty-dir/2e582495-d650-404c-9a13-d28ea98ecbc5-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Jan 26 11:40:49 crc kubenswrapper[4867]: I0126 11:40:49.346786 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/2e582495-d650-404c-9a13-d28ea98ecbc5-pod-info" (OuterVolumeSpecName: "pod-info") pod "2e582495-d650-404c-9a13-d28ea98ecbc5" (UID: "2e582495-d650-404c-9a13-d28ea98ecbc5"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Jan 26 11:40:49 crc kubenswrapper[4867]: I0126 11:40:49.347423 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4d2bfda4-48fc-4d87-94ae-3b53adc90a3a-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "4d2bfda4-48fc-4d87-94ae-3b53adc90a3a" (UID: "4d2bfda4-48fc-4d87-94ae-3b53adc90a3a"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 11:40:49 crc kubenswrapper[4867]: I0126 11:40:49.349063 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2e582495-d650-404c-9a13-d28ea98ecbc5-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "2e582495-d650-404c-9a13-d28ea98ecbc5" (UID: "2e582495-d650-404c-9a13-d28ea98ecbc5"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 11:40:49 crc kubenswrapper[4867]: I0126 11:40:49.349779 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2e582495-d650-404c-9a13-d28ea98ecbc5-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "2e582495-d650-404c-9a13-d28ea98ecbc5" (UID: "2e582495-d650-404c-9a13-d28ea98ecbc5"). InnerVolumeSpecName "rabbitmq-erlang-cookie". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 11:40:49 crc kubenswrapper[4867]: I0126 11:40:49.352728 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2e582495-d650-404c-9a13-d28ea98ecbc5-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "2e582495-d650-404c-9a13-d28ea98ecbc5" (UID: "2e582495-d650-404c-9a13-d28ea98ecbc5"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 11:40:49 crc kubenswrapper[4867]: I0126 11:40:49.352890 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage02-crc" (OuterVolumeSpecName: "persistence") pod "2e582495-d650-404c-9a13-d28ea98ecbc5" (UID: "2e582495-d650-404c-9a13-d28ea98ecbc5"). InnerVolumeSpecName "local-storage02-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 26 11:40:49 crc kubenswrapper[4867]: I0126 11:40:49.359304 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage10-crc" (OuterVolumeSpecName: "persistence") pod "4d2bfda4-48fc-4d87-94ae-3b53adc90a3a" (UID: "4d2bfda4-48fc-4d87-94ae-3b53adc90a3a"). InnerVolumeSpecName "local-storage10-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 26 11:40:49 crc kubenswrapper[4867]: I0126 11:40:49.366640 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4d2bfda4-48fc-4d87-94ae-3b53adc90a3a-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "4d2bfda4-48fc-4d87-94ae-3b53adc90a3a" (UID: "4d2bfda4-48fc-4d87-94ae-3b53adc90a3a"). InnerVolumeSpecName "erlang-cookie-secret". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 11:40:49 crc kubenswrapper[4867]: I0126 11:40:49.370656 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2e582495-d650-404c-9a13-d28ea98ecbc5-kube-api-access-s5m6m" (OuterVolumeSpecName: "kube-api-access-s5m6m") pod "2e582495-d650-404c-9a13-d28ea98ecbc5" (UID: "2e582495-d650-404c-9a13-d28ea98ecbc5"). InnerVolumeSpecName "kube-api-access-s5m6m". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 11:40:49 crc kubenswrapper[4867]: I0126 11:40:49.375453 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/4d2bfda4-48fc-4d87-94ae-3b53adc90a3a-pod-info" (OuterVolumeSpecName: "pod-info") pod "4d2bfda4-48fc-4d87-94ae-3b53adc90a3a" (UID: "4d2bfda4-48fc-4d87-94ae-3b53adc90a3a"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Jan 26 11:40:49 crc kubenswrapper[4867]: I0126 11:40:49.377875 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4d2bfda4-48fc-4d87-94ae-3b53adc90a3a-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "4d2bfda4-48fc-4d87-94ae-3b53adc90a3a" (UID: "4d2bfda4-48fc-4d87-94ae-3b53adc90a3a"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 11:40:49 crc kubenswrapper[4867]: I0126 11:40:49.417298 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2e582495-d650-404c-9a13-d28ea98ecbc5-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "2e582495-d650-404c-9a13-d28ea98ecbc5" (UID: "2e582495-d650-404c-9a13-d28ea98ecbc5"). InnerVolumeSpecName "rabbitmq-tls". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 11:40:49 crc kubenswrapper[4867]: I0126 11:40:49.418372 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4d2bfda4-48fc-4d87-94ae-3b53adc90a3a-kube-api-access-k886p" (OuterVolumeSpecName: "kube-api-access-k886p") pod "4d2bfda4-48fc-4d87-94ae-3b53adc90a3a" (UID: "4d2bfda4-48fc-4d87-94ae-3b53adc90a3a"). InnerVolumeSpecName "kube-api-access-k886p". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 11:40:49 crc kubenswrapper[4867]: I0126 11:40:49.437403 4867 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/2e582495-d650-404c-9a13-d28ea98ecbc5-plugins-conf\") on node \"crc\" DevicePath \"\"" Jan 26 11:40:49 crc kubenswrapper[4867]: I0126 11:40:49.437440 4867 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/2e582495-d650-404c-9a13-d28ea98ecbc5-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Jan 26 11:40:49 crc kubenswrapper[4867]: I0126 11:40:49.437455 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s5m6m\" (UniqueName: \"kubernetes.io/projected/2e582495-d650-404c-9a13-d28ea98ecbc5-kube-api-access-s5m6m\") on node \"crc\" DevicePath \"\"" Jan 26 11:40:49 crc kubenswrapper[4867]: I0126 11:40:49.437465 4867 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/4d2bfda4-48fc-4d87-94ae-3b53adc90a3a-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Jan 26 11:40:49 crc kubenswrapper[4867]: I0126 11:40:49.437478 4867 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/4d2bfda4-48fc-4d87-94ae-3b53adc90a3a-plugins-conf\") on node \"crc\" DevicePath \"\"" Jan 26 11:40:49 crc kubenswrapper[4867]: I0126 11:40:49.437515 4867 reconciler_common.go:286] 
"operationExecutor.UnmountDevice started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") on node \"crc\" " Jan 26 11:40:49 crc kubenswrapper[4867]: I0126 11:40:49.437525 4867 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/4d2bfda4-48fc-4d87-94ae-3b53adc90a3a-pod-info\") on node \"crc\" DevicePath \"\"" Jan 26 11:40:49 crc kubenswrapper[4867]: I0126 11:40:49.437539 4867 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") on node \"crc\" " Jan 26 11:40:49 crc kubenswrapper[4867]: I0126 11:40:49.437548 4867 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/2e582495-d650-404c-9a13-d28ea98ecbc5-pod-info\") on node \"crc\" DevicePath \"\"" Jan 26 11:40:49 crc kubenswrapper[4867]: I0126 11:40:49.437557 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k886p\" (UniqueName: \"kubernetes.io/projected/4d2bfda4-48fc-4d87-94ae-3b53adc90a3a-kube-api-access-k886p\") on node \"crc\" DevicePath \"\"" Jan 26 11:40:49 crc kubenswrapper[4867]: I0126 11:40:49.437565 4867 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/2e582495-d650-404c-9a13-d28ea98ecbc5-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Jan 26 11:40:49 crc kubenswrapper[4867]: I0126 11:40:49.437573 4867 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/4d2bfda4-48fc-4d87-94ae-3b53adc90a3a-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Jan 26 11:40:49 crc kubenswrapper[4867]: I0126 11:40:49.437582 4867 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/2e582495-d650-404c-9a13-d28ea98ecbc5-rabbitmq-erlang-cookie\") on node 
\"crc\" DevicePath \"\"" Jan 26 11:40:49 crc kubenswrapper[4867]: I0126 11:40:49.453435 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4d2bfda4-48fc-4d87-94ae-3b53adc90a3a-config-data" (OuterVolumeSpecName: "config-data") pod "4d2bfda4-48fc-4d87-94ae-3b53adc90a3a" (UID: "4d2bfda4-48fc-4d87-94ae-3b53adc90a3a"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 11:40:49 crc kubenswrapper[4867]: I0126 11:40:49.458391 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2e582495-d650-404c-9a13-d28ea98ecbc5-config-data" (OuterVolumeSpecName: "config-data") pod "2e582495-d650-404c-9a13-d28ea98ecbc5" (UID: "2e582495-d650-404c-9a13-d28ea98ecbc5"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 11:40:49 crc kubenswrapper[4867]: I0126 11:40:49.473633 4867 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage02-crc" (UniqueName: "kubernetes.io/local-volume/local-storage02-crc") on node "crc" Jan 26 11:40:49 crc kubenswrapper[4867]: I0126 11:40:49.498824 4867 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage10-crc" (UniqueName: "kubernetes.io/local-volume/local-storage10-crc") on node "crc" Jan 26 11:40:49 crc kubenswrapper[4867]: I0126 11:40:49.514628 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2e582495-d650-404c-9a13-d28ea98ecbc5-server-conf" (OuterVolumeSpecName: "server-conf") pod "2e582495-d650-404c-9a13-d28ea98ecbc5" (UID: "2e582495-d650-404c-9a13-d28ea98ecbc5"). InnerVolumeSpecName "server-conf". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 11:40:49 crc kubenswrapper[4867]: I0126 11:40:49.515790 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4d2bfda4-48fc-4d87-94ae-3b53adc90a3a-server-conf" (OuterVolumeSpecName: "server-conf") pod "4d2bfda4-48fc-4d87-94ae-3b53adc90a3a" (UID: "4d2bfda4-48fc-4d87-94ae-3b53adc90a3a"). InnerVolumeSpecName "server-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 11:40:49 crc kubenswrapper[4867]: I0126 11:40:49.539125 4867 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/2e582495-d650-404c-9a13-d28ea98ecbc5-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 11:40:49 crc kubenswrapper[4867]: I0126 11:40:49.539175 4867 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/4d2bfda4-48fc-4d87-94ae-3b53adc90a3a-server-conf\") on node \"crc\" DevicePath \"\"" Jan 26 11:40:49 crc kubenswrapper[4867]: I0126 11:40:49.539188 4867 reconciler_common.go:293] "Volume detached for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") on node \"crc\" DevicePath \"\"" Jan 26 11:40:49 crc kubenswrapper[4867]: I0126 11:40:49.539198 4867 reconciler_common.go:293] "Volume detached for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") on node \"crc\" DevicePath \"\"" Jan 26 11:40:49 crc kubenswrapper[4867]: I0126 11:40:49.539255 4867 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/4d2bfda4-48fc-4d87-94ae-3b53adc90a3a-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 11:40:49 crc kubenswrapper[4867]: I0126 11:40:49.539269 4867 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/2e582495-d650-404c-9a13-d28ea98ecbc5-server-conf\") on node 
\"crc\" DevicePath \"\"" Jan 26 11:40:49 crc kubenswrapper[4867]: I0126 11:40:49.556717 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2e582495-d650-404c-9a13-d28ea98ecbc5-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "2e582495-d650-404c-9a13-d28ea98ecbc5" (UID: "2e582495-d650-404c-9a13-d28ea98ecbc5"). InnerVolumeSpecName "rabbitmq-confd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 11:40:49 crc kubenswrapper[4867]: I0126 11:40:49.606897 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4d2bfda4-48fc-4d87-94ae-3b53adc90a3a-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "4d2bfda4-48fc-4d87-94ae-3b53adc90a3a" (UID: "4d2bfda4-48fc-4d87-94ae-3b53adc90a3a"). InnerVolumeSpecName "rabbitmq-confd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 11:40:49 crc kubenswrapper[4867]: I0126 11:40:49.641211 4867 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/2e582495-d650-404c-9a13-d28ea98ecbc5-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Jan 26 11:40:49 crc kubenswrapper[4867]: I0126 11:40:49.641281 4867 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/4d2bfda4-48fc-4d87-94ae-3b53adc90a3a-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Jan 26 11:40:49 crc kubenswrapper[4867]: I0126 11:40:49.849713 4867 generic.go:334] "Generic (PLEG): container finished" podID="4d2bfda4-48fc-4d87-94ae-3b53adc90a3a" containerID="cdb1a2cffa8951dd7dbaede56a4870af51c7e47860e39482d1c2d0cd8e9f4606" exitCode=0 Jan 26 11:40:49 crc kubenswrapper[4867]: I0126 11:40:49.850006 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" 
event={"ID":"4d2bfda4-48fc-4d87-94ae-3b53adc90a3a","Type":"ContainerDied","Data":"cdb1a2cffa8951dd7dbaede56a4870af51c7e47860e39482d1c2d0cd8e9f4606"} Jan 26 11:40:49 crc kubenswrapper[4867]: I0126 11:40:49.850050 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"4d2bfda4-48fc-4d87-94ae-3b53adc90a3a","Type":"ContainerDied","Data":"832e3b538427b998adfa059ea07c5b40cef09c3004e17851f91573c4f9289936"} Jan 26 11:40:49 crc kubenswrapper[4867]: I0126 11:40:49.850072 4867 scope.go:117] "RemoveContainer" containerID="cdb1a2cffa8951dd7dbaede56a4870af51c7e47860e39482d1c2d0cd8e9f4606" Jan 26 11:40:49 crc kubenswrapper[4867]: I0126 11:40:49.850347 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 26 11:40:49 crc kubenswrapper[4867]: I0126 11:40:49.854776 4867 generic.go:334] "Generic (PLEG): container finished" podID="2e582495-d650-404c-9a13-d28ea98ecbc5" containerID="e533084a0585a61f55ac4afe544fd654512dc04984b3f63f54a5b924940e17b3" exitCode=0 Jan 26 11:40:49 crc kubenswrapper[4867]: I0126 11:40:49.854837 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"2e582495-d650-404c-9a13-d28ea98ecbc5","Type":"ContainerDied","Data":"e533084a0585a61f55ac4afe544fd654512dc04984b3f63f54a5b924940e17b3"} Jan 26 11:40:49 crc kubenswrapper[4867]: I0126 11:40:49.854868 4867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 26 11:40:49 crc kubenswrapper[4867]: I0126 11:40:49.854879 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"2e582495-d650-404c-9a13-d28ea98ecbc5","Type":"ContainerDied","Data":"ea9b63542120a636abe1b7d6d1b0befd7465eee31a9c5478d8bfb8cc991bba19"} Jan 26 11:40:49 crc kubenswrapper[4867]: I0126 11:40:49.904611 4867 scope.go:117] "RemoveContainer" containerID="b2cdafe3e00677646dd69530266947366f273aac1a046750a5e001a7513bbeda" Jan 26 11:40:49 crc kubenswrapper[4867]: I0126 11:40:49.931947 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 26 11:40:49 crc kubenswrapper[4867]: I0126 11:40:49.951020 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 26 11:40:49 crc kubenswrapper[4867]: I0126 11:40:49.964653 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Jan 26 11:40:49 crc kubenswrapper[4867]: E0126 11:40:49.965179 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5dfe4147-4bed-46d7-83b3-71f65dd5e6c9" containerName="registry-server" Jan 26 11:40:49 crc kubenswrapper[4867]: I0126 11:40:49.965205 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="5dfe4147-4bed-46d7-83b3-71f65dd5e6c9" containerName="registry-server" Jan 26 11:40:49 crc kubenswrapper[4867]: E0126 11:40:49.965237 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5dfe4147-4bed-46d7-83b3-71f65dd5e6c9" containerName="extract-utilities" Jan 26 11:40:49 crc kubenswrapper[4867]: I0126 11:40:49.965246 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="5dfe4147-4bed-46d7-83b3-71f65dd5e6c9" containerName="extract-utilities" Jan 26 11:40:49 crc kubenswrapper[4867]: E0126 11:40:49.965258 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5dfe4147-4bed-46d7-83b3-71f65dd5e6c9" 
containerName="extract-content" Jan 26 11:40:49 crc kubenswrapper[4867]: I0126 11:40:49.965265 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="5dfe4147-4bed-46d7-83b3-71f65dd5e6c9" containerName="extract-content" Jan 26 11:40:49 crc kubenswrapper[4867]: E0126 11:40:49.965280 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2e582495-d650-404c-9a13-d28ea98ecbc5" containerName="rabbitmq" Jan 26 11:40:49 crc kubenswrapper[4867]: I0126 11:40:49.980114 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="2e582495-d650-404c-9a13-d28ea98ecbc5" containerName="rabbitmq" Jan 26 11:40:49 crc kubenswrapper[4867]: E0126 11:40:49.980173 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4d2bfda4-48fc-4d87-94ae-3b53adc90a3a" containerName="setup-container" Jan 26 11:40:49 crc kubenswrapper[4867]: I0126 11:40:49.980186 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="4d2bfda4-48fc-4d87-94ae-3b53adc90a3a" containerName="setup-container" Jan 26 11:40:49 crc kubenswrapper[4867]: E0126 11:40:49.980247 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4d2bfda4-48fc-4d87-94ae-3b53adc90a3a" containerName="rabbitmq" Jan 26 11:40:49 crc kubenswrapper[4867]: I0126 11:40:49.980255 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="4d2bfda4-48fc-4d87-94ae-3b53adc90a3a" containerName="rabbitmq" Jan 26 11:40:49 crc kubenswrapper[4867]: E0126 11:40:49.980276 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2e582495-d650-404c-9a13-d28ea98ecbc5" containerName="setup-container" Jan 26 11:40:49 crc kubenswrapper[4867]: I0126 11:40:49.980283 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="2e582495-d650-404c-9a13-d28ea98ecbc5" containerName="setup-container" Jan 26 11:40:49 crc kubenswrapper[4867]: I0126 11:40:49.980677 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="5dfe4147-4bed-46d7-83b3-71f65dd5e6c9" containerName="registry-server" Jan 26 
11:40:49 crc kubenswrapper[4867]: I0126 11:40:49.980718 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="4d2bfda4-48fc-4d87-94ae-3b53adc90a3a" containerName="rabbitmq" Jan 26 11:40:49 crc kubenswrapper[4867]: I0126 11:40:49.980734 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="2e582495-d650-404c-9a13-d28ea98ecbc5" containerName="rabbitmq" Jan 26 11:40:49 crc kubenswrapper[4867]: I0126 11:40:49.981963 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 26 11:40:49 crc kubenswrapper[4867]: I0126 11:40:49.984984 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc" Jan 26 11:40:49 crc kubenswrapper[4867]: I0126 11:40:49.985066 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie" Jan 26 11:40:49 crc kubenswrapper[4867]: I0126 11:40:49.985160 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data" Jan 26 11:40:49 crc kubenswrapper[4867]: I0126 11:40:49.985362 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf" Jan 26 11:40:49 crc kubenswrapper[4867]: I0126 11:40:49.985499 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf" Jan 26 11:40:49 crc kubenswrapper[4867]: I0126 11:40:49.985623 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user" Jan 26 11:40:49 crc kubenswrapper[4867]: I0126 11:40:49.987404 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-9lq9k" Jan 26 11:40:50 crc kubenswrapper[4867]: I0126 11:40:50.010660 4867 scope.go:117] "RemoveContainer" containerID="cdb1a2cffa8951dd7dbaede56a4870af51c7e47860e39482d1c2d0cd8e9f4606" Jan 26 11:40:50 crc kubenswrapper[4867]: I0126 11:40:50.011078 4867 kubelet.go:2428] 
"SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 26 11:40:50 crc kubenswrapper[4867]: E0126 11:40:50.011481 4867 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cdb1a2cffa8951dd7dbaede56a4870af51c7e47860e39482d1c2d0cd8e9f4606\": container with ID starting with cdb1a2cffa8951dd7dbaede56a4870af51c7e47860e39482d1c2d0cd8e9f4606 not found: ID does not exist" containerID="cdb1a2cffa8951dd7dbaede56a4870af51c7e47860e39482d1c2d0cd8e9f4606" Jan 26 11:40:50 crc kubenswrapper[4867]: I0126 11:40:50.011508 4867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cdb1a2cffa8951dd7dbaede56a4870af51c7e47860e39482d1c2d0cd8e9f4606"} err="failed to get container status \"cdb1a2cffa8951dd7dbaede56a4870af51c7e47860e39482d1c2d0cd8e9f4606\": rpc error: code = NotFound desc = could not find container \"cdb1a2cffa8951dd7dbaede56a4870af51c7e47860e39482d1c2d0cd8e9f4606\": container with ID starting with cdb1a2cffa8951dd7dbaede56a4870af51c7e47860e39482d1c2d0cd8e9f4606 not found: ID does not exist" Jan 26 11:40:50 crc kubenswrapper[4867]: I0126 11:40:50.011529 4867 scope.go:117] "RemoveContainer" containerID="b2cdafe3e00677646dd69530266947366f273aac1a046750a5e001a7513bbeda" Jan 26 11:40:50 crc kubenswrapper[4867]: E0126 11:40:50.011710 4867 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b2cdafe3e00677646dd69530266947366f273aac1a046750a5e001a7513bbeda\": container with ID starting with b2cdafe3e00677646dd69530266947366f273aac1a046750a5e001a7513bbeda not found: ID does not exist" containerID="b2cdafe3e00677646dd69530266947366f273aac1a046750a5e001a7513bbeda" Jan 26 11:40:50 crc kubenswrapper[4867]: I0126 11:40:50.011731 4867 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"b2cdafe3e00677646dd69530266947366f273aac1a046750a5e001a7513bbeda"} err="failed to get container status \"b2cdafe3e00677646dd69530266947366f273aac1a046750a5e001a7513bbeda\": rpc error: code = NotFound desc = could not find container \"b2cdafe3e00677646dd69530266947366f273aac1a046750a5e001a7513bbeda\": container with ID starting with b2cdafe3e00677646dd69530266947366f273aac1a046750a5e001a7513bbeda not found: ID does not exist" Jan 26 11:40:50 crc kubenswrapper[4867]: I0126 11:40:50.011745 4867 scope.go:117] "RemoveContainer" containerID="e533084a0585a61f55ac4afe544fd654512dc04984b3f63f54a5b924940e17b3" Jan 26 11:40:50 crc kubenswrapper[4867]: I0126 11:40:50.047943 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 26 11:40:50 crc kubenswrapper[4867]: I0126 11:40:50.074354 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 26 11:40:50 crc kubenswrapper[4867]: I0126 11:40:50.102932 4867 scope.go:117] "RemoveContainer" containerID="226d763f25fed2ac285088e56181e339e08e1c391bcef7d09f830c76c2110df5" Jan 26 11:40:50 crc kubenswrapper[4867]: I0126 11:40:50.142102 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 26 11:40:50 crc kubenswrapper[4867]: I0126 11:40:50.142196 4867 scope.go:117] "RemoveContainer" containerID="e533084a0585a61f55ac4afe544fd654512dc04984b3f63f54a5b924940e17b3" Jan 26 11:40:50 crc kubenswrapper[4867]: I0126 11:40:50.143945 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 26 11:40:50 crc kubenswrapper[4867]: I0126 11:40:50.147942 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Jan 26 11:40:50 crc kubenswrapper[4867]: I0126 11:40:50.148405 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc" Jan 26 11:40:50 crc kubenswrapper[4867]: I0126 11:40:50.148734 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Jan 26 11:40:50 crc kubenswrapper[4867]: E0126 11:40:50.149076 4867 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e533084a0585a61f55ac4afe544fd654512dc04984b3f63f54a5b924940e17b3\": container with ID starting with e533084a0585a61f55ac4afe544fd654512dc04984b3f63f54a5b924940e17b3 not found: ID does not exist" containerID="e533084a0585a61f55ac4afe544fd654512dc04984b3f63f54a5b924940e17b3" Jan 26 11:40:50 crc kubenswrapper[4867]: I0126 11:40:50.149131 4867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e533084a0585a61f55ac4afe544fd654512dc04984b3f63f54a5b924940e17b3"} err="failed to get container status \"e533084a0585a61f55ac4afe544fd654512dc04984b3f63f54a5b924940e17b3\": rpc error: code = NotFound desc = could not find container \"e533084a0585a61f55ac4afe544fd654512dc04984b3f63f54a5b924940e17b3\": container with ID starting with e533084a0585a61f55ac4afe544fd654512dc04984b3f63f54a5b924940e17b3 not found: ID does not exist" Jan 26 11:40:50 crc kubenswrapper[4867]: I0126 11:40:50.149164 4867 scope.go:117] "RemoveContainer" containerID="226d763f25fed2ac285088e56181e339e08e1c391bcef7d09f830c76c2110df5" Jan 26 11:40:50 crc kubenswrapper[4867]: E0126 11:40:50.149638 4867 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find 
container \"226d763f25fed2ac285088e56181e339e08e1c391bcef7d09f830c76c2110df5\": container with ID starting with 226d763f25fed2ac285088e56181e339e08e1c391bcef7d09f830c76c2110df5 not found: ID does not exist" containerID="226d763f25fed2ac285088e56181e339e08e1c391bcef7d09f830c76c2110df5" Jan 26 11:40:50 crc kubenswrapper[4867]: I0126 11:40:50.149662 4867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"226d763f25fed2ac285088e56181e339e08e1c391bcef7d09f830c76c2110df5"} err="failed to get container status \"226d763f25fed2ac285088e56181e339e08e1c391bcef7d09f830c76c2110df5\": rpc error: code = NotFound desc = could not find container \"226d763f25fed2ac285088e56181e339e08e1c391bcef7d09f830c76c2110df5\": container with ID starting with 226d763f25fed2ac285088e56181e339e08e1c391bcef7d09f830c76c2110df5 not found: ID does not exist" Jan 26 11:40:50 crc kubenswrapper[4867]: I0126 11:40:50.150769 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data" Jan 26 11:40:50 crc kubenswrapper[4867]: I0126 11:40:50.151675 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-wbvp9" Jan 26 11:40:50 crc kubenswrapper[4867]: I0126 11:40:50.151922 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Jan 26 11:40:50 crc kubenswrapper[4867]: I0126 11:40:50.152044 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Jan 26 11:40:50 crc kubenswrapper[4867]: I0126 11:40:50.163306 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/d0d380ac-2d87-4632-a7e3-d201296043f4-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"d0d380ac-2d87-4632-a7e3-d201296043f4\") " pod="openstack/rabbitmq-server-0" Jan 26 11:40:50 crc 
kubenswrapper[4867]: I0126 11:40:50.163513 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/d0d380ac-2d87-4632-a7e3-d201296043f4-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"d0d380ac-2d87-4632-a7e3-d201296043f4\") " pod="openstack/rabbitmq-server-0" Jan 26 11:40:50 crc kubenswrapper[4867]: I0126 11:40:50.163673 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/d0d380ac-2d87-4632-a7e3-d201296043f4-config-data\") pod \"rabbitmq-server-0\" (UID: \"d0d380ac-2d87-4632-a7e3-d201296043f4\") " pod="openstack/rabbitmq-server-0" Jan 26 11:40:50 crc kubenswrapper[4867]: I0126 11:40:50.163763 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/d0d380ac-2d87-4632-a7e3-d201296043f4-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"d0d380ac-2d87-4632-a7e3-d201296043f4\") " pod="openstack/rabbitmq-server-0" Jan 26 11:40:50 crc kubenswrapper[4867]: I0126 11:40:50.163868 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/d0d380ac-2d87-4632-a7e3-d201296043f4-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"d0d380ac-2d87-4632-a7e3-d201296043f4\") " pod="openstack/rabbitmq-server-0" Jan 26 11:40:50 crc kubenswrapper[4867]: I0126 11:40:50.164120 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6fts7\" (UniqueName: \"kubernetes.io/projected/d0d380ac-2d87-4632-a7e3-d201296043f4-kube-api-access-6fts7\") pod \"rabbitmq-server-0\" (UID: \"d0d380ac-2d87-4632-a7e3-d201296043f4\") " pod="openstack/rabbitmq-server-0" Jan 26 11:40:50 crc kubenswrapper[4867]: I0126 11:40:50.164269 4867 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"rabbitmq-server-0\" (UID: \"d0d380ac-2d87-4632-a7e3-d201296043f4\") " pod="openstack/rabbitmq-server-0" Jan 26 11:40:50 crc kubenswrapper[4867]: I0126 11:40:50.164356 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/d0d380ac-2d87-4632-a7e3-d201296043f4-server-conf\") pod \"rabbitmq-server-0\" (UID: \"d0d380ac-2d87-4632-a7e3-d201296043f4\") " pod="openstack/rabbitmq-server-0" Jan 26 11:40:50 crc kubenswrapper[4867]: I0126 11:40:50.164446 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/d0d380ac-2d87-4632-a7e3-d201296043f4-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"d0d380ac-2d87-4632-a7e3-d201296043f4\") " pod="openstack/rabbitmq-server-0" Jan 26 11:40:50 crc kubenswrapper[4867]: I0126 11:40:50.164699 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/d0d380ac-2d87-4632-a7e3-d201296043f4-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"d0d380ac-2d87-4632-a7e3-d201296043f4\") " pod="openstack/rabbitmq-server-0" Jan 26 11:40:50 crc kubenswrapper[4867]: I0126 11:40:50.164847 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/d0d380ac-2d87-4632-a7e3-d201296043f4-pod-info\") pod \"rabbitmq-server-0\" (UID: \"d0d380ac-2d87-4632-a7e3-d201296043f4\") " pod="openstack/rabbitmq-server-0" Jan 26 11:40:50 crc kubenswrapper[4867]: I0126 11:40:50.167392 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 
26 11:40:50 crc kubenswrapper[4867]: I0126 11:40:50.266404 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/abd304f6-b024-40c9-86cb-94c9e9620ec0-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"abd304f6-b024-40c9-86cb-94c9e9620ec0\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 11:40:50 crc kubenswrapper[4867]: I0126 11:40:50.266465 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9fl6h\" (UniqueName: \"kubernetes.io/projected/abd304f6-b024-40c9-86cb-94c9e9620ec0-kube-api-access-9fl6h\") pod \"rabbitmq-cell1-server-0\" (UID: \"abd304f6-b024-40c9-86cb-94c9e9620ec0\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 11:40:50 crc kubenswrapper[4867]: I0126 11:40:50.266507 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/abd304f6-b024-40c9-86cb-94c9e9620ec0-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"abd304f6-b024-40c9-86cb-94c9e9620ec0\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 11:40:50 crc kubenswrapper[4867]: I0126 11:40:50.266595 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/d0d380ac-2d87-4632-a7e3-d201296043f4-config-data\") pod \"rabbitmq-server-0\" (UID: \"d0d380ac-2d87-4632-a7e3-d201296043f4\") " pod="openstack/rabbitmq-server-0" Jan 26 11:40:50 crc kubenswrapper[4867]: I0126 11:40:50.266647 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/abd304f6-b024-40c9-86cb-94c9e9620ec0-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"abd304f6-b024-40c9-86cb-94c9e9620ec0\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 11:40:50 crc 
kubenswrapper[4867]: I0126 11:40:50.266700 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/d0d380ac-2d87-4632-a7e3-d201296043f4-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"d0d380ac-2d87-4632-a7e3-d201296043f4\") " pod="openstack/rabbitmq-server-0" Jan 26 11:40:50 crc kubenswrapper[4867]: I0126 11:40:50.266791 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/d0d380ac-2d87-4632-a7e3-d201296043f4-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"d0d380ac-2d87-4632-a7e3-d201296043f4\") " pod="openstack/rabbitmq-server-0" Jan 26 11:40:50 crc kubenswrapper[4867]: I0126 11:40:50.266844 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/abd304f6-b024-40c9-86cb-94c9e9620ec0-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"abd304f6-b024-40c9-86cb-94c9e9620ec0\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 11:40:50 crc kubenswrapper[4867]: I0126 11:40:50.266867 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6fts7\" (UniqueName: \"kubernetes.io/projected/d0d380ac-2d87-4632-a7e3-d201296043f4-kube-api-access-6fts7\") pod \"rabbitmq-server-0\" (UID: \"d0d380ac-2d87-4632-a7e3-d201296043f4\") " pod="openstack/rabbitmq-server-0" Jan 26 11:40:50 crc kubenswrapper[4867]: I0126 11:40:50.266918 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"abd304f6-b024-40c9-86cb-94c9e9620ec0\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 11:40:50 crc kubenswrapper[4867]: I0126 11:40:50.266960 4867 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/abd304f6-b024-40c9-86cb-94c9e9620ec0-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"abd304f6-b024-40c9-86cb-94c9e9620ec0\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 11:40:50 crc kubenswrapper[4867]: I0126 11:40:50.266997 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"rabbitmq-server-0\" (UID: \"d0d380ac-2d87-4632-a7e3-d201296043f4\") " pod="openstack/rabbitmq-server-0" Jan 26 11:40:50 crc kubenswrapper[4867]: I0126 11:40:50.267020 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/d0d380ac-2d87-4632-a7e3-d201296043f4-server-conf\") pod \"rabbitmq-server-0\" (UID: \"d0d380ac-2d87-4632-a7e3-d201296043f4\") " pod="openstack/rabbitmq-server-0" Jan 26 11:40:50 crc kubenswrapper[4867]: I0126 11:40:50.267062 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/d0d380ac-2d87-4632-a7e3-d201296043f4-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"d0d380ac-2d87-4632-a7e3-d201296043f4\") " pod="openstack/rabbitmq-server-0" Jan 26 11:40:50 crc kubenswrapper[4867]: I0126 11:40:50.267110 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/abd304f6-b024-40c9-86cb-94c9e9620ec0-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"abd304f6-b024-40c9-86cb-94c9e9620ec0\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 11:40:50 crc kubenswrapper[4867]: I0126 11:40:50.267133 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: 
\"kubernetes.io/empty-dir/d0d380ac-2d87-4632-a7e3-d201296043f4-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"d0d380ac-2d87-4632-a7e3-d201296043f4\") " pod="openstack/rabbitmq-server-0" Jan 26 11:40:50 crc kubenswrapper[4867]: I0126 11:40:50.267172 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/abd304f6-b024-40c9-86cb-94c9e9620ec0-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"abd304f6-b024-40c9-86cb-94c9e9620ec0\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 11:40:50 crc kubenswrapper[4867]: I0126 11:40:50.267207 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/d0d380ac-2d87-4632-a7e3-d201296043f4-pod-info\") pod \"rabbitmq-server-0\" (UID: \"d0d380ac-2d87-4632-a7e3-d201296043f4\") " pod="openstack/rabbitmq-server-0" Jan 26 11:40:50 crc kubenswrapper[4867]: I0126 11:40:50.267301 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/abd304f6-b024-40c9-86cb-94c9e9620ec0-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"abd304f6-b024-40c9-86cb-94c9e9620ec0\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 11:40:50 crc kubenswrapper[4867]: I0126 11:40:50.267340 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/d0d380ac-2d87-4632-a7e3-d201296043f4-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"d0d380ac-2d87-4632-a7e3-d201296043f4\") " pod="openstack/rabbitmq-server-0" Jan 26 11:40:50 crc kubenswrapper[4867]: I0126 11:40:50.267365 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/abd304f6-b024-40c9-86cb-94c9e9620ec0-config-data\") pod 
\"rabbitmq-cell1-server-0\" (UID: \"abd304f6-b024-40c9-86cb-94c9e9620ec0\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 11:40:50 crc kubenswrapper[4867]: I0126 11:40:50.267399 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/d0d380ac-2d87-4632-a7e3-d201296043f4-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"d0d380ac-2d87-4632-a7e3-d201296043f4\") " pod="openstack/rabbitmq-server-0" Jan 26 11:40:50 crc kubenswrapper[4867]: I0126 11:40:50.267626 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/d0d380ac-2d87-4632-a7e3-d201296043f4-config-data\") pod \"rabbitmq-server-0\" (UID: \"d0d380ac-2d87-4632-a7e3-d201296043f4\") " pod="openstack/rabbitmq-server-0" Jan 26 11:40:50 crc kubenswrapper[4867]: I0126 11:40:50.268780 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/d0d380ac-2d87-4632-a7e3-d201296043f4-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"d0d380ac-2d87-4632-a7e3-d201296043f4\") " pod="openstack/rabbitmq-server-0" Jan 26 11:40:50 crc kubenswrapper[4867]: I0126 11:40:50.268836 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/d0d380ac-2d87-4632-a7e3-d201296043f4-server-conf\") pod \"rabbitmq-server-0\" (UID: \"d0d380ac-2d87-4632-a7e3-d201296043f4\") " pod="openstack/rabbitmq-server-0" Jan 26 11:40:50 crc kubenswrapper[4867]: I0126 11:40:50.269091 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/d0d380ac-2d87-4632-a7e3-d201296043f4-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"d0d380ac-2d87-4632-a7e3-d201296043f4\") " pod="openstack/rabbitmq-server-0" Jan 26 11:40:50 crc kubenswrapper[4867]: I0126 11:40:50.269189 4867 
operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"rabbitmq-server-0\" (UID: \"d0d380ac-2d87-4632-a7e3-d201296043f4\") device mount path \"/mnt/openstack/pv10\"" pod="openstack/rabbitmq-server-0" Jan 26 11:40:50 crc kubenswrapper[4867]: I0126 11:40:50.269312 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/d0d380ac-2d87-4632-a7e3-d201296043f4-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"d0d380ac-2d87-4632-a7e3-d201296043f4\") " pod="openstack/rabbitmq-server-0" Jan 26 11:40:50 crc kubenswrapper[4867]: I0126 11:40:50.274162 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/d0d380ac-2d87-4632-a7e3-d201296043f4-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"d0d380ac-2d87-4632-a7e3-d201296043f4\") " pod="openstack/rabbitmq-server-0" Jan 26 11:40:50 crc kubenswrapper[4867]: I0126 11:40:50.274503 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/d0d380ac-2d87-4632-a7e3-d201296043f4-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"d0d380ac-2d87-4632-a7e3-d201296043f4\") " pod="openstack/rabbitmq-server-0" Jan 26 11:40:50 crc kubenswrapper[4867]: I0126 11:40:50.275127 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/d0d380ac-2d87-4632-a7e3-d201296043f4-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"d0d380ac-2d87-4632-a7e3-d201296043f4\") " pod="openstack/rabbitmq-server-0" Jan 26 11:40:50 crc kubenswrapper[4867]: I0126 11:40:50.281476 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/d0d380ac-2d87-4632-a7e3-d201296043f4-pod-info\") 
pod \"rabbitmq-server-0\" (UID: \"d0d380ac-2d87-4632-a7e3-d201296043f4\") " pod="openstack/rabbitmq-server-0" Jan 26 11:40:50 crc kubenswrapper[4867]: I0126 11:40:50.299876 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6fts7\" (UniqueName: \"kubernetes.io/projected/d0d380ac-2d87-4632-a7e3-d201296043f4-kube-api-access-6fts7\") pod \"rabbitmq-server-0\" (UID: \"d0d380ac-2d87-4632-a7e3-d201296043f4\") " pod="openstack/rabbitmq-server-0" Jan 26 11:40:50 crc kubenswrapper[4867]: I0126 11:40:50.303058 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"rabbitmq-server-0\" (UID: \"d0d380ac-2d87-4632-a7e3-d201296043f4\") " pod="openstack/rabbitmq-server-0" Jan 26 11:40:50 crc kubenswrapper[4867]: I0126 11:40:50.369469 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/abd304f6-b024-40c9-86cb-94c9e9620ec0-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"abd304f6-b024-40c9-86cb-94c9e9620ec0\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 11:40:50 crc kubenswrapper[4867]: I0126 11:40:50.369584 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/abd304f6-b024-40c9-86cb-94c9e9620ec0-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"abd304f6-b024-40c9-86cb-94c9e9620ec0\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 11:40:50 crc kubenswrapper[4867]: I0126 11:40:50.369630 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/abd304f6-b024-40c9-86cb-94c9e9620ec0-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"abd304f6-b024-40c9-86cb-94c9e9620ec0\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 11:40:50 crc kubenswrapper[4867]: 
I0126 11:40:50.369677 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/abd304f6-b024-40c9-86cb-94c9e9620ec0-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"abd304f6-b024-40c9-86cb-94c9e9620ec0\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 11:40:50 crc kubenswrapper[4867]: I0126 11:40:50.369740 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/abd304f6-b024-40c9-86cb-94c9e9620ec0-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"abd304f6-b024-40c9-86cb-94c9e9620ec0\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 11:40:50 crc kubenswrapper[4867]: I0126 11:40:50.369789 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9fl6h\" (UniqueName: \"kubernetes.io/projected/abd304f6-b024-40c9-86cb-94c9e9620ec0-kube-api-access-9fl6h\") pod \"rabbitmq-cell1-server-0\" (UID: \"abd304f6-b024-40c9-86cb-94c9e9620ec0\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 11:40:50 crc kubenswrapper[4867]: I0126 11:40:50.369842 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/abd304f6-b024-40c9-86cb-94c9e9620ec0-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"abd304f6-b024-40c9-86cb-94c9e9620ec0\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 11:40:50 crc kubenswrapper[4867]: I0126 11:40:50.369887 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/abd304f6-b024-40c9-86cb-94c9e9620ec0-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"abd304f6-b024-40c9-86cb-94c9e9620ec0\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 11:40:50 crc kubenswrapper[4867]: I0126 11:40:50.369972 4867 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/abd304f6-b024-40c9-86cb-94c9e9620ec0-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"abd304f6-b024-40c9-86cb-94c9e9620ec0\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 11:40:50 crc kubenswrapper[4867]: I0126 11:40:50.370024 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"abd304f6-b024-40c9-86cb-94c9e9620ec0\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 11:40:50 crc kubenswrapper[4867]: I0126 11:40:50.370056 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/abd304f6-b024-40c9-86cb-94c9e9620ec0-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"abd304f6-b024-40c9-86cb-94c9e9620ec0\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 11:40:50 crc kubenswrapper[4867]: I0126 11:40:50.370898 4867 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"abd304f6-b024-40c9-86cb-94c9e9620ec0\") device mount path \"/mnt/openstack/pv02\"" pod="openstack/rabbitmq-cell1-server-0" Jan 26 11:40:50 crc kubenswrapper[4867]: I0126 11:40:50.371648 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/abd304f6-b024-40c9-86cb-94c9e9620ec0-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"abd304f6-b024-40c9-86cb-94c9e9620ec0\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 11:40:50 crc kubenswrapper[4867]: I0126 11:40:50.371915 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/abd304f6-b024-40c9-86cb-94c9e9620ec0-server-conf\") pod 
\"rabbitmq-cell1-server-0\" (UID: \"abd304f6-b024-40c9-86cb-94c9e9620ec0\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 11:40:50 crc kubenswrapper[4867]: I0126 11:40:50.372630 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/abd304f6-b024-40c9-86cb-94c9e9620ec0-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"abd304f6-b024-40c9-86cb-94c9e9620ec0\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 11:40:50 crc kubenswrapper[4867]: I0126 11:40:50.372855 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/abd304f6-b024-40c9-86cb-94c9e9620ec0-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"abd304f6-b024-40c9-86cb-94c9e9620ec0\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 11:40:50 crc kubenswrapper[4867]: I0126 11:40:50.373720 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/abd304f6-b024-40c9-86cb-94c9e9620ec0-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"abd304f6-b024-40c9-86cb-94c9e9620ec0\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 11:40:50 crc kubenswrapper[4867]: I0126 11:40:50.373832 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/abd304f6-b024-40c9-86cb-94c9e9620ec0-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"abd304f6-b024-40c9-86cb-94c9e9620ec0\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 11:40:50 crc kubenswrapper[4867]: I0126 11:40:50.373976 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/abd304f6-b024-40c9-86cb-94c9e9620ec0-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"abd304f6-b024-40c9-86cb-94c9e9620ec0\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 
11:40:50 crc kubenswrapper[4867]: I0126 11:40:50.374779 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/abd304f6-b024-40c9-86cb-94c9e9620ec0-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"abd304f6-b024-40c9-86cb-94c9e9620ec0\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 11:40:50 crc kubenswrapper[4867]: I0126 11:40:50.379661 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/abd304f6-b024-40c9-86cb-94c9e9620ec0-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"abd304f6-b024-40c9-86cb-94c9e9620ec0\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 11:40:50 crc kubenswrapper[4867]: I0126 11:40:50.392875 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9fl6h\" (UniqueName: \"kubernetes.io/projected/abd304f6-b024-40c9-86cb-94c9e9620ec0-kube-api-access-9fl6h\") pod \"rabbitmq-cell1-server-0\" (UID: \"abd304f6-b024-40c9-86cb-94c9e9620ec0\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 11:40:50 crc kubenswrapper[4867]: I0126 11:40:50.409144 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 26 11:40:50 crc kubenswrapper[4867]: I0126 11:40:50.420951 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"abd304f6-b024-40c9-86cb-94c9e9620ec0\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 11:40:50 crc kubenswrapper[4867]: I0126 11:40:50.494000 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 26 11:40:50 crc kubenswrapper[4867]: I0126 11:40:50.607691 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2e582495-d650-404c-9a13-d28ea98ecbc5" path="/var/lib/kubelet/pods/2e582495-d650-404c-9a13-d28ea98ecbc5/volumes" Jan 26 11:40:50 crc kubenswrapper[4867]: I0126 11:40:50.610419 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4d2bfda4-48fc-4d87-94ae-3b53adc90a3a" path="/var/lib/kubelet/pods/4d2bfda4-48fc-4d87-94ae-3b53adc90a3a/volumes" Jan 26 11:40:51 crc kubenswrapper[4867]: W0126 11:40:51.002900 4867 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podabd304f6_b024_40c9_86cb_94c9e9620ec0.slice/crio-4e3cf88d1ffebba4ff36ec3445299bf7575087560fd601ed43400445301bd718 WatchSource:0}: Error finding container 4e3cf88d1ffebba4ff36ec3445299bf7575087560fd601ed43400445301bd718: Status 404 returned error can't find the container with id 4e3cf88d1ffebba4ff36ec3445299bf7575087560fd601ed43400445301bd718 Jan 26 11:40:51 crc kubenswrapper[4867]: I0126 11:40:51.003036 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 26 11:40:51 crc kubenswrapper[4867]: W0126 11:40:51.010422 4867 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd0d380ac_2d87_4632_a7e3_d201296043f4.slice/crio-bb49252192f3df7a67379ef0b6c5ced22cdff5370c66f6b2688aeabd418c7c67 WatchSource:0}: Error finding container bb49252192f3df7a67379ef0b6c5ced22cdff5370c66f6b2688aeabd418c7c67: Status 404 returned error can't find the container with id bb49252192f3df7a67379ef0b6c5ced22cdff5370c66f6b2688aeabd418c7c67 Jan 26 11:40:51 crc kubenswrapper[4867]: I0126 11:40:51.013692 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 26 11:40:51 crc kubenswrapper[4867]: 
I0126 11:40:51.883472 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"d0d380ac-2d87-4632-a7e3-d201296043f4","Type":"ContainerStarted","Data":"bb49252192f3df7a67379ef0b6c5ced22cdff5370c66f6b2688aeabd418c7c67"} Jan 26 11:40:51 crc kubenswrapper[4867]: I0126 11:40:51.885878 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"abd304f6-b024-40c9-86cb-94c9e9620ec0","Type":"ContainerStarted","Data":"4e3cf88d1ffebba4ff36ec3445299bf7575087560fd601ed43400445301bd718"} Jan 26 11:40:53 crc kubenswrapper[4867]: I0126 11:40:53.903600 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"abd304f6-b024-40c9-86cb-94c9e9620ec0","Type":"ContainerStarted","Data":"38f845ba7eb68206c77aca155de01fb334e71cb81a1110d5c8748c8caa6d8043"} Jan 26 11:40:53 crc kubenswrapper[4867]: I0126 11:40:53.906828 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"d0d380ac-2d87-4632-a7e3-d201296043f4","Type":"ContainerStarted","Data":"06ef28a570a093d7ad42fe3737669836ebc6e3dd9e594f9161634930e36723ca"} Jan 26 11:41:25 crc kubenswrapper[4867]: I0126 11:41:25.213365 4867 generic.go:334] "Generic (PLEG): container finished" podID="d0d380ac-2d87-4632-a7e3-d201296043f4" containerID="06ef28a570a093d7ad42fe3737669836ebc6e3dd9e594f9161634930e36723ca" exitCode=0 Jan 26 11:41:25 crc kubenswrapper[4867]: I0126 11:41:25.213471 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"d0d380ac-2d87-4632-a7e3-d201296043f4","Type":"ContainerDied","Data":"06ef28a570a093d7ad42fe3737669836ebc6e3dd9e594f9161634930e36723ca"} Jan 26 11:41:26 crc kubenswrapper[4867]: I0126 11:41:26.231440 4867 generic.go:334] "Generic (PLEG): container finished" podID="abd304f6-b024-40c9-86cb-94c9e9620ec0" containerID="38f845ba7eb68206c77aca155de01fb334e71cb81a1110d5c8748c8caa6d8043" exitCode=0 
Jan 26 11:41:26 crc kubenswrapper[4867]: I0126 11:41:26.231524 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"abd304f6-b024-40c9-86cb-94c9e9620ec0","Type":"ContainerDied","Data":"38f845ba7eb68206c77aca155de01fb334e71cb81a1110d5c8748c8caa6d8043"} Jan 26 11:41:26 crc kubenswrapper[4867]: I0126 11:41:26.234077 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"d0d380ac-2d87-4632-a7e3-d201296043f4","Type":"ContainerStarted","Data":"9f5f586d42ef131b9a15c171d39d459972b120b04f2290a20668be63c40d9be7"} Jan 26 11:41:26 crc kubenswrapper[4867]: I0126 11:41:26.234345 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Jan 26 11:41:26 crc kubenswrapper[4867]: I0126 11:41:26.297058 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=37.297037133 podStartE2EDuration="37.297037133s" podCreationTimestamp="2026-01-26 11:40:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 11:41:26.291610296 +0000 UTC m=+1435.990185226" watchObservedRunningTime="2026-01-26 11:41:26.297037133 +0000 UTC m=+1435.995612043" Jan 26 11:41:27 crc kubenswrapper[4867]: I0126 11:41:27.245999 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"abd304f6-b024-40c9-86cb-94c9e9620ec0","Type":"ContainerStarted","Data":"ae37e18913f6fa690a93d2e1733c77364f6f0fe7bb9a6467974879dbd594b410"} Jan 26 11:41:27 crc kubenswrapper[4867]: I0126 11:41:27.246573 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Jan 26 11:41:27 crc kubenswrapper[4867]: I0126 11:41:27.281439 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" 
podStartSLOduration=37.281418337 podStartE2EDuration="37.281418337s" podCreationTimestamp="2026-01-26 11:40:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 11:41:27.27893718 +0000 UTC m=+1436.977512090" watchObservedRunningTime="2026-01-26 11:41:27.281418337 +0000 UTC m=+1436.979993247" Jan 26 11:41:35 crc kubenswrapper[4867]: I0126 11:41:35.361040 4867 scope.go:117] "RemoveContainer" containerID="0312b2ee959eaeda7064c6618860ab72b58713089d8fbcd2480d377a4878c5f5" Jan 26 11:41:35 crc kubenswrapper[4867]: I0126 11:41:35.381577 4867 scope.go:117] "RemoveContainer" containerID="bee0e58e7762c264210bc440513c9fb59e5720253688e63c3676620a5247488f" Jan 26 11:41:35 crc kubenswrapper[4867]: I0126 11:41:35.428781 4867 scope.go:117] "RemoveContainer" containerID="00dc51d8974591c2fd4c381997a0123e34091de59dc8c82f4f251fa76faf00cc" Jan 26 11:41:40 crc kubenswrapper[4867]: I0126 11:41:40.411424 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Jan 26 11:41:40 crc kubenswrapper[4867]: I0126 11:41:40.499436 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Jan 26 11:42:35 crc kubenswrapper[4867]: I0126 11:42:35.546842 4867 scope.go:117] "RemoveContainer" containerID="86e2e54b98e4fa4dd7606192db0e6276fe28d138b45eb93d066c11dec8040c34" Jan 26 11:42:35 crc kubenswrapper[4867]: I0126 11:42:35.588616 4867 scope.go:117] "RemoveContainer" containerID="a9c354102fc6d6247e89bfbae0426a7614397d890a101cbc42fa3d0240e344b0" Jan 26 11:42:35 crc kubenswrapper[4867]: I0126 11:42:35.630641 4867 scope.go:117] "RemoveContainer" containerID="7cf8a07d48202d0972c6df4a8e95b1455695a9693610e03b02c2887a1bd7b381" Jan 26 11:42:35 crc kubenswrapper[4867]: I0126 11:42:35.681741 4867 scope.go:117] "RemoveContainer" containerID="b4d3a8a212993515d3f2e795ce5bc67aab092824e8fb8d7763afdd105bfa535d" Jan 26 
11:42:36 crc kubenswrapper[4867]: I0126 11:42:36.293505 4867 patch_prober.go:28] interesting pod/machine-config-daemon-g6cth container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 11:42:36 crc kubenswrapper[4867]: I0126 11:42:36.293794 4867 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-g6cth" podUID="115cad9f-057f-4e63-b408-8fa7a358a191" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 11:43:06 crc kubenswrapper[4867]: I0126 11:43:06.294353 4867 patch_prober.go:28] interesting pod/machine-config-daemon-g6cth container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 11:43:06 crc kubenswrapper[4867]: I0126 11:43:06.295291 4867 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-g6cth" podUID="115cad9f-057f-4e63-b408-8fa7a358a191" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 11:43:10 crc kubenswrapper[4867]: I0126 11:43:10.745898 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-7k7h4"] Jan 26 11:43:10 crc kubenswrapper[4867]: I0126 11:43:10.749778 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-7k7h4" Jan 26 11:43:10 crc kubenswrapper[4867]: I0126 11:43:10.760661 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-7k7h4"] Jan 26 11:43:10 crc kubenswrapper[4867]: I0126 11:43:10.924577 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5816cd7b-5ff2-4e13-9408-df26ba85e13e-catalog-content\") pod \"community-operators-7k7h4\" (UID: \"5816cd7b-5ff2-4e13-9408-df26ba85e13e\") " pod="openshift-marketplace/community-operators-7k7h4" Jan 26 11:43:10 crc kubenswrapper[4867]: I0126 11:43:10.924833 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9j4kr\" (UniqueName: \"kubernetes.io/projected/5816cd7b-5ff2-4e13-9408-df26ba85e13e-kube-api-access-9j4kr\") pod \"community-operators-7k7h4\" (UID: \"5816cd7b-5ff2-4e13-9408-df26ba85e13e\") " pod="openshift-marketplace/community-operators-7k7h4" Jan 26 11:43:10 crc kubenswrapper[4867]: I0126 11:43:10.924957 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5816cd7b-5ff2-4e13-9408-df26ba85e13e-utilities\") pod \"community-operators-7k7h4\" (UID: \"5816cd7b-5ff2-4e13-9408-df26ba85e13e\") " pod="openshift-marketplace/community-operators-7k7h4" Jan 26 11:43:11 crc kubenswrapper[4867]: I0126 11:43:11.027113 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5816cd7b-5ff2-4e13-9408-df26ba85e13e-utilities\") pod \"community-operators-7k7h4\" (UID: \"5816cd7b-5ff2-4e13-9408-df26ba85e13e\") " pod="openshift-marketplace/community-operators-7k7h4" Jan 26 11:43:11 crc kubenswrapper[4867]: I0126 11:43:11.027288 4867 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5816cd7b-5ff2-4e13-9408-df26ba85e13e-catalog-content\") pod \"community-operators-7k7h4\" (UID: \"5816cd7b-5ff2-4e13-9408-df26ba85e13e\") " pod="openshift-marketplace/community-operators-7k7h4" Jan 26 11:43:11 crc kubenswrapper[4867]: I0126 11:43:11.027329 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9j4kr\" (UniqueName: \"kubernetes.io/projected/5816cd7b-5ff2-4e13-9408-df26ba85e13e-kube-api-access-9j4kr\") pod \"community-operators-7k7h4\" (UID: \"5816cd7b-5ff2-4e13-9408-df26ba85e13e\") " pod="openshift-marketplace/community-operators-7k7h4" Jan 26 11:43:11 crc kubenswrapper[4867]: I0126 11:43:11.027805 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5816cd7b-5ff2-4e13-9408-df26ba85e13e-catalog-content\") pod \"community-operators-7k7h4\" (UID: \"5816cd7b-5ff2-4e13-9408-df26ba85e13e\") " pod="openshift-marketplace/community-operators-7k7h4" Jan 26 11:43:11 crc kubenswrapper[4867]: I0126 11:43:11.027805 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5816cd7b-5ff2-4e13-9408-df26ba85e13e-utilities\") pod \"community-operators-7k7h4\" (UID: \"5816cd7b-5ff2-4e13-9408-df26ba85e13e\") " pod="openshift-marketplace/community-operators-7k7h4" Jan 26 11:43:11 crc kubenswrapper[4867]: I0126 11:43:11.046540 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9j4kr\" (UniqueName: \"kubernetes.io/projected/5816cd7b-5ff2-4e13-9408-df26ba85e13e-kube-api-access-9j4kr\") pod \"community-operators-7k7h4\" (UID: \"5816cd7b-5ff2-4e13-9408-df26ba85e13e\") " pod="openshift-marketplace/community-operators-7k7h4" Jan 26 11:43:11 crc kubenswrapper[4867]: I0126 11:43:11.073802 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-7k7h4" Jan 26 11:43:11 crc kubenswrapper[4867]: I0126 11:43:11.590453 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-7k7h4"] Jan 26 11:43:12 crc kubenswrapper[4867]: I0126 11:43:12.305241 4867 generic.go:334] "Generic (PLEG): container finished" podID="5816cd7b-5ff2-4e13-9408-df26ba85e13e" containerID="5461558edc3f2843bf7d43e1650111c5b6184b9c54dd9bf50dd1353d3232b22b" exitCode=0 Jan 26 11:43:12 crc kubenswrapper[4867]: I0126 11:43:12.305331 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7k7h4" event={"ID":"5816cd7b-5ff2-4e13-9408-df26ba85e13e","Type":"ContainerDied","Data":"5461558edc3f2843bf7d43e1650111c5b6184b9c54dd9bf50dd1353d3232b22b"} Jan 26 11:43:12 crc kubenswrapper[4867]: I0126 11:43:12.306434 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7k7h4" event={"ID":"5816cd7b-5ff2-4e13-9408-df26ba85e13e","Type":"ContainerStarted","Data":"883a3f420a2161aad6d27ae61ae0c101076d86c13201769415613c1707c519f5"} Jan 26 11:43:14 crc kubenswrapper[4867]: I0126 11:43:14.323812 4867 generic.go:334] "Generic (PLEG): container finished" podID="5816cd7b-5ff2-4e13-9408-df26ba85e13e" containerID="f98ccef2e713bd60219383d66cfe10c25777bcc019a5443bc3bc0ae7e016dd9a" exitCode=0 Jan 26 11:43:14 crc kubenswrapper[4867]: I0126 11:43:14.324142 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7k7h4" event={"ID":"5816cd7b-5ff2-4e13-9408-df26ba85e13e","Type":"ContainerDied","Data":"f98ccef2e713bd60219383d66cfe10c25777bcc019a5443bc3bc0ae7e016dd9a"} Jan 26 11:43:16 crc kubenswrapper[4867]: I0126 11:43:16.344819 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7k7h4" 
event={"ID":"5816cd7b-5ff2-4e13-9408-df26ba85e13e","Type":"ContainerStarted","Data":"524b729b614b13eb6fffa7c003a25f38fef5ae3909d5352d8a359d30570aec0d"} Jan 26 11:43:16 crc kubenswrapper[4867]: I0126 11:43:16.378412 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-7k7h4" podStartSLOduration=2.76771611 podStartE2EDuration="6.378389334s" podCreationTimestamp="2026-01-26 11:43:10 +0000 UTC" firstStartedPulling="2026-01-26 11:43:12.307262005 +0000 UTC m=+1542.005836925" lastFinishedPulling="2026-01-26 11:43:15.917935239 +0000 UTC m=+1545.616510149" observedRunningTime="2026-01-26 11:43:16.365886303 +0000 UTC m=+1546.064461223" watchObservedRunningTime="2026-01-26 11:43:16.378389334 +0000 UTC m=+1546.076964244" Jan 26 11:43:21 crc kubenswrapper[4867]: I0126 11:43:21.074142 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-7k7h4" Jan 26 11:43:21 crc kubenswrapper[4867]: I0126 11:43:21.074818 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-7k7h4" Jan 26 11:43:21 crc kubenswrapper[4867]: I0126 11:43:21.128698 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-7k7h4" Jan 26 11:43:21 crc kubenswrapper[4867]: I0126 11:43:21.437240 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-7k7h4" Jan 26 11:43:21 crc kubenswrapper[4867]: I0126 11:43:21.529102 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-7k7h4"] Jan 26 11:43:23 crc kubenswrapper[4867]: I0126 11:43:23.408864 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-7k7h4" podUID="5816cd7b-5ff2-4e13-9408-df26ba85e13e" containerName="registry-server" 
containerID="cri-o://524b729b614b13eb6fffa7c003a25f38fef5ae3909d5352d8a359d30570aec0d" gracePeriod=2 Jan 26 11:43:23 crc kubenswrapper[4867]: I0126 11:43:23.986761 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-7k7h4" Jan 26 11:43:24 crc kubenswrapper[4867]: I0126 11:43:24.083202 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5816cd7b-5ff2-4e13-9408-df26ba85e13e-catalog-content\") pod \"5816cd7b-5ff2-4e13-9408-df26ba85e13e\" (UID: \"5816cd7b-5ff2-4e13-9408-df26ba85e13e\") " Jan 26 11:43:24 crc kubenswrapper[4867]: I0126 11:43:24.083342 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9j4kr\" (UniqueName: \"kubernetes.io/projected/5816cd7b-5ff2-4e13-9408-df26ba85e13e-kube-api-access-9j4kr\") pod \"5816cd7b-5ff2-4e13-9408-df26ba85e13e\" (UID: \"5816cd7b-5ff2-4e13-9408-df26ba85e13e\") " Jan 26 11:43:24 crc kubenswrapper[4867]: I0126 11:43:24.083512 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5816cd7b-5ff2-4e13-9408-df26ba85e13e-utilities\") pod \"5816cd7b-5ff2-4e13-9408-df26ba85e13e\" (UID: \"5816cd7b-5ff2-4e13-9408-df26ba85e13e\") " Jan 26 11:43:24 crc kubenswrapper[4867]: I0126 11:43:24.084343 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5816cd7b-5ff2-4e13-9408-df26ba85e13e-utilities" (OuterVolumeSpecName: "utilities") pod "5816cd7b-5ff2-4e13-9408-df26ba85e13e" (UID: "5816cd7b-5ff2-4e13-9408-df26ba85e13e"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 11:43:24 crc kubenswrapper[4867]: I0126 11:43:24.092398 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5816cd7b-5ff2-4e13-9408-df26ba85e13e-kube-api-access-9j4kr" (OuterVolumeSpecName: "kube-api-access-9j4kr") pod "5816cd7b-5ff2-4e13-9408-df26ba85e13e" (UID: "5816cd7b-5ff2-4e13-9408-df26ba85e13e"). InnerVolumeSpecName "kube-api-access-9j4kr". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 11:43:24 crc kubenswrapper[4867]: I0126 11:43:24.142504 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5816cd7b-5ff2-4e13-9408-df26ba85e13e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5816cd7b-5ff2-4e13-9408-df26ba85e13e" (UID: "5816cd7b-5ff2-4e13-9408-df26ba85e13e"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 11:43:24 crc kubenswrapper[4867]: I0126 11:43:24.185322 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9j4kr\" (UniqueName: \"kubernetes.io/projected/5816cd7b-5ff2-4e13-9408-df26ba85e13e-kube-api-access-9j4kr\") on node \"crc\" DevicePath \"\"" Jan 26 11:43:24 crc kubenswrapper[4867]: I0126 11:43:24.185571 4867 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5816cd7b-5ff2-4e13-9408-df26ba85e13e-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 11:43:24 crc kubenswrapper[4867]: I0126 11:43:24.185581 4867 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5816cd7b-5ff2-4e13-9408-df26ba85e13e-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 11:43:24 crc kubenswrapper[4867]: I0126 11:43:24.422078 4867 generic.go:334] "Generic (PLEG): container finished" podID="5816cd7b-5ff2-4e13-9408-df26ba85e13e" 
containerID="524b729b614b13eb6fffa7c003a25f38fef5ae3909d5352d8a359d30570aec0d" exitCode=0 Jan 26 11:43:24 crc kubenswrapper[4867]: I0126 11:43:24.422128 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7k7h4" event={"ID":"5816cd7b-5ff2-4e13-9408-df26ba85e13e","Type":"ContainerDied","Data":"524b729b614b13eb6fffa7c003a25f38fef5ae3909d5352d8a359d30570aec0d"} Jan 26 11:43:24 crc kubenswrapper[4867]: I0126 11:43:24.422139 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-7k7h4" Jan 26 11:43:24 crc kubenswrapper[4867]: I0126 11:43:24.422157 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7k7h4" event={"ID":"5816cd7b-5ff2-4e13-9408-df26ba85e13e","Type":"ContainerDied","Data":"883a3f420a2161aad6d27ae61ae0c101076d86c13201769415613c1707c519f5"} Jan 26 11:43:24 crc kubenswrapper[4867]: I0126 11:43:24.422182 4867 scope.go:117] "RemoveContainer" containerID="524b729b614b13eb6fffa7c003a25f38fef5ae3909d5352d8a359d30570aec0d" Jan 26 11:43:24 crc kubenswrapper[4867]: I0126 11:43:24.466809 4867 scope.go:117] "RemoveContainer" containerID="f98ccef2e713bd60219383d66cfe10c25777bcc019a5443bc3bc0ae7e016dd9a" Jan 26 11:43:24 crc kubenswrapper[4867]: I0126 11:43:24.471516 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-7k7h4"] Jan 26 11:43:24 crc kubenswrapper[4867]: I0126 11:43:24.483622 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-7k7h4"] Jan 26 11:43:24 crc kubenswrapper[4867]: I0126 11:43:24.498630 4867 scope.go:117] "RemoveContainer" containerID="5461558edc3f2843bf7d43e1650111c5b6184b9c54dd9bf50dd1353d3232b22b" Jan 26 11:43:24 crc kubenswrapper[4867]: I0126 11:43:24.539117 4867 scope.go:117] "RemoveContainer" containerID="524b729b614b13eb6fffa7c003a25f38fef5ae3909d5352d8a359d30570aec0d" Jan 26 
11:43:24 crc kubenswrapper[4867]: E0126 11:43:24.539699 4867 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"524b729b614b13eb6fffa7c003a25f38fef5ae3909d5352d8a359d30570aec0d\": container with ID starting with 524b729b614b13eb6fffa7c003a25f38fef5ae3909d5352d8a359d30570aec0d not found: ID does not exist" containerID="524b729b614b13eb6fffa7c003a25f38fef5ae3909d5352d8a359d30570aec0d" Jan 26 11:43:24 crc kubenswrapper[4867]: I0126 11:43:24.539747 4867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"524b729b614b13eb6fffa7c003a25f38fef5ae3909d5352d8a359d30570aec0d"} err="failed to get container status \"524b729b614b13eb6fffa7c003a25f38fef5ae3909d5352d8a359d30570aec0d\": rpc error: code = NotFound desc = could not find container \"524b729b614b13eb6fffa7c003a25f38fef5ae3909d5352d8a359d30570aec0d\": container with ID starting with 524b729b614b13eb6fffa7c003a25f38fef5ae3909d5352d8a359d30570aec0d not found: ID does not exist" Jan 26 11:43:24 crc kubenswrapper[4867]: I0126 11:43:24.539780 4867 scope.go:117] "RemoveContainer" containerID="f98ccef2e713bd60219383d66cfe10c25777bcc019a5443bc3bc0ae7e016dd9a" Jan 26 11:43:24 crc kubenswrapper[4867]: E0126 11:43:24.540943 4867 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f98ccef2e713bd60219383d66cfe10c25777bcc019a5443bc3bc0ae7e016dd9a\": container with ID starting with f98ccef2e713bd60219383d66cfe10c25777bcc019a5443bc3bc0ae7e016dd9a not found: ID does not exist" containerID="f98ccef2e713bd60219383d66cfe10c25777bcc019a5443bc3bc0ae7e016dd9a" Jan 26 11:43:24 crc kubenswrapper[4867]: I0126 11:43:24.540998 4867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f98ccef2e713bd60219383d66cfe10c25777bcc019a5443bc3bc0ae7e016dd9a"} err="failed to get container status 
\"f98ccef2e713bd60219383d66cfe10c25777bcc019a5443bc3bc0ae7e016dd9a\": rpc error: code = NotFound desc = could not find container \"f98ccef2e713bd60219383d66cfe10c25777bcc019a5443bc3bc0ae7e016dd9a\": container with ID starting with f98ccef2e713bd60219383d66cfe10c25777bcc019a5443bc3bc0ae7e016dd9a not found: ID does not exist" Jan 26 11:43:24 crc kubenswrapper[4867]: I0126 11:43:24.541023 4867 scope.go:117] "RemoveContainer" containerID="5461558edc3f2843bf7d43e1650111c5b6184b9c54dd9bf50dd1353d3232b22b" Jan 26 11:43:24 crc kubenswrapper[4867]: E0126 11:43:24.541395 4867 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5461558edc3f2843bf7d43e1650111c5b6184b9c54dd9bf50dd1353d3232b22b\": container with ID starting with 5461558edc3f2843bf7d43e1650111c5b6184b9c54dd9bf50dd1353d3232b22b not found: ID does not exist" containerID="5461558edc3f2843bf7d43e1650111c5b6184b9c54dd9bf50dd1353d3232b22b" Jan 26 11:43:24 crc kubenswrapper[4867]: I0126 11:43:24.541415 4867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5461558edc3f2843bf7d43e1650111c5b6184b9c54dd9bf50dd1353d3232b22b"} err="failed to get container status \"5461558edc3f2843bf7d43e1650111c5b6184b9c54dd9bf50dd1353d3232b22b\": rpc error: code = NotFound desc = could not find container \"5461558edc3f2843bf7d43e1650111c5b6184b9c54dd9bf50dd1353d3232b22b\": container with ID starting with 5461558edc3f2843bf7d43e1650111c5b6184b9c54dd9bf50dd1353d3232b22b not found: ID does not exist" Jan 26 11:43:24 crc kubenswrapper[4867]: I0126 11:43:24.575359 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5816cd7b-5ff2-4e13-9408-df26ba85e13e" path="/var/lib/kubelet/pods/5816cd7b-5ff2-4e13-9408-df26ba85e13e/volumes" Jan 26 11:43:36 crc kubenswrapper[4867]: I0126 11:43:36.294686 4867 patch_prober.go:28] interesting pod/machine-config-daemon-g6cth container/machine-config-daemon 
namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 11:43:36 crc kubenswrapper[4867]: I0126 11:43:36.295417 4867 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-g6cth" podUID="115cad9f-057f-4e63-b408-8fa7a358a191" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 11:43:36 crc kubenswrapper[4867]: I0126 11:43:36.295497 4867 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-g6cth" Jan 26 11:43:36 crc kubenswrapper[4867]: I0126 11:43:36.296890 4867 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"7857dfd24884ee7b3544dfd9117125dc690c467738f6ed4ca3bec8ebae8c755a"} pod="openshift-machine-config-operator/machine-config-daemon-g6cth" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 26 11:43:36 crc kubenswrapper[4867]: I0126 11:43:36.297082 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-g6cth" podUID="115cad9f-057f-4e63-b408-8fa7a358a191" containerName="machine-config-daemon" containerID="cri-o://7857dfd24884ee7b3544dfd9117125dc690c467738f6ed4ca3bec8ebae8c755a" gracePeriod=600 Jan 26 11:43:36 crc kubenswrapper[4867]: E0126 11:43:36.418647 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g6cth_openshift-machine-config-operator(115cad9f-057f-4e63-b408-8fa7a358a191)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-g6cth" podUID="115cad9f-057f-4e63-b408-8fa7a358a191" Jan 26 11:43:36 crc kubenswrapper[4867]: I0126 11:43:36.526753 4867 generic.go:334] "Generic (PLEG): container finished" podID="115cad9f-057f-4e63-b408-8fa7a358a191" containerID="7857dfd24884ee7b3544dfd9117125dc690c467738f6ed4ca3bec8ebae8c755a" exitCode=0 Jan 26 11:43:36 crc kubenswrapper[4867]: I0126 11:43:36.526811 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-g6cth" event={"ID":"115cad9f-057f-4e63-b408-8fa7a358a191","Type":"ContainerDied","Data":"7857dfd24884ee7b3544dfd9117125dc690c467738f6ed4ca3bec8ebae8c755a"} Jan 26 11:43:36 crc kubenswrapper[4867]: I0126 11:43:36.526858 4867 scope.go:117] "RemoveContainer" containerID="6bb9fd5acba776380a6fa3e3d00855cfc048bc467ccbd9a88cd7ca74eccbe67f" Jan 26 11:43:36 crc kubenswrapper[4867]: I0126 11:43:36.527845 4867 scope.go:117] "RemoveContainer" containerID="7857dfd24884ee7b3544dfd9117125dc690c467738f6ed4ca3bec8ebae8c755a" Jan 26 11:43:36 crc kubenswrapper[4867]: E0126 11:43:36.528159 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g6cth_openshift-machine-config-operator(115cad9f-057f-4e63-b408-8fa7a358a191)\"" pod="openshift-machine-config-operator/machine-config-daemon-g6cth" podUID="115cad9f-057f-4e63-b408-8fa7a358a191" Jan 26 11:43:41 crc kubenswrapper[4867]: I0126 11:43:41.222853 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-wwjbj"] Jan 26 11:43:41 crc kubenswrapper[4867]: E0126 11:43:41.223913 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5816cd7b-5ff2-4e13-9408-df26ba85e13e" containerName="registry-server" Jan 26 11:43:41 crc kubenswrapper[4867]: I0126 11:43:41.223931 4867 
state_mem.go:107] "Deleted CPUSet assignment" podUID="5816cd7b-5ff2-4e13-9408-df26ba85e13e" containerName="registry-server" Jan 26 11:43:41 crc kubenswrapper[4867]: E0126 11:43:41.223947 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5816cd7b-5ff2-4e13-9408-df26ba85e13e" containerName="extract-content" Jan 26 11:43:41 crc kubenswrapper[4867]: I0126 11:43:41.223955 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="5816cd7b-5ff2-4e13-9408-df26ba85e13e" containerName="extract-content" Jan 26 11:43:41 crc kubenswrapper[4867]: E0126 11:43:41.223995 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5816cd7b-5ff2-4e13-9408-df26ba85e13e" containerName="extract-utilities" Jan 26 11:43:41 crc kubenswrapper[4867]: I0126 11:43:41.224004 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="5816cd7b-5ff2-4e13-9408-df26ba85e13e" containerName="extract-utilities" Jan 26 11:43:41 crc kubenswrapper[4867]: I0126 11:43:41.224382 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="5816cd7b-5ff2-4e13-9408-df26ba85e13e" containerName="registry-server" Jan 26 11:43:41 crc kubenswrapper[4867]: I0126 11:43:41.226104 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-wwjbj" Jan 26 11:43:41 crc kubenswrapper[4867]: I0126 11:43:41.242063 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-wwjbj"] Jan 26 11:43:41 crc kubenswrapper[4867]: I0126 11:43:41.304618 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/762e0b20-3b92-456e-b209-92aec95b1fdb-catalog-content\") pod \"certified-operators-wwjbj\" (UID: \"762e0b20-3b92-456e-b209-92aec95b1fdb\") " pod="openshift-marketplace/certified-operators-wwjbj" Jan 26 11:43:41 crc kubenswrapper[4867]: I0126 11:43:41.304684 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-42wqp\" (UniqueName: \"kubernetes.io/projected/762e0b20-3b92-456e-b209-92aec95b1fdb-kube-api-access-42wqp\") pod \"certified-operators-wwjbj\" (UID: \"762e0b20-3b92-456e-b209-92aec95b1fdb\") " pod="openshift-marketplace/certified-operators-wwjbj" Jan 26 11:43:41 crc kubenswrapper[4867]: I0126 11:43:41.305054 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/762e0b20-3b92-456e-b209-92aec95b1fdb-utilities\") pod \"certified-operators-wwjbj\" (UID: \"762e0b20-3b92-456e-b209-92aec95b1fdb\") " pod="openshift-marketplace/certified-operators-wwjbj" Jan 26 11:43:41 crc kubenswrapper[4867]: I0126 11:43:41.406730 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/762e0b20-3b92-456e-b209-92aec95b1fdb-utilities\") pod \"certified-operators-wwjbj\" (UID: \"762e0b20-3b92-456e-b209-92aec95b1fdb\") " pod="openshift-marketplace/certified-operators-wwjbj" Jan 26 11:43:41 crc kubenswrapper[4867]: I0126 11:43:41.406849 4867 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/762e0b20-3b92-456e-b209-92aec95b1fdb-catalog-content\") pod \"certified-operators-wwjbj\" (UID: \"762e0b20-3b92-456e-b209-92aec95b1fdb\") " pod="openshift-marketplace/certified-operators-wwjbj" Jan 26 11:43:41 crc kubenswrapper[4867]: I0126 11:43:41.406883 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-42wqp\" (UniqueName: \"kubernetes.io/projected/762e0b20-3b92-456e-b209-92aec95b1fdb-kube-api-access-42wqp\") pod \"certified-operators-wwjbj\" (UID: \"762e0b20-3b92-456e-b209-92aec95b1fdb\") " pod="openshift-marketplace/certified-operators-wwjbj" Jan 26 11:43:41 crc kubenswrapper[4867]: I0126 11:43:41.407413 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/762e0b20-3b92-456e-b209-92aec95b1fdb-utilities\") pod \"certified-operators-wwjbj\" (UID: \"762e0b20-3b92-456e-b209-92aec95b1fdb\") " pod="openshift-marketplace/certified-operators-wwjbj" Jan 26 11:43:41 crc kubenswrapper[4867]: I0126 11:43:41.407426 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/762e0b20-3b92-456e-b209-92aec95b1fdb-catalog-content\") pod \"certified-operators-wwjbj\" (UID: \"762e0b20-3b92-456e-b209-92aec95b1fdb\") " pod="openshift-marketplace/certified-operators-wwjbj" Jan 26 11:43:41 crc kubenswrapper[4867]: I0126 11:43:41.438395 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-42wqp\" (UniqueName: \"kubernetes.io/projected/762e0b20-3b92-456e-b209-92aec95b1fdb-kube-api-access-42wqp\") pod \"certified-operators-wwjbj\" (UID: \"762e0b20-3b92-456e-b209-92aec95b1fdb\") " pod="openshift-marketplace/certified-operators-wwjbj" Jan 26 11:43:41 crc kubenswrapper[4867]: I0126 11:43:41.545807 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-wwjbj" Jan 26 11:43:42 crc kubenswrapper[4867]: I0126 11:43:42.249156 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-wwjbj"] Jan 26 11:43:42 crc kubenswrapper[4867]: W0126 11:43:42.249347 4867 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod762e0b20_3b92_456e_b209_92aec95b1fdb.slice/crio-981dbbe5ee9a61f9df5d51ef9aa50975d3d83fc9936ba4a856d1cd15bd5bc31d WatchSource:0}: Error finding container 981dbbe5ee9a61f9df5d51ef9aa50975d3d83fc9936ba4a856d1cd15bd5bc31d: Status 404 returned error can't find the container with id 981dbbe5ee9a61f9df5d51ef9aa50975d3d83fc9936ba4a856d1cd15bd5bc31d Jan 26 11:43:42 crc kubenswrapper[4867]: I0126 11:43:42.612556 4867 generic.go:334] "Generic (PLEG): container finished" podID="762e0b20-3b92-456e-b209-92aec95b1fdb" containerID="be935c6597ee760e3e91350c2833b86186392b3c7a463d9d6c1e9584d5d929f3" exitCode=0 Jan 26 11:43:42 crc kubenswrapper[4867]: I0126 11:43:42.612625 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wwjbj" event={"ID":"762e0b20-3b92-456e-b209-92aec95b1fdb","Type":"ContainerDied","Data":"be935c6597ee760e3e91350c2833b86186392b3c7a463d9d6c1e9584d5d929f3"} Jan 26 11:43:42 crc kubenswrapper[4867]: I0126 11:43:42.613170 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wwjbj" event={"ID":"762e0b20-3b92-456e-b209-92aec95b1fdb","Type":"ContainerStarted","Data":"981dbbe5ee9a61f9df5d51ef9aa50975d3d83fc9936ba4a856d1cd15bd5bc31d"} Jan 26 11:43:42 crc kubenswrapper[4867]: I0126 11:43:42.614343 4867 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 26 11:43:43 crc kubenswrapper[4867]: I0126 11:43:43.623831 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/certified-operators-wwjbj" event={"ID":"762e0b20-3b92-456e-b209-92aec95b1fdb","Type":"ContainerStarted","Data":"95619ca606784b2ee6173ea77b406ddb9c8a802adee358c1e6c4941963e5e9d6"} Jan 26 11:43:44 crc kubenswrapper[4867]: I0126 11:43:44.638678 4867 generic.go:334] "Generic (PLEG): container finished" podID="762e0b20-3b92-456e-b209-92aec95b1fdb" containerID="95619ca606784b2ee6173ea77b406ddb9c8a802adee358c1e6c4941963e5e9d6" exitCode=0 Jan 26 11:43:44 crc kubenswrapper[4867]: I0126 11:43:44.638741 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wwjbj" event={"ID":"762e0b20-3b92-456e-b209-92aec95b1fdb","Type":"ContainerDied","Data":"95619ca606784b2ee6173ea77b406ddb9c8a802adee358c1e6c4941963e5e9d6"} Jan 26 11:43:45 crc kubenswrapper[4867]: I0126 11:43:45.658151 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wwjbj" event={"ID":"762e0b20-3b92-456e-b209-92aec95b1fdb","Type":"ContainerStarted","Data":"60f3f5fea6b03b96bdba733c2ddd8d1b048734c974a0adb2e22e67ceeef5bb90"} Jan 26 11:43:45 crc kubenswrapper[4867]: I0126 11:43:45.679423 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-wwjbj" podStartSLOduration=2.271502494 podStartE2EDuration="4.679400322s" podCreationTimestamp="2026-01-26 11:43:41 +0000 UTC" firstStartedPulling="2026-01-26 11:43:42.614133919 +0000 UTC m=+1572.312708819" lastFinishedPulling="2026-01-26 11:43:45.022031737 +0000 UTC m=+1574.720606647" observedRunningTime="2026-01-26 11:43:45.675434944 +0000 UTC m=+1575.374009854" watchObservedRunningTime="2026-01-26 11:43:45.679400322 +0000 UTC m=+1575.377975242" Jan 26 11:43:47 crc kubenswrapper[4867]: I0126 11:43:47.564360 4867 scope.go:117] "RemoveContainer" containerID="7857dfd24884ee7b3544dfd9117125dc690c467738f6ed4ca3bec8ebae8c755a" Jan 26 11:43:47 crc kubenswrapper[4867]: E0126 11:43:47.564930 4867 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g6cth_openshift-machine-config-operator(115cad9f-057f-4e63-b408-8fa7a358a191)\"" pod="openshift-machine-config-operator/machine-config-daemon-g6cth" podUID="115cad9f-057f-4e63-b408-8fa7a358a191" Jan 26 11:43:50 crc kubenswrapper[4867]: I0126 11:43:50.751591 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-s5sn6"] Jan 26 11:43:50 crc kubenswrapper[4867]: I0126 11:43:50.754580 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-s5sn6" Jan 26 11:43:50 crc kubenswrapper[4867]: I0126 11:43:50.767498 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-s5sn6"] Jan 26 11:43:50 crc kubenswrapper[4867]: I0126 11:43:50.810386 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/37c07083-b40f-4f50-9b10-2f47f77b1f3e-catalog-content\") pod \"redhat-marketplace-s5sn6\" (UID: \"37c07083-b40f-4f50-9b10-2f47f77b1f3e\") " pod="openshift-marketplace/redhat-marketplace-s5sn6" Jan 26 11:43:50 crc kubenswrapper[4867]: I0126 11:43:50.810480 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/37c07083-b40f-4f50-9b10-2f47f77b1f3e-utilities\") pod \"redhat-marketplace-s5sn6\" (UID: \"37c07083-b40f-4f50-9b10-2f47f77b1f3e\") " pod="openshift-marketplace/redhat-marketplace-s5sn6" Jan 26 11:43:50 crc kubenswrapper[4867]: I0126 11:43:50.810530 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qm5j2\" (UniqueName: 
\"kubernetes.io/projected/37c07083-b40f-4f50-9b10-2f47f77b1f3e-kube-api-access-qm5j2\") pod \"redhat-marketplace-s5sn6\" (UID: \"37c07083-b40f-4f50-9b10-2f47f77b1f3e\") " pod="openshift-marketplace/redhat-marketplace-s5sn6" Jan 26 11:43:50 crc kubenswrapper[4867]: I0126 11:43:50.912395 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/37c07083-b40f-4f50-9b10-2f47f77b1f3e-utilities\") pod \"redhat-marketplace-s5sn6\" (UID: \"37c07083-b40f-4f50-9b10-2f47f77b1f3e\") " pod="openshift-marketplace/redhat-marketplace-s5sn6" Jan 26 11:43:50 crc kubenswrapper[4867]: I0126 11:43:50.912483 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qm5j2\" (UniqueName: \"kubernetes.io/projected/37c07083-b40f-4f50-9b10-2f47f77b1f3e-kube-api-access-qm5j2\") pod \"redhat-marketplace-s5sn6\" (UID: \"37c07083-b40f-4f50-9b10-2f47f77b1f3e\") " pod="openshift-marketplace/redhat-marketplace-s5sn6" Jan 26 11:43:50 crc kubenswrapper[4867]: I0126 11:43:50.912584 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/37c07083-b40f-4f50-9b10-2f47f77b1f3e-catalog-content\") pod \"redhat-marketplace-s5sn6\" (UID: \"37c07083-b40f-4f50-9b10-2f47f77b1f3e\") " pod="openshift-marketplace/redhat-marketplace-s5sn6" Jan 26 11:43:50 crc kubenswrapper[4867]: I0126 11:43:50.913062 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/37c07083-b40f-4f50-9b10-2f47f77b1f3e-catalog-content\") pod \"redhat-marketplace-s5sn6\" (UID: \"37c07083-b40f-4f50-9b10-2f47f77b1f3e\") " pod="openshift-marketplace/redhat-marketplace-s5sn6" Jan 26 11:43:50 crc kubenswrapper[4867]: I0126 11:43:50.913065 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/37c07083-b40f-4f50-9b10-2f47f77b1f3e-utilities\") pod \"redhat-marketplace-s5sn6\" (UID: \"37c07083-b40f-4f50-9b10-2f47f77b1f3e\") " pod="openshift-marketplace/redhat-marketplace-s5sn6" Jan 26 11:43:50 crc kubenswrapper[4867]: I0126 11:43:50.935265 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qm5j2\" (UniqueName: \"kubernetes.io/projected/37c07083-b40f-4f50-9b10-2f47f77b1f3e-kube-api-access-qm5j2\") pod \"redhat-marketplace-s5sn6\" (UID: \"37c07083-b40f-4f50-9b10-2f47f77b1f3e\") " pod="openshift-marketplace/redhat-marketplace-s5sn6" Jan 26 11:43:51 crc kubenswrapper[4867]: I0126 11:43:51.083267 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-s5sn6" Jan 26 11:43:51 crc kubenswrapper[4867]: I0126 11:43:51.573128 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-wwjbj" Jan 26 11:43:51 crc kubenswrapper[4867]: I0126 11:43:51.573654 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-wwjbj" Jan 26 11:43:51 crc kubenswrapper[4867]: I0126 11:43:51.627998 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-s5sn6"] Jan 26 11:43:51 crc kubenswrapper[4867]: I0126 11:43:51.634580 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-wwjbj" Jan 26 11:43:51 crc kubenswrapper[4867]: I0126 11:43:51.715465 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-s5sn6" event={"ID":"37c07083-b40f-4f50-9b10-2f47f77b1f3e","Type":"ContainerStarted","Data":"0b552607a856d8b76763c96437c7de2607cbf62da0c978029d70c3b12b431863"} Jan 26 11:43:51 crc kubenswrapper[4867]: I0126 11:43:51.772989 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="ready" pod="openshift-marketplace/certified-operators-wwjbj" Jan 26 11:43:52 crc kubenswrapper[4867]: I0126 11:43:52.733732 4867 generic.go:334] "Generic (PLEG): container finished" podID="37c07083-b40f-4f50-9b10-2f47f77b1f3e" containerID="bacd2f2f883187676ab7652c24675266820968b9836efab01139b0c914ef7819" exitCode=0 Jan 26 11:43:52 crc kubenswrapper[4867]: I0126 11:43:52.733846 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-s5sn6" event={"ID":"37c07083-b40f-4f50-9b10-2f47f77b1f3e","Type":"ContainerDied","Data":"bacd2f2f883187676ab7652c24675266820968b9836efab01139b0c914ef7819"} Jan 26 11:43:53 crc kubenswrapper[4867]: I0126 11:43:53.746039 4867 generic.go:334] "Generic (PLEG): container finished" podID="37c07083-b40f-4f50-9b10-2f47f77b1f3e" containerID="0ba1e12b78b78de569cf15ee95fb3119f3515fcfa5c62de0b7796c626fe4afa5" exitCode=0 Jan 26 11:43:53 crc kubenswrapper[4867]: I0126 11:43:53.746123 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-s5sn6" event={"ID":"37c07083-b40f-4f50-9b10-2f47f77b1f3e","Type":"ContainerDied","Data":"0ba1e12b78b78de569cf15ee95fb3119f3515fcfa5c62de0b7796c626fe4afa5"} Jan 26 11:43:53 crc kubenswrapper[4867]: I0126 11:43:53.926438 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-wwjbj"] Jan 26 11:43:54 crc kubenswrapper[4867]: I0126 11:43:54.758255 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-s5sn6" event={"ID":"37c07083-b40f-4f50-9b10-2f47f77b1f3e","Type":"ContainerStarted","Data":"2ea9a92ad906e701c0782023c06240d5c7ab62cf2c23a6a9da7b785fc0c42f73"} Jan 26 11:43:54 crc kubenswrapper[4867]: I0126 11:43:54.758429 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-wwjbj" podUID="762e0b20-3b92-456e-b209-92aec95b1fdb" containerName="registry-server" 
containerID="cri-o://60f3f5fea6b03b96bdba733c2ddd8d1b048734c974a0adb2e22e67ceeef5bb90" gracePeriod=2 Jan 26 11:43:54 crc kubenswrapper[4867]: I0126 11:43:54.795148 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-s5sn6" podStartSLOduration=3.409710134 podStartE2EDuration="4.795129405s" podCreationTimestamp="2026-01-26 11:43:50 +0000 UTC" firstStartedPulling="2026-01-26 11:43:52.736204601 +0000 UTC m=+1582.434779501" lastFinishedPulling="2026-01-26 11:43:54.121623872 +0000 UTC m=+1583.820198772" observedRunningTime="2026-01-26 11:43:54.785161795 +0000 UTC m=+1584.483736705" watchObservedRunningTime="2026-01-26 11:43:54.795129405 +0000 UTC m=+1584.493704305" Jan 26 11:43:55 crc kubenswrapper[4867]: I0126 11:43:55.278550 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-wwjbj" Jan 26 11:43:55 crc kubenswrapper[4867]: I0126 11:43:55.406008 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-42wqp\" (UniqueName: \"kubernetes.io/projected/762e0b20-3b92-456e-b209-92aec95b1fdb-kube-api-access-42wqp\") pod \"762e0b20-3b92-456e-b209-92aec95b1fdb\" (UID: \"762e0b20-3b92-456e-b209-92aec95b1fdb\") " Jan 26 11:43:55 crc kubenswrapper[4867]: I0126 11:43:55.406121 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/762e0b20-3b92-456e-b209-92aec95b1fdb-utilities\") pod \"762e0b20-3b92-456e-b209-92aec95b1fdb\" (UID: \"762e0b20-3b92-456e-b209-92aec95b1fdb\") " Jan 26 11:43:55 crc kubenswrapper[4867]: I0126 11:43:55.406189 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/762e0b20-3b92-456e-b209-92aec95b1fdb-catalog-content\") pod \"762e0b20-3b92-456e-b209-92aec95b1fdb\" (UID: \"762e0b20-3b92-456e-b209-92aec95b1fdb\") 
" Jan 26 11:43:55 crc kubenswrapper[4867]: I0126 11:43:55.406793 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/762e0b20-3b92-456e-b209-92aec95b1fdb-utilities" (OuterVolumeSpecName: "utilities") pod "762e0b20-3b92-456e-b209-92aec95b1fdb" (UID: "762e0b20-3b92-456e-b209-92aec95b1fdb"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 11:43:55 crc kubenswrapper[4867]: I0126 11:43:55.407113 4867 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/762e0b20-3b92-456e-b209-92aec95b1fdb-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 11:43:55 crc kubenswrapper[4867]: I0126 11:43:55.411822 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/762e0b20-3b92-456e-b209-92aec95b1fdb-kube-api-access-42wqp" (OuterVolumeSpecName: "kube-api-access-42wqp") pod "762e0b20-3b92-456e-b209-92aec95b1fdb" (UID: "762e0b20-3b92-456e-b209-92aec95b1fdb"). InnerVolumeSpecName "kube-api-access-42wqp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 11:43:55 crc kubenswrapper[4867]: I0126 11:43:55.465033 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/762e0b20-3b92-456e-b209-92aec95b1fdb-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "762e0b20-3b92-456e-b209-92aec95b1fdb" (UID: "762e0b20-3b92-456e-b209-92aec95b1fdb"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 11:43:55 crc kubenswrapper[4867]: I0126 11:43:55.509051 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-42wqp\" (UniqueName: \"kubernetes.io/projected/762e0b20-3b92-456e-b209-92aec95b1fdb-kube-api-access-42wqp\") on node \"crc\" DevicePath \"\"" Jan 26 11:43:55 crc kubenswrapper[4867]: I0126 11:43:55.509103 4867 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/762e0b20-3b92-456e-b209-92aec95b1fdb-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 11:43:55 crc kubenswrapper[4867]: I0126 11:43:55.773284 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-wwjbj" Jan 26 11:43:55 crc kubenswrapper[4867]: I0126 11:43:55.773265 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wwjbj" event={"ID":"762e0b20-3b92-456e-b209-92aec95b1fdb","Type":"ContainerDied","Data":"60f3f5fea6b03b96bdba733c2ddd8d1b048734c974a0adb2e22e67ceeef5bb90"} Jan 26 11:43:55 crc kubenswrapper[4867]: I0126 11:43:55.773743 4867 scope.go:117] "RemoveContainer" containerID="60f3f5fea6b03b96bdba733c2ddd8d1b048734c974a0adb2e22e67ceeef5bb90" Jan 26 11:43:55 crc kubenswrapper[4867]: I0126 11:43:55.773172 4867 generic.go:334] "Generic (PLEG): container finished" podID="762e0b20-3b92-456e-b209-92aec95b1fdb" containerID="60f3f5fea6b03b96bdba733c2ddd8d1b048734c974a0adb2e22e67ceeef5bb90" exitCode=0 Jan 26 11:43:55 crc kubenswrapper[4867]: I0126 11:43:55.773919 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wwjbj" event={"ID":"762e0b20-3b92-456e-b209-92aec95b1fdb","Type":"ContainerDied","Data":"981dbbe5ee9a61f9df5d51ef9aa50975d3d83fc9936ba4a856d1cd15bd5bc31d"} Jan 26 11:43:55 crc kubenswrapper[4867]: I0126 11:43:55.801304 4867 scope.go:117] "RemoveContainer" 
containerID="95619ca606784b2ee6173ea77b406ddb9c8a802adee358c1e6c4941963e5e9d6" Jan 26 11:43:55 crc kubenswrapper[4867]: I0126 11:43:55.821695 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-wwjbj"] Jan 26 11:43:55 crc kubenswrapper[4867]: I0126 11:43:55.843944 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-wwjbj"] Jan 26 11:43:55 crc kubenswrapper[4867]: I0126 11:43:55.845371 4867 scope.go:117] "RemoveContainer" containerID="be935c6597ee760e3e91350c2833b86186392b3c7a463d9d6c1e9584d5d929f3" Jan 26 11:43:55 crc kubenswrapper[4867]: I0126 11:43:55.907548 4867 scope.go:117] "RemoveContainer" containerID="60f3f5fea6b03b96bdba733c2ddd8d1b048734c974a0adb2e22e67ceeef5bb90" Jan 26 11:43:55 crc kubenswrapper[4867]: E0126 11:43:55.908957 4867 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"60f3f5fea6b03b96bdba733c2ddd8d1b048734c974a0adb2e22e67ceeef5bb90\": container with ID starting with 60f3f5fea6b03b96bdba733c2ddd8d1b048734c974a0adb2e22e67ceeef5bb90 not found: ID does not exist" containerID="60f3f5fea6b03b96bdba733c2ddd8d1b048734c974a0adb2e22e67ceeef5bb90" Jan 26 11:43:55 crc kubenswrapper[4867]: I0126 11:43:55.909119 4867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"60f3f5fea6b03b96bdba733c2ddd8d1b048734c974a0adb2e22e67ceeef5bb90"} err="failed to get container status \"60f3f5fea6b03b96bdba733c2ddd8d1b048734c974a0adb2e22e67ceeef5bb90\": rpc error: code = NotFound desc = could not find container \"60f3f5fea6b03b96bdba733c2ddd8d1b048734c974a0adb2e22e67ceeef5bb90\": container with ID starting with 60f3f5fea6b03b96bdba733c2ddd8d1b048734c974a0adb2e22e67ceeef5bb90 not found: ID does not exist" Jan 26 11:43:55 crc kubenswrapper[4867]: I0126 11:43:55.909273 4867 scope.go:117] "RemoveContainer" 
containerID="95619ca606784b2ee6173ea77b406ddb9c8a802adee358c1e6c4941963e5e9d6" Jan 26 11:43:55 crc kubenswrapper[4867]: E0126 11:43:55.909766 4867 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"95619ca606784b2ee6173ea77b406ddb9c8a802adee358c1e6c4941963e5e9d6\": container with ID starting with 95619ca606784b2ee6173ea77b406ddb9c8a802adee358c1e6c4941963e5e9d6 not found: ID does not exist" containerID="95619ca606784b2ee6173ea77b406ddb9c8a802adee358c1e6c4941963e5e9d6" Jan 26 11:43:55 crc kubenswrapper[4867]: I0126 11:43:55.909818 4867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"95619ca606784b2ee6173ea77b406ddb9c8a802adee358c1e6c4941963e5e9d6"} err="failed to get container status \"95619ca606784b2ee6173ea77b406ddb9c8a802adee358c1e6c4941963e5e9d6\": rpc error: code = NotFound desc = could not find container \"95619ca606784b2ee6173ea77b406ddb9c8a802adee358c1e6c4941963e5e9d6\": container with ID starting with 95619ca606784b2ee6173ea77b406ddb9c8a802adee358c1e6c4941963e5e9d6 not found: ID does not exist" Jan 26 11:43:55 crc kubenswrapper[4867]: I0126 11:43:55.909845 4867 scope.go:117] "RemoveContainer" containerID="be935c6597ee760e3e91350c2833b86186392b3c7a463d9d6c1e9584d5d929f3" Jan 26 11:43:55 crc kubenswrapper[4867]: E0126 11:43:55.910140 4867 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"be935c6597ee760e3e91350c2833b86186392b3c7a463d9d6c1e9584d5d929f3\": container with ID starting with be935c6597ee760e3e91350c2833b86186392b3c7a463d9d6c1e9584d5d929f3 not found: ID does not exist" containerID="be935c6597ee760e3e91350c2833b86186392b3c7a463d9d6c1e9584d5d929f3" Jan 26 11:43:55 crc kubenswrapper[4867]: I0126 11:43:55.910318 4867 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"be935c6597ee760e3e91350c2833b86186392b3c7a463d9d6c1e9584d5d929f3"} err="failed to get container status \"be935c6597ee760e3e91350c2833b86186392b3c7a463d9d6c1e9584d5d929f3\": rpc error: code = NotFound desc = could not find container \"be935c6597ee760e3e91350c2833b86186392b3c7a463d9d6c1e9584d5d929f3\": container with ID starting with be935c6597ee760e3e91350c2833b86186392b3c7a463d9d6c1e9584d5d929f3 not found: ID does not exist" Jan 26 11:43:56 crc kubenswrapper[4867]: I0126 11:43:56.581306 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="762e0b20-3b92-456e-b209-92aec95b1fdb" path="/var/lib/kubelet/pods/762e0b20-3b92-456e-b209-92aec95b1fdb/volumes" Jan 26 11:43:58 crc kubenswrapper[4867]: I0126 11:43:58.564001 4867 scope.go:117] "RemoveContainer" containerID="7857dfd24884ee7b3544dfd9117125dc690c467738f6ed4ca3bec8ebae8c755a" Jan 26 11:43:58 crc kubenswrapper[4867]: E0126 11:43:58.564768 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g6cth_openshift-machine-config-operator(115cad9f-057f-4e63-b408-8fa7a358a191)\"" pod="openshift-machine-config-operator/machine-config-daemon-g6cth" podUID="115cad9f-057f-4e63-b408-8fa7a358a191" Jan 26 11:44:01 crc kubenswrapper[4867]: I0126 11:44:01.085030 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-s5sn6" Jan 26 11:44:01 crc kubenswrapper[4867]: I0126 11:44:01.085437 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-s5sn6" Jan 26 11:44:01 crc kubenswrapper[4867]: I0126 11:44:01.131944 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-s5sn6" Jan 26 11:44:01 crc kubenswrapper[4867]: I0126 
11:44:01.873143 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-s5sn6" Jan 26 11:44:01 crc kubenswrapper[4867]: I0126 11:44:01.927458 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-s5sn6"] Jan 26 11:44:03 crc kubenswrapper[4867]: I0126 11:44:03.848786 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-s5sn6" podUID="37c07083-b40f-4f50-9b10-2f47f77b1f3e" containerName="registry-server" containerID="cri-o://2ea9a92ad906e701c0782023c06240d5c7ab62cf2c23a6a9da7b785fc0c42f73" gracePeriod=2 Jan 26 11:44:04 crc kubenswrapper[4867]: I0126 11:44:04.325254 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-s5sn6" Jan 26 11:44:04 crc kubenswrapper[4867]: I0126 11:44:04.511007 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/37c07083-b40f-4f50-9b10-2f47f77b1f3e-catalog-content\") pod \"37c07083-b40f-4f50-9b10-2f47f77b1f3e\" (UID: \"37c07083-b40f-4f50-9b10-2f47f77b1f3e\") " Jan 26 11:44:04 crc kubenswrapper[4867]: I0126 11:44:04.511181 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qm5j2\" (UniqueName: \"kubernetes.io/projected/37c07083-b40f-4f50-9b10-2f47f77b1f3e-kube-api-access-qm5j2\") pod \"37c07083-b40f-4f50-9b10-2f47f77b1f3e\" (UID: \"37c07083-b40f-4f50-9b10-2f47f77b1f3e\") " Jan 26 11:44:04 crc kubenswrapper[4867]: I0126 11:44:04.511214 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/37c07083-b40f-4f50-9b10-2f47f77b1f3e-utilities\") pod \"37c07083-b40f-4f50-9b10-2f47f77b1f3e\" (UID: \"37c07083-b40f-4f50-9b10-2f47f77b1f3e\") " Jan 26 11:44:04 crc kubenswrapper[4867]: I0126 
11:44:04.513797 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/37c07083-b40f-4f50-9b10-2f47f77b1f3e-utilities" (OuterVolumeSpecName: "utilities") pod "37c07083-b40f-4f50-9b10-2f47f77b1f3e" (UID: "37c07083-b40f-4f50-9b10-2f47f77b1f3e"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 11:44:04 crc kubenswrapper[4867]: I0126 11:44:04.534817 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/37c07083-b40f-4f50-9b10-2f47f77b1f3e-kube-api-access-qm5j2" (OuterVolumeSpecName: "kube-api-access-qm5j2") pod "37c07083-b40f-4f50-9b10-2f47f77b1f3e" (UID: "37c07083-b40f-4f50-9b10-2f47f77b1f3e"). InnerVolumeSpecName "kube-api-access-qm5j2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 11:44:04 crc kubenswrapper[4867]: I0126 11:44:04.548049 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/37c07083-b40f-4f50-9b10-2f47f77b1f3e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "37c07083-b40f-4f50-9b10-2f47f77b1f3e" (UID: "37c07083-b40f-4f50-9b10-2f47f77b1f3e"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 11:44:04 crc kubenswrapper[4867]: I0126 11:44:04.613317 4867 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/37c07083-b40f-4f50-9b10-2f47f77b1f3e-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 11:44:04 crc kubenswrapper[4867]: I0126 11:44:04.613358 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qm5j2\" (UniqueName: \"kubernetes.io/projected/37c07083-b40f-4f50-9b10-2f47f77b1f3e-kube-api-access-qm5j2\") on node \"crc\" DevicePath \"\"" Jan 26 11:44:04 crc kubenswrapper[4867]: I0126 11:44:04.613369 4867 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/37c07083-b40f-4f50-9b10-2f47f77b1f3e-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 11:44:04 crc kubenswrapper[4867]: I0126 11:44:04.860023 4867 generic.go:334] "Generic (PLEG): container finished" podID="37c07083-b40f-4f50-9b10-2f47f77b1f3e" containerID="2ea9a92ad906e701c0782023c06240d5c7ab62cf2c23a6a9da7b785fc0c42f73" exitCode=0 Jan 26 11:44:04 crc kubenswrapper[4867]: I0126 11:44:04.860071 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-s5sn6" event={"ID":"37c07083-b40f-4f50-9b10-2f47f77b1f3e","Type":"ContainerDied","Data":"2ea9a92ad906e701c0782023c06240d5c7ab62cf2c23a6a9da7b785fc0c42f73"} Jan 26 11:44:04 crc kubenswrapper[4867]: I0126 11:44:04.860101 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-s5sn6" event={"ID":"37c07083-b40f-4f50-9b10-2f47f77b1f3e","Type":"ContainerDied","Data":"0b552607a856d8b76763c96437c7de2607cbf62da0c978029d70c3b12b431863"} Jan 26 11:44:04 crc kubenswrapper[4867]: I0126 11:44:04.860105 4867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-s5sn6" Jan 26 11:44:04 crc kubenswrapper[4867]: I0126 11:44:04.860120 4867 scope.go:117] "RemoveContainer" containerID="2ea9a92ad906e701c0782023c06240d5c7ab62cf2c23a6a9da7b785fc0c42f73" Jan 26 11:44:04 crc kubenswrapper[4867]: I0126 11:44:04.892068 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-s5sn6"] Jan 26 11:44:04 crc kubenswrapper[4867]: I0126 11:44:04.897424 4867 scope.go:117] "RemoveContainer" containerID="0ba1e12b78b78de569cf15ee95fb3119f3515fcfa5c62de0b7796c626fe4afa5" Jan 26 11:44:04 crc kubenswrapper[4867]: I0126 11:44:04.900779 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-s5sn6"] Jan 26 11:44:04 crc kubenswrapper[4867]: I0126 11:44:04.919429 4867 scope.go:117] "RemoveContainer" containerID="bacd2f2f883187676ab7652c24675266820968b9836efab01139b0c914ef7819" Jan 26 11:44:04 crc kubenswrapper[4867]: I0126 11:44:04.959746 4867 scope.go:117] "RemoveContainer" containerID="2ea9a92ad906e701c0782023c06240d5c7ab62cf2c23a6a9da7b785fc0c42f73" Jan 26 11:44:04 crc kubenswrapper[4867]: E0126 11:44:04.960264 4867 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2ea9a92ad906e701c0782023c06240d5c7ab62cf2c23a6a9da7b785fc0c42f73\": container with ID starting with 2ea9a92ad906e701c0782023c06240d5c7ab62cf2c23a6a9da7b785fc0c42f73 not found: ID does not exist" containerID="2ea9a92ad906e701c0782023c06240d5c7ab62cf2c23a6a9da7b785fc0c42f73" Jan 26 11:44:04 crc kubenswrapper[4867]: I0126 11:44:04.960303 4867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2ea9a92ad906e701c0782023c06240d5c7ab62cf2c23a6a9da7b785fc0c42f73"} err="failed to get container status \"2ea9a92ad906e701c0782023c06240d5c7ab62cf2c23a6a9da7b785fc0c42f73\": rpc error: code = NotFound desc = could not find container 
\"2ea9a92ad906e701c0782023c06240d5c7ab62cf2c23a6a9da7b785fc0c42f73\": container with ID starting with 2ea9a92ad906e701c0782023c06240d5c7ab62cf2c23a6a9da7b785fc0c42f73 not found: ID does not exist" Jan 26 11:44:04 crc kubenswrapper[4867]: I0126 11:44:04.960332 4867 scope.go:117] "RemoveContainer" containerID="0ba1e12b78b78de569cf15ee95fb3119f3515fcfa5c62de0b7796c626fe4afa5" Jan 26 11:44:04 crc kubenswrapper[4867]: E0126 11:44:04.961669 4867 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0ba1e12b78b78de569cf15ee95fb3119f3515fcfa5c62de0b7796c626fe4afa5\": container with ID starting with 0ba1e12b78b78de569cf15ee95fb3119f3515fcfa5c62de0b7796c626fe4afa5 not found: ID does not exist" containerID="0ba1e12b78b78de569cf15ee95fb3119f3515fcfa5c62de0b7796c626fe4afa5" Jan 26 11:44:04 crc kubenswrapper[4867]: I0126 11:44:04.961782 4867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0ba1e12b78b78de569cf15ee95fb3119f3515fcfa5c62de0b7796c626fe4afa5"} err="failed to get container status \"0ba1e12b78b78de569cf15ee95fb3119f3515fcfa5c62de0b7796c626fe4afa5\": rpc error: code = NotFound desc = could not find container \"0ba1e12b78b78de569cf15ee95fb3119f3515fcfa5c62de0b7796c626fe4afa5\": container with ID starting with 0ba1e12b78b78de569cf15ee95fb3119f3515fcfa5c62de0b7796c626fe4afa5 not found: ID does not exist" Jan 26 11:44:04 crc kubenswrapper[4867]: I0126 11:44:04.961873 4867 scope.go:117] "RemoveContainer" containerID="bacd2f2f883187676ab7652c24675266820968b9836efab01139b0c914ef7819" Jan 26 11:44:04 crc kubenswrapper[4867]: E0126 11:44:04.962385 4867 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bacd2f2f883187676ab7652c24675266820968b9836efab01139b0c914ef7819\": container with ID starting with bacd2f2f883187676ab7652c24675266820968b9836efab01139b0c914ef7819 not found: ID does not exist" 
containerID="bacd2f2f883187676ab7652c24675266820968b9836efab01139b0c914ef7819" Jan 26 11:44:04 crc kubenswrapper[4867]: I0126 11:44:04.962408 4867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bacd2f2f883187676ab7652c24675266820968b9836efab01139b0c914ef7819"} err="failed to get container status \"bacd2f2f883187676ab7652c24675266820968b9836efab01139b0c914ef7819\": rpc error: code = NotFound desc = could not find container \"bacd2f2f883187676ab7652c24675266820968b9836efab01139b0c914ef7819\": container with ID starting with bacd2f2f883187676ab7652c24675266820968b9836efab01139b0c914ef7819 not found: ID does not exist" Jan 26 11:44:06 crc kubenswrapper[4867]: I0126 11:44:06.575912 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="37c07083-b40f-4f50-9b10-2f47f77b1f3e" path="/var/lib/kubelet/pods/37c07083-b40f-4f50-9b10-2f47f77b1f3e/volumes" Jan 26 11:44:09 crc kubenswrapper[4867]: I0126 11:44:09.563991 4867 scope.go:117] "RemoveContainer" containerID="7857dfd24884ee7b3544dfd9117125dc690c467738f6ed4ca3bec8ebae8c755a" Jan 26 11:44:09 crc kubenswrapper[4867]: E0126 11:44:09.564861 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g6cth_openshift-machine-config-operator(115cad9f-057f-4e63-b408-8fa7a358a191)\"" pod="openshift-machine-config-operator/machine-config-daemon-g6cth" podUID="115cad9f-057f-4e63-b408-8fa7a358a191" Jan 26 11:44:22 crc kubenswrapper[4867]: I0126 11:44:22.563666 4867 scope.go:117] "RemoveContainer" containerID="7857dfd24884ee7b3544dfd9117125dc690c467738f6ed4ca3bec8ebae8c755a" Jan 26 11:44:22 crc kubenswrapper[4867]: E0126 11:44:22.565283 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s 
restarting failed container=machine-config-daemon pod=machine-config-daemon-g6cth_openshift-machine-config-operator(115cad9f-057f-4e63-b408-8fa7a358a191)\"" pod="openshift-machine-config-operator/machine-config-daemon-g6cth" podUID="115cad9f-057f-4e63-b408-8fa7a358a191" Jan 26 11:44:33 crc kubenswrapper[4867]: I0126 11:44:33.565153 4867 scope.go:117] "RemoveContainer" containerID="7857dfd24884ee7b3544dfd9117125dc690c467738f6ed4ca3bec8ebae8c755a" Jan 26 11:44:33 crc kubenswrapper[4867]: E0126 11:44:33.565946 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g6cth_openshift-machine-config-operator(115cad9f-057f-4e63-b408-8fa7a358a191)\"" pod="openshift-machine-config-operator/machine-config-daemon-g6cth" podUID="115cad9f-057f-4e63-b408-8fa7a358a191" Jan 26 11:44:46 crc kubenswrapper[4867]: I0126 11:44:46.564140 4867 scope.go:117] "RemoveContainer" containerID="7857dfd24884ee7b3544dfd9117125dc690c467738f6ed4ca3bec8ebae8c755a" Jan 26 11:44:46 crc kubenswrapper[4867]: E0126 11:44:46.566215 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g6cth_openshift-machine-config-operator(115cad9f-057f-4e63-b408-8fa7a358a191)\"" pod="openshift-machine-config-operator/machine-config-daemon-g6cth" podUID="115cad9f-057f-4e63-b408-8fa7a358a191" Jan 26 11:44:58 crc kubenswrapper[4867]: I0126 11:44:58.564367 4867 scope.go:117] "RemoveContainer" containerID="7857dfd24884ee7b3544dfd9117125dc690c467738f6ed4ca3bec8ebae8c755a" Jan 26 11:44:58 crc kubenswrapper[4867]: E0126 11:44:58.565319 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: 
\"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g6cth_openshift-machine-config-operator(115cad9f-057f-4e63-b408-8fa7a358a191)\"" pod="openshift-machine-config-operator/machine-config-daemon-g6cth" podUID="115cad9f-057f-4e63-b408-8fa7a358a191" Jan 26 11:45:00 crc kubenswrapper[4867]: I0126 11:45:00.147969 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29490465-4jd96"] Jan 26 11:45:00 crc kubenswrapper[4867]: E0126 11:45:00.148662 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="37c07083-b40f-4f50-9b10-2f47f77b1f3e" containerName="registry-server" Jan 26 11:45:00 crc kubenswrapper[4867]: I0126 11:45:00.148677 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="37c07083-b40f-4f50-9b10-2f47f77b1f3e" containerName="registry-server" Jan 26 11:45:00 crc kubenswrapper[4867]: E0126 11:45:00.148689 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="37c07083-b40f-4f50-9b10-2f47f77b1f3e" containerName="extract-content" Jan 26 11:45:00 crc kubenswrapper[4867]: I0126 11:45:00.148696 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="37c07083-b40f-4f50-9b10-2f47f77b1f3e" containerName="extract-content" Jan 26 11:45:00 crc kubenswrapper[4867]: E0126 11:45:00.148710 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="762e0b20-3b92-456e-b209-92aec95b1fdb" containerName="registry-server" Jan 26 11:45:00 crc kubenswrapper[4867]: I0126 11:45:00.148721 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="762e0b20-3b92-456e-b209-92aec95b1fdb" containerName="registry-server" Jan 26 11:45:00 crc kubenswrapper[4867]: E0126 11:45:00.148734 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="762e0b20-3b92-456e-b209-92aec95b1fdb" containerName="extract-content" Jan 26 11:45:00 crc kubenswrapper[4867]: I0126 11:45:00.148741 4867 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="762e0b20-3b92-456e-b209-92aec95b1fdb" containerName="extract-content" Jan 26 11:45:00 crc kubenswrapper[4867]: E0126 11:45:00.148763 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="762e0b20-3b92-456e-b209-92aec95b1fdb" containerName="extract-utilities" Jan 26 11:45:00 crc kubenswrapper[4867]: I0126 11:45:00.148771 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="762e0b20-3b92-456e-b209-92aec95b1fdb" containerName="extract-utilities" Jan 26 11:45:00 crc kubenswrapper[4867]: E0126 11:45:00.148791 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="37c07083-b40f-4f50-9b10-2f47f77b1f3e" containerName="extract-utilities" Jan 26 11:45:00 crc kubenswrapper[4867]: I0126 11:45:00.148798 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="37c07083-b40f-4f50-9b10-2f47f77b1f3e" containerName="extract-utilities" Jan 26 11:45:00 crc kubenswrapper[4867]: I0126 11:45:00.148960 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="37c07083-b40f-4f50-9b10-2f47f77b1f3e" containerName="registry-server" Jan 26 11:45:00 crc kubenswrapper[4867]: I0126 11:45:00.148973 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="762e0b20-3b92-456e-b209-92aec95b1fdb" containerName="registry-server" Jan 26 11:45:00 crc kubenswrapper[4867]: I0126 11:45:00.149568 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29490465-4jd96" Jan 26 11:45:00 crc kubenswrapper[4867]: I0126 11:45:00.151898 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 26 11:45:00 crc kubenswrapper[4867]: I0126 11:45:00.152996 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 26 11:45:00 crc kubenswrapper[4867]: I0126 11:45:00.166199 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29490465-4jd96"] Jan 26 11:45:00 crc kubenswrapper[4867]: I0126 11:45:00.198552 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xcbpp\" (UniqueName: \"kubernetes.io/projected/1387e72e-bdcc-4deb-94fe-8cd4f8d0707b-kube-api-access-xcbpp\") pod \"collect-profiles-29490465-4jd96\" (UID: \"1387e72e-bdcc-4deb-94fe-8cd4f8d0707b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490465-4jd96" Jan 26 11:45:00 crc kubenswrapper[4867]: I0126 11:45:00.198694 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1387e72e-bdcc-4deb-94fe-8cd4f8d0707b-config-volume\") pod \"collect-profiles-29490465-4jd96\" (UID: \"1387e72e-bdcc-4deb-94fe-8cd4f8d0707b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490465-4jd96" Jan 26 11:45:00 crc kubenswrapper[4867]: I0126 11:45:00.198732 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/1387e72e-bdcc-4deb-94fe-8cd4f8d0707b-secret-volume\") pod \"collect-profiles-29490465-4jd96\" (UID: \"1387e72e-bdcc-4deb-94fe-8cd4f8d0707b\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29490465-4jd96" Jan 26 11:45:00 crc kubenswrapper[4867]: I0126 11:45:00.301692 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xcbpp\" (UniqueName: \"kubernetes.io/projected/1387e72e-bdcc-4deb-94fe-8cd4f8d0707b-kube-api-access-xcbpp\") pod \"collect-profiles-29490465-4jd96\" (UID: \"1387e72e-bdcc-4deb-94fe-8cd4f8d0707b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490465-4jd96" Jan 26 11:45:00 crc kubenswrapper[4867]: I0126 11:45:00.303022 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1387e72e-bdcc-4deb-94fe-8cd4f8d0707b-config-volume\") pod \"collect-profiles-29490465-4jd96\" (UID: \"1387e72e-bdcc-4deb-94fe-8cd4f8d0707b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490465-4jd96" Jan 26 11:45:00 crc kubenswrapper[4867]: I0126 11:45:00.303169 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/1387e72e-bdcc-4deb-94fe-8cd4f8d0707b-secret-volume\") pod \"collect-profiles-29490465-4jd96\" (UID: \"1387e72e-bdcc-4deb-94fe-8cd4f8d0707b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490465-4jd96" Jan 26 11:45:00 crc kubenswrapper[4867]: I0126 11:45:00.303915 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1387e72e-bdcc-4deb-94fe-8cd4f8d0707b-config-volume\") pod \"collect-profiles-29490465-4jd96\" (UID: \"1387e72e-bdcc-4deb-94fe-8cd4f8d0707b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490465-4jd96" Jan 26 11:45:00 crc kubenswrapper[4867]: I0126 11:45:00.309084 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: 
\"kubernetes.io/secret/1387e72e-bdcc-4deb-94fe-8cd4f8d0707b-secret-volume\") pod \"collect-profiles-29490465-4jd96\" (UID: \"1387e72e-bdcc-4deb-94fe-8cd4f8d0707b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490465-4jd96" Jan 26 11:45:00 crc kubenswrapper[4867]: I0126 11:45:00.322385 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xcbpp\" (UniqueName: \"kubernetes.io/projected/1387e72e-bdcc-4deb-94fe-8cd4f8d0707b-kube-api-access-xcbpp\") pod \"collect-profiles-29490465-4jd96\" (UID: \"1387e72e-bdcc-4deb-94fe-8cd4f8d0707b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490465-4jd96" Jan 26 11:45:00 crc kubenswrapper[4867]: I0126 11:45:00.482089 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29490465-4jd96" Jan 26 11:45:00 crc kubenswrapper[4867]: I0126 11:45:00.974563 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29490465-4jd96"] Jan 26 11:45:01 crc kubenswrapper[4867]: I0126 11:45:01.446752 4867 generic.go:334] "Generic (PLEG): container finished" podID="1387e72e-bdcc-4deb-94fe-8cd4f8d0707b" containerID="54693602bc11ea16cef3b499e1bfc002055494fdc0b81d7ca5b31bb92a3638cc" exitCode=0 Jan 26 11:45:01 crc kubenswrapper[4867]: I0126 11:45:01.446807 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29490465-4jd96" event={"ID":"1387e72e-bdcc-4deb-94fe-8cd4f8d0707b","Type":"ContainerDied","Data":"54693602bc11ea16cef3b499e1bfc002055494fdc0b81d7ca5b31bb92a3638cc"} Jan 26 11:45:01 crc kubenswrapper[4867]: I0126 11:45:01.446840 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29490465-4jd96" 
event={"ID":"1387e72e-bdcc-4deb-94fe-8cd4f8d0707b","Type":"ContainerStarted","Data":"df6a0fe0f7edb1321724eee306d6b0dd08e9bca68cf40156dd2b677edae6d640"} Jan 26 11:45:02 crc kubenswrapper[4867]: I0126 11:45:02.840417 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29490465-4jd96" Jan 26 11:45:02 crc kubenswrapper[4867]: I0126 11:45:02.952991 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcbpp\" (UniqueName: \"kubernetes.io/projected/1387e72e-bdcc-4deb-94fe-8cd4f8d0707b-kube-api-access-xcbpp\") pod \"1387e72e-bdcc-4deb-94fe-8cd4f8d0707b\" (UID: \"1387e72e-bdcc-4deb-94fe-8cd4f8d0707b\") " Jan 26 11:45:02 crc kubenswrapper[4867]: I0126 11:45:02.953047 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/1387e72e-bdcc-4deb-94fe-8cd4f8d0707b-secret-volume\") pod \"1387e72e-bdcc-4deb-94fe-8cd4f8d0707b\" (UID: \"1387e72e-bdcc-4deb-94fe-8cd4f8d0707b\") " Jan 26 11:45:02 crc kubenswrapper[4867]: I0126 11:45:02.953183 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1387e72e-bdcc-4deb-94fe-8cd4f8d0707b-config-volume\") pod \"1387e72e-bdcc-4deb-94fe-8cd4f8d0707b\" (UID: \"1387e72e-bdcc-4deb-94fe-8cd4f8d0707b\") " Jan 26 11:45:02 crc kubenswrapper[4867]: I0126 11:45:02.954184 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1387e72e-bdcc-4deb-94fe-8cd4f8d0707b-config-volume" (OuterVolumeSpecName: "config-volume") pod "1387e72e-bdcc-4deb-94fe-8cd4f8d0707b" (UID: "1387e72e-bdcc-4deb-94fe-8cd4f8d0707b"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 11:45:02 crc kubenswrapper[4867]: I0126 11:45:02.954776 4867 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1387e72e-bdcc-4deb-94fe-8cd4f8d0707b-config-volume\") on node \"crc\" DevicePath \"\"" Jan 26 11:45:02 crc kubenswrapper[4867]: I0126 11:45:02.959053 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1387e72e-bdcc-4deb-94fe-8cd4f8d0707b-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "1387e72e-bdcc-4deb-94fe-8cd4f8d0707b" (UID: "1387e72e-bdcc-4deb-94fe-8cd4f8d0707b"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 11:45:02 crc kubenswrapper[4867]: I0126 11:45:02.966412 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1387e72e-bdcc-4deb-94fe-8cd4f8d0707b-kube-api-access-xcbpp" (OuterVolumeSpecName: "kube-api-access-xcbpp") pod "1387e72e-bdcc-4deb-94fe-8cd4f8d0707b" (UID: "1387e72e-bdcc-4deb-94fe-8cd4f8d0707b"). InnerVolumeSpecName "kube-api-access-xcbpp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 11:45:03 crc kubenswrapper[4867]: I0126 11:45:03.056819 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcbpp\" (UniqueName: \"kubernetes.io/projected/1387e72e-bdcc-4deb-94fe-8cd4f8d0707b-kube-api-access-xcbpp\") on node \"crc\" DevicePath \"\"" Jan 26 11:45:03 crc kubenswrapper[4867]: I0126 11:45:03.056887 4867 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/1387e72e-bdcc-4deb-94fe-8cd4f8d0707b-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 26 11:45:03 crc kubenswrapper[4867]: I0126 11:45:03.464942 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29490465-4jd96" event={"ID":"1387e72e-bdcc-4deb-94fe-8cd4f8d0707b","Type":"ContainerDied","Data":"df6a0fe0f7edb1321724eee306d6b0dd08e9bca68cf40156dd2b677edae6d640"} Jan 26 11:45:03 crc kubenswrapper[4867]: I0126 11:45:03.464984 4867 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="df6a0fe0f7edb1321724eee306d6b0dd08e9bca68cf40156dd2b677edae6d640" Jan 26 11:45:03 crc kubenswrapper[4867]: I0126 11:45:03.464999 4867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29490465-4jd96" Jan 26 11:45:12 crc kubenswrapper[4867]: I0126 11:45:12.564530 4867 scope.go:117] "RemoveContainer" containerID="7857dfd24884ee7b3544dfd9117125dc690c467738f6ed4ca3bec8ebae8c755a" Jan 26 11:45:12 crc kubenswrapper[4867]: E0126 11:45:12.565182 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g6cth_openshift-machine-config-operator(115cad9f-057f-4e63-b408-8fa7a358a191)\"" pod="openshift-machine-config-operator/machine-config-daemon-g6cth" podUID="115cad9f-057f-4e63-b408-8fa7a358a191" Jan 26 11:45:23 crc kubenswrapper[4867]: I0126 11:45:23.564674 4867 scope.go:117] "RemoveContainer" containerID="7857dfd24884ee7b3544dfd9117125dc690c467738f6ed4ca3bec8ebae8c755a" Jan 26 11:45:23 crc kubenswrapper[4867]: E0126 11:45:23.565465 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g6cth_openshift-machine-config-operator(115cad9f-057f-4e63-b408-8fa7a358a191)\"" pod="openshift-machine-config-operator/machine-config-daemon-g6cth" podUID="115cad9f-057f-4e63-b408-8fa7a358a191" Jan 26 11:45:33 crc kubenswrapper[4867]: I0126 11:45:33.043034 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-create-bs7ks"] Jan 26 11:45:33 crc kubenswrapper[4867]: I0126 11:45:33.057881 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-14ec-account-create-update-2wrkd"] Jan 26 11:45:33 crc kubenswrapper[4867]: I0126 11:45:33.071348 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-14ec-account-create-update-2wrkd"] Jan 26 11:45:33 crc kubenswrapper[4867]: 
I0126 11:45:33.081098 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-create-bs7ks"] Jan 26 11:45:34 crc kubenswrapper[4867]: I0126 11:45:34.035734 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-create-2xl9x"] Jan 26 11:45:34 crc kubenswrapper[4867]: I0126 11:45:34.046352 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-d8fb-account-create-update-fpsgc"] Jan 26 11:45:34 crc kubenswrapper[4867]: I0126 11:45:34.056692 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-51a0-account-create-update-lcjf9"] Jan 26 11:45:34 crc kubenswrapper[4867]: I0126 11:45:34.064708 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-create-2xl9x"] Jan 26 11:45:34 crc kubenswrapper[4867]: I0126 11:45:34.073576 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-d8fb-account-create-update-fpsgc"] Jan 26 11:45:34 crc kubenswrapper[4867]: I0126 11:45:34.084861 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-51a0-account-create-update-lcjf9"] Jan 26 11:45:34 crc kubenswrapper[4867]: I0126 11:45:34.579202 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4ad2b2c0-428a-4a2b-943d-91966c6f7403" path="/var/lib/kubelet/pods/4ad2b2c0-428a-4a2b-943d-91966c6f7403/volumes" Jan 26 11:45:34 crc kubenswrapper[4867]: I0126 11:45:34.580082 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ee2993e-e4e2-4fda-8506-4af3ea92108f" path="/var/lib/kubelet/pods/6ee2993e-e4e2-4fda-8506-4af3ea92108f/volumes" Jan 26 11:45:34 crc kubenswrapper[4867]: I0126 11:45:34.580704 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ec4f0ae5-3541-4224-8693-6264be64156e" path="/var/lib/kubelet/pods/ec4f0ae5-3541-4224-8693-6264be64156e/volumes" Jan 26 11:45:34 crc kubenswrapper[4867]: I0126 11:45:34.581370 4867 kubelet_volumes.go:163] "Cleaned 
up orphaned pod volumes dir" podUID="ede5a15e-c616-482a-8f65-dcc40b72bac9" path="/var/lib/kubelet/pods/ede5a15e-c616-482a-8f65-dcc40b72bac9/volumes" Jan 26 11:45:34 crc kubenswrapper[4867]: I0126 11:45:34.582429 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f0055f8a-079d-477c-9dab-f6e66fc7e0a0" path="/var/lib/kubelet/pods/f0055f8a-079d-477c-9dab-f6e66fc7e0a0/volumes" Jan 26 11:45:35 crc kubenswrapper[4867]: I0126 11:45:35.029263 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-create-z9jck"] Jan 26 11:45:35 crc kubenswrapper[4867]: I0126 11:45:35.038116 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-create-z9jck"] Jan 26 11:45:35 crc kubenswrapper[4867]: I0126 11:45:35.953864 4867 scope.go:117] "RemoveContainer" containerID="c588748677466f817d168dd03f898c35a8f725264c35c933aeaf9f4a99c81581" Jan 26 11:45:36 crc kubenswrapper[4867]: I0126 11:45:36.011179 4867 scope.go:117] "RemoveContainer" containerID="c1ce25549a1890d533f5f84e2e14e250cf17bbb49b24069efc482c99cf8a8848" Jan 26 11:45:36 crc kubenswrapper[4867]: I0126 11:45:36.065435 4867 scope.go:117] "RemoveContainer" containerID="3ba80acc3944b4e27244b17fb59ab24e24c863594ad63bfbedf299c9d6f3a96c" Jan 26 11:45:36 crc kubenswrapper[4867]: I0126 11:45:36.097859 4867 scope.go:117] "RemoveContainer" containerID="1e81ff7533ca607742db210aec7eb45b8e33e5cab8356d9d345ebd5169122d0d" Jan 26 11:45:36 crc kubenswrapper[4867]: I0126 11:45:36.144415 4867 scope.go:117] "RemoveContainer" containerID="3548d75bb02b2a13831b8d71faf98a958d9495ba26eb852b0f7a6b17f6e7b2b8" Jan 26 11:45:36 crc kubenswrapper[4867]: I0126 11:45:36.187591 4867 scope.go:117] "RemoveContainer" containerID="b19d29259d1895082d0636b1e7ad3f5bdd994ce4b61afc26719e6512963cf847" Jan 26 11:45:36 crc kubenswrapper[4867]: I0126 11:45:36.222538 4867 scope.go:117] "RemoveContainer" containerID="5950f4325f46b7dfe43a0cf86c40c65704caaafa5dd9b340333c80fa44b9bbc1" Jan 26 11:45:36 crc 
kubenswrapper[4867]: I0126 11:45:36.564201 4867 scope.go:117] "RemoveContainer" containerID="7857dfd24884ee7b3544dfd9117125dc690c467738f6ed4ca3bec8ebae8c755a" Jan 26 11:45:36 crc kubenswrapper[4867]: E0126 11:45:36.564518 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g6cth_openshift-machine-config-operator(115cad9f-057f-4e63-b408-8fa7a358a191)\"" pod="openshift-machine-config-operator/machine-config-daemon-g6cth" podUID="115cad9f-057f-4e63-b408-8fa7a358a191" Jan 26 11:45:36 crc kubenswrapper[4867]: I0126 11:45:36.574936 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c90c2ed7-4485-455b-bba2-42014178d9be" path="/var/lib/kubelet/pods/c90c2ed7-4485-455b-bba2-42014178d9be/volumes" Jan 26 11:45:51 crc kubenswrapper[4867]: I0126 11:45:51.563471 4867 scope.go:117] "RemoveContainer" containerID="7857dfd24884ee7b3544dfd9117125dc690c467738f6ed4ca3bec8ebae8c755a" Jan 26 11:45:51 crc kubenswrapper[4867]: E0126 11:45:51.564235 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g6cth_openshift-machine-config-operator(115cad9f-057f-4e63-b408-8fa7a358a191)\"" pod="openshift-machine-config-operator/machine-config-daemon-g6cth" podUID="115cad9f-057f-4e63-b408-8fa7a358a191" Jan 26 11:45:56 crc kubenswrapper[4867]: I0126 11:45:56.049659 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-create-252qd"] Jan 26 11:45:56 crc kubenswrapper[4867]: I0126 11:45:56.066281 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-create-252qd"] Jan 26 11:45:56 crc kubenswrapper[4867]: I0126 11:45:56.078091 4867 kubelet.go:2437] "SyncLoop DELETE" 
source="api" pods=["openstack/barbican-0f8e-account-create-update-plnlx"] Jan 26 11:45:56 crc kubenswrapper[4867]: I0126 11:45:56.090008 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-0f8e-account-create-update-plnlx"] Jan 26 11:45:56 crc kubenswrapper[4867]: I0126 11:45:56.579378 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0504e2f3-0d4b-46cc-847b-497423d48fcc" path="/var/lib/kubelet/pods/0504e2f3-0d4b-46cc-847b-497423d48fcc/volumes" Jan 26 11:45:56 crc kubenswrapper[4867]: I0126 11:45:56.580649 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dd5d8576-e5a4-4afe-b859-3f199ca48359" path="/var/lib/kubelet/pods/dd5d8576-e5a4-4afe-b859-3f199ca48359/volumes" Jan 26 11:45:59 crc kubenswrapper[4867]: I0126 11:45:59.029048 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-c2b0-account-create-update-ljmf8"] Jan 26 11:45:59 crc kubenswrapper[4867]: I0126 11:45:59.038852 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-ba1b-account-create-update-w22mk"] Jan 26 11:45:59 crc kubenswrapper[4867]: I0126 11:45:59.047954 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-create-5d6gw"] Jan 26 11:45:59 crc kubenswrapper[4867]: I0126 11:45:59.055969 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-ba1b-account-create-update-w22mk"] Jan 26 11:45:59 crc kubenswrapper[4867]: I0126 11:45:59.063492 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-create-5d6gw"] Jan 26 11:45:59 crc kubenswrapper[4867]: I0126 11:45:59.071321 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-c2b0-account-create-update-ljmf8"] Jan 26 11:45:59 crc kubenswrapper[4867]: I0126 11:45:59.078829 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-create-zh4k7"] Jan 26 11:45:59 crc kubenswrapper[4867]: I0126 11:45:59.090504 4867 
kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-create-zh4k7"] Jan 26 11:46:00 crc kubenswrapper[4867]: I0126 11:46:00.572766 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="255d1723-a5b7-4030-b2a0-4b28ee758717" path="/var/lib/kubelet/pods/255d1723-a5b7-4030-b2a0-4b28ee758717/volumes" Jan 26 11:46:00 crc kubenswrapper[4867]: I0126 11:46:00.574558 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3e1ea464-c670-4943-8788-7718c1ebffa2" path="/var/lib/kubelet/pods/3e1ea464-c670-4943-8788-7718c1ebffa2/volumes" Jan 26 11:46:00 crc kubenswrapper[4867]: I0126 11:46:00.575133 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5ad3fda5-db71-4cad-b88c-ca0665f64b9d" path="/var/lib/kubelet/pods/5ad3fda5-db71-4cad-b88c-ca0665f64b9d/volumes" Jan 26 11:46:00 crc kubenswrapper[4867]: I0126 11:46:00.575731 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e2d7d8be-7aac-4f1c-95a7-25021c4d24ae" path="/var/lib/kubelet/pods/e2d7d8be-7aac-4f1c-95a7-25021c4d24ae/volumes" Jan 26 11:46:06 crc kubenswrapper[4867]: I0126 11:46:06.569105 4867 scope.go:117] "RemoveContainer" containerID="7857dfd24884ee7b3544dfd9117125dc690c467738f6ed4ca3bec8ebae8c755a" Jan 26 11:46:06 crc kubenswrapper[4867]: E0126 11:46:06.569870 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g6cth_openshift-machine-config-operator(115cad9f-057f-4e63-b408-8fa7a358a191)\"" pod="openshift-machine-config-operator/machine-config-daemon-g6cth" podUID="115cad9f-057f-4e63-b408-8fa7a358a191" Jan 26 11:46:07 crc kubenswrapper[4867]: I0126 11:46:07.027937 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-sync-z9wf6"] Jan 26 11:46:07 crc kubenswrapper[4867]: I0126 11:46:07.041579 4867 
kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-sync-z9wf6"] Jan 26 11:46:08 crc kubenswrapper[4867]: I0126 11:46:08.574209 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="054c5880-216a-4d98-bbc3-bc428d09bfe8" path="/var/lib/kubelet/pods/054c5880-216a-4d98-bbc3-bc428d09bfe8/volumes" Jan 26 11:46:14 crc kubenswrapper[4867]: I0126 11:46:14.043411 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-l4jjx"] Jan 26 11:46:14 crc kubenswrapper[4867]: I0126 11:46:14.056405 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ironic-de23-account-create-update-tsmbv"] Jan 26 11:46:14 crc kubenswrapper[4867]: I0126 11:46:14.064047 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ironic-db-create-n8k7h"] Jan 26 11:46:14 crc kubenswrapper[4867]: I0126 11:46:14.075284 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-l4jjx"] Jan 26 11:46:14 crc kubenswrapper[4867]: I0126 11:46:14.109490 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ironic-db-create-n8k7h"] Jan 26 11:46:14 crc kubenswrapper[4867]: I0126 11:46:14.112274 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ironic-de23-account-create-update-tsmbv"] Jan 26 11:46:14 crc kubenswrapper[4867]: I0126 11:46:14.584034 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="829cd764-d506-45fb-a1d6-d45504d0b20c" path="/var/lib/kubelet/pods/829cd764-d506-45fb-a1d6-d45504d0b20c/volumes" Jan 26 11:46:14 crc kubenswrapper[4867]: I0126 11:46:14.585578 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="933e31ea-ff1a-4883-a82a-92893ca7d7b0" path="/var/lib/kubelet/pods/933e31ea-ff1a-4883-a82a-92893ca7d7b0/volumes" Jan 26 11:46:14 crc kubenswrapper[4867]: I0126 11:46:14.587042 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="e68c9cf2-da73-41d8-bc9a-5fd8df2c1ceb" path="/var/lib/kubelet/pods/e68c9cf2-da73-41d8-bc9a-5fd8df2c1ceb/volumes" Jan 26 11:46:18 crc kubenswrapper[4867]: I0126 11:46:18.564295 4867 scope.go:117] "RemoveContainer" containerID="7857dfd24884ee7b3544dfd9117125dc690c467738f6ed4ca3bec8ebae8c755a" Jan 26 11:46:18 crc kubenswrapper[4867]: E0126 11:46:18.565960 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g6cth_openshift-machine-config-operator(115cad9f-057f-4e63-b408-8fa7a358a191)\"" pod="openshift-machine-config-operator/machine-config-daemon-g6cth" podUID="115cad9f-057f-4e63-b408-8fa7a358a191" Jan 26 11:46:31 crc kubenswrapper[4867]: I0126 11:46:31.564978 4867 scope.go:117] "RemoveContainer" containerID="7857dfd24884ee7b3544dfd9117125dc690c467738f6ed4ca3bec8ebae8c755a" Jan 26 11:46:31 crc kubenswrapper[4867]: E0126 11:46:31.565848 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g6cth_openshift-machine-config-operator(115cad9f-057f-4e63-b408-8fa7a358a191)\"" pod="openshift-machine-config-operator/machine-config-daemon-g6cth" podUID="115cad9f-057f-4e63-b408-8fa7a358a191" Jan 26 11:46:36 crc kubenswrapper[4867]: I0126 11:46:36.380030 4867 scope.go:117] "RemoveContainer" containerID="c896b039eb30d5f99f9be2d1a482f83cac315acf2ac8a2ce7d39bd928229b476" Jan 26 11:46:36 crc kubenswrapper[4867]: I0126 11:46:36.412375 4867 scope.go:117] "RemoveContainer" containerID="25d1627cd2a644c9c81616c499be120f39117bea26b0a7b01c0cad6271dbd577" Jan 26 11:46:36 crc kubenswrapper[4867]: I0126 11:46:36.458300 4867 scope.go:117] "RemoveContainer" 
containerID="fe0c26349fc460e9b1940b8eaf3b6046d26a2164ca191977a1cce6e1f42a7419" Jan 26 11:46:36 crc kubenswrapper[4867]: I0126 11:46:36.498617 4867 scope.go:117] "RemoveContainer" containerID="5bb175e7fb2a6d045398e1893e32decd8e178b7615ca1824a71b342b08fbd2a2" Jan 26 11:46:36 crc kubenswrapper[4867]: I0126 11:46:36.539346 4867 scope.go:117] "RemoveContainer" containerID="ed7ca6535de4bc6499acd3d01a20485ae06c236f4390eadaedc3a070ccae33a0" Jan 26 11:46:36 crc kubenswrapper[4867]: I0126 11:46:36.606448 4867 scope.go:117] "RemoveContainer" containerID="0fd2f7fc2036f615e2928d0ebb3b35d09523786b8b0ae31c25c5b1dfb29b3981" Jan 26 11:46:36 crc kubenswrapper[4867]: I0126 11:46:36.650297 4867 scope.go:117] "RemoveContainer" containerID="0252fce5ead19c0f5e16679f900c227b753cc720eab192b88836c3211860171c" Jan 26 11:46:36 crc kubenswrapper[4867]: I0126 11:46:36.679370 4867 scope.go:117] "RemoveContainer" containerID="9b827a2c6ea65163706cdf8f6e73946db57fcbc1f06dc7a5e0eba5084e7fd1ae" Jan 26 11:46:36 crc kubenswrapper[4867]: I0126 11:46:36.712908 4867 scope.go:117] "RemoveContainer" containerID="3fa995ef601388e8736a9d4aa20808b116cc7d1e49d81f0ea5e293c686cde4bc" Jan 26 11:46:36 crc kubenswrapper[4867]: I0126 11:46:36.733006 4867 scope.go:117] "RemoveContainer" containerID="3c31506024059b21eb8d0fcc86edcf4fb85ca1086da41d7b7e27691b90e98d34" Jan 26 11:46:36 crc kubenswrapper[4867]: I0126 11:46:36.752144 4867 scope.go:117] "RemoveContainer" containerID="d125445cf27017f0fa2ac1f123eba9569454b4081f0bfeffaadf5b7ac071388a" Jan 26 11:46:36 crc kubenswrapper[4867]: I0126 11:46:36.778156 4867 scope.go:117] "RemoveContainer" containerID="8f5fff400ff1311d79aa26d4ad427999cda55c9215a75c4d5660670cdf09962b" Jan 26 11:46:36 crc kubenswrapper[4867]: I0126 11:46:36.802711 4867 scope.go:117] "RemoveContainer" containerID="fdb2125bd3a23667e6cc7ce47ecc814f688ace17ef47a446b046e25908500e03" Jan 26 11:46:36 crc kubenswrapper[4867]: I0126 11:46:36.825421 4867 scope.go:117] "RemoveContainer" 
containerID="a674678dfd1454a894bf5548d8d62962409fb2a59015d097a3d44c315fda04ef" Jan 26 11:46:41 crc kubenswrapper[4867]: I0126 11:46:41.047353 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-zzdf7"] Jan 26 11:46:41 crc kubenswrapper[4867]: I0126 11:46:41.056962 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-sync-4whss"] Jan 26 11:46:41 crc kubenswrapper[4867]: I0126 11:46:41.067710 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-sync-mjdws"] Jan 26 11:46:41 crc kubenswrapper[4867]: I0126 11:46:41.078127 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-zzdf7"] Jan 26 11:46:41 crc kubenswrapper[4867]: I0126 11:46:41.087342 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-sync-4whss"] Jan 26 11:46:41 crc kubenswrapper[4867]: I0126 11:46:41.097427 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-sync-mjdws"] Jan 26 11:46:42 crc kubenswrapper[4867]: I0126 11:46:42.577766 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5c210d27-ca0b-4d51-b462-bc5adf4dbe43" path="/var/lib/kubelet/pods/5c210d27-ca0b-4d51-b462-bc5adf4dbe43/volumes" Jan 26 11:46:42 crc kubenswrapper[4867]: I0126 11:46:42.578360 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="95310b01-10f6-410f-9153-d2cd939420ec" path="/var/lib/kubelet/pods/95310b01-10f6-410f-9153-d2cd939420ec/volumes" Jan 26 11:46:42 crc kubenswrapper[4867]: I0126 11:46:42.578904 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fa78acbb-8b93-4977-8ccf-fc79314b6f2e" path="/var/lib/kubelet/pods/fa78acbb-8b93-4977-8ccf-fc79314b6f2e/volumes" Jan 26 11:46:46 crc kubenswrapper[4867]: I0126 11:46:46.563679 4867 scope.go:117] "RemoveContainer" containerID="7857dfd24884ee7b3544dfd9117125dc690c467738f6ed4ca3bec8ebae8c755a" Jan 26 11:46:46 crc kubenswrapper[4867]: 
E0126 11:46:46.564483 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g6cth_openshift-machine-config-operator(115cad9f-057f-4e63-b408-8fa7a358a191)\"" pod="openshift-machine-config-operator/machine-config-daemon-g6cth" podUID="115cad9f-057f-4e63-b408-8fa7a358a191" Jan 26 11:46:59 crc kubenswrapper[4867]: I0126 11:46:59.563929 4867 scope.go:117] "RemoveContainer" containerID="7857dfd24884ee7b3544dfd9117125dc690c467738f6ed4ca3bec8ebae8c755a" Jan 26 11:46:59 crc kubenswrapper[4867]: E0126 11:46:59.565134 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g6cth_openshift-machine-config-operator(115cad9f-057f-4e63-b408-8fa7a358a191)\"" pod="openshift-machine-config-operator/machine-config-daemon-g6cth" podUID="115cad9f-057f-4e63-b408-8fa7a358a191" Jan 26 11:47:04 crc kubenswrapper[4867]: I0126 11:47:04.045281 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-sync-v7xht"] Jan 26 11:47:04 crc kubenswrapper[4867]: I0126 11:47:04.054874 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-sync-v7xht"] Jan 26 11:47:04 crc kubenswrapper[4867]: I0126 11:47:04.583806 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0ee786d6-3c88-4374-a028-3a3c83b30fec" path="/var/lib/kubelet/pods/0ee786d6-3c88-4374-a028-3a3c83b30fec/volumes" Jan 26 11:47:06 crc kubenswrapper[4867]: I0126 11:47:06.034348 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-sync-m72k2"] Jan 26 11:47:06 crc kubenswrapper[4867]: I0126 11:47:06.045886 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["openstack/neutron-db-sync-m72k2"] Jan 26 11:47:06 crc kubenswrapper[4867]: I0126 11:47:06.574649 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="75e847de-1c0c-4ac3-b7ff-c41bfa7a6534" path="/var/lib/kubelet/pods/75e847de-1c0c-4ac3-b7ff-c41bfa7a6534/volumes" Jan 26 11:47:10 crc kubenswrapper[4867]: I0126 11:47:10.026461 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-sync-2kgmw"] Jan 26 11:47:10 crc kubenswrapper[4867]: I0126 11:47:10.033663 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-sync-2kgmw"] Jan 26 11:47:10 crc kubenswrapper[4867]: I0126 11:47:10.578305 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d28fe2ce-f40e-4f37-9d27-57d14376fc5d" path="/var/lib/kubelet/pods/d28fe2ce-f40e-4f37-9d27-57d14376fc5d/volumes" Jan 26 11:47:14 crc kubenswrapper[4867]: I0126 11:47:14.564420 4867 scope.go:117] "RemoveContainer" containerID="7857dfd24884ee7b3544dfd9117125dc690c467738f6ed4ca3bec8ebae8c755a" Jan 26 11:47:14 crc kubenswrapper[4867]: E0126 11:47:14.564954 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g6cth_openshift-machine-config-operator(115cad9f-057f-4e63-b408-8fa7a358a191)\"" pod="openshift-machine-config-operator/machine-config-daemon-g6cth" podUID="115cad9f-057f-4e63-b408-8fa7a358a191" Jan 26 11:47:24 crc kubenswrapper[4867]: I0126 11:47:24.056087 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ironic-inspector-db-create-blwcj"] Jan 26 11:47:24 crc kubenswrapper[4867]: I0126 11:47:24.065070 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ironic-inspector-db-create-blwcj"] Jan 26 11:47:24 crc kubenswrapper[4867]: I0126 11:47:24.574335 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="7baafbd6-fc39-426c-8869-460ad4ff235f" path="/var/lib/kubelet/pods/7baafbd6-fc39-426c-8869-460ad4ff235f/volumes" Jan 26 11:47:26 crc kubenswrapper[4867]: I0126 11:47:26.564991 4867 scope.go:117] "RemoveContainer" containerID="7857dfd24884ee7b3544dfd9117125dc690c467738f6ed4ca3bec8ebae8c755a" Jan 26 11:47:26 crc kubenswrapper[4867]: E0126 11:47:26.567570 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g6cth_openshift-machine-config-operator(115cad9f-057f-4e63-b408-8fa7a358a191)\"" pod="openshift-machine-config-operator/machine-config-daemon-g6cth" podUID="115cad9f-057f-4e63-b408-8fa7a358a191" Jan 26 11:47:29 crc kubenswrapper[4867]: I0126 11:47:29.024095 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ironic-inspector-6c39-account-create-update-thslq"] Jan 26 11:47:29 crc kubenswrapper[4867]: I0126 11:47:29.032210 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ironic-inspector-6c39-account-create-update-thslq"] Jan 26 11:47:30 crc kubenswrapper[4867]: I0126 11:47:30.601768 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="29a2ed9f-c519-444f-922f-4cebf5b3893e" path="/var/lib/kubelet/pods/29a2ed9f-c519-444f-922f-4cebf5b3893e/volumes" Jan 26 11:47:37 crc kubenswrapper[4867]: I0126 11:47:37.105905 4867 scope.go:117] "RemoveContainer" containerID="d2878d17d4d05221e224e3e6a7d178637778cbcdc20b459309d7ee9af2ef93a2" Jan 26 11:47:37 crc kubenswrapper[4867]: I0126 11:47:37.152624 4867 scope.go:117] "RemoveContainer" containerID="09ce249d9ce50194ab4edc8047525c8a71645522228c8d402991237724e02ed1" Jan 26 11:47:37 crc kubenswrapper[4867]: I0126 11:47:37.226749 4867 scope.go:117] "RemoveContainer" containerID="f84b09d65e7130b02b289d6cdfe83c3c3ffb0ac581c432c663743106dbc1a290" Jan 26 11:47:37 crc kubenswrapper[4867]: I0126 
11:47:37.265460 4867 scope.go:117] "RemoveContainer" containerID="1651081e3e71989b143ba29eb5321a628e4ecf50a4caa7578e4fa1cc3dd87ad3" Jan 26 11:47:37 crc kubenswrapper[4867]: I0126 11:47:37.300092 4867 scope.go:117] "RemoveContainer" containerID="3252b7222a6b1e64e2f23fc987ca7f51f1a67b89435f0ce9db433d0b986e0667" Jan 26 11:47:37 crc kubenswrapper[4867]: I0126 11:47:37.329754 4867 scope.go:117] "RemoveContainer" containerID="56eb17817f1a9ae4d04be82a79ad6e5d9ee52e4a1df45b9925caa83b5e230843" Jan 26 11:47:37 crc kubenswrapper[4867]: I0126 11:47:37.393992 4867 scope.go:117] "RemoveContainer" containerID="6f0784e823858add447f521977cff127528ceb154c6b2e3482025571f19e0e8c" Jan 26 11:47:37 crc kubenswrapper[4867]: I0126 11:47:37.421028 4867 scope.go:117] "RemoveContainer" containerID="96193fd60d2ddebc9095a2f0963bc61584546f0b1c70d3dbd3877b31fa00842c" Jan 26 11:47:40 crc kubenswrapper[4867]: I0126 11:47:40.574969 4867 scope.go:117] "RemoveContainer" containerID="7857dfd24884ee7b3544dfd9117125dc690c467738f6ed4ca3bec8ebae8c755a" Jan 26 11:47:40 crc kubenswrapper[4867]: E0126 11:47:40.576871 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g6cth_openshift-machine-config-operator(115cad9f-057f-4e63-b408-8fa7a358a191)\"" pod="openshift-machine-config-operator/machine-config-daemon-g6cth" podUID="115cad9f-057f-4e63-b408-8fa7a358a191" Jan 26 11:47:45 crc kubenswrapper[4867]: I0126 11:47:45.050331 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-db-create-zvxt9"] Jan 26 11:47:45 crc kubenswrapper[4867]: I0126 11:47:45.058755 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-db-create-rhnn5"] Jan 26 11:47:45 crc kubenswrapper[4867]: I0126 11:47:45.067072 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openstack/nova-cell1-b41e-account-create-update-jbll2"] Jan 26 11:47:45 crc kubenswrapper[4867]: I0126 11:47:45.077246 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-db-create-jxjlp"] Jan 26 11:47:45 crc kubenswrapper[4867]: I0126 11:47:45.084589 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-b41e-account-create-update-jbll2"] Jan 26 11:47:45 crc kubenswrapper[4867]: I0126 11:47:45.090476 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-db-create-zvxt9"] Jan 26 11:47:45 crc kubenswrapper[4867]: I0126 11:47:45.098149 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-db-create-rhnn5"] Jan 26 11:47:45 crc kubenswrapper[4867]: I0126 11:47:45.104885 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-d424-account-create-update-ldcwb"] Jan 26 11:47:45 crc kubenswrapper[4867]: I0126 11:47:45.111035 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-db-create-jxjlp"] Jan 26 11:47:45 crc kubenswrapper[4867]: I0126 11:47:45.117394 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-d424-account-create-update-ldcwb"] Jan 26 11:47:45 crc kubenswrapper[4867]: I0126 11:47:45.124623 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-bab4-account-create-update-zsjwg"] Jan 26 11:47:45 crc kubenswrapper[4867]: I0126 11:47:45.129899 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-bab4-account-create-update-zsjwg"] Jan 26 11:47:46 crc kubenswrapper[4867]: I0126 11:47:46.584025 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0bca6d10-3712-4078-885f-ff14590bbbe8" path="/var/lib/kubelet/pods/0bca6d10-3712-4078-885f-ff14590bbbe8/volumes" Jan 26 11:47:46 crc kubenswrapper[4867]: I0126 11:47:46.585664 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="10fb1d1b-a85c-4eb8-a5ae-04d49b5ef7af" path="/var/lib/kubelet/pods/10fb1d1b-a85c-4eb8-a5ae-04d49b5ef7af/volumes" Jan 26 11:47:46 crc kubenswrapper[4867]: I0126 11:47:46.586989 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7f4d3e01-1c2e-45ae-952f-c05b658b2aa4" path="/var/lib/kubelet/pods/7f4d3e01-1c2e-45ae-952f-c05b658b2aa4/volumes" Jan 26 11:47:46 crc kubenswrapper[4867]: I0126 11:47:46.588447 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="91e79247-9d54-4108-a975-17c7603c3f96" path="/var/lib/kubelet/pods/91e79247-9d54-4108-a975-17c7603c3f96/volumes" Jan 26 11:47:46 crc kubenswrapper[4867]: I0126 11:47:46.590749 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="db3da9ad-c4e2-4dc6-aec5-fefa3d9efa8a" path="/var/lib/kubelet/pods/db3da9ad-c4e2-4dc6-aec5-fefa3d9efa8a/volumes" Jan 26 11:47:46 crc kubenswrapper[4867]: I0126 11:47:46.591864 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f576d352-22e9-427b-a2d1-81bff0a85eb1" path="/var/lib/kubelet/pods/f576d352-22e9-427b-a2d1-81bff0a85eb1/volumes" Jan 26 11:47:51 crc kubenswrapper[4867]: I0126 11:47:51.564622 4867 scope.go:117] "RemoveContainer" containerID="7857dfd24884ee7b3544dfd9117125dc690c467738f6ed4ca3bec8ebae8c755a" Jan 26 11:47:51 crc kubenswrapper[4867]: E0126 11:47:51.565611 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g6cth_openshift-machine-config-operator(115cad9f-057f-4e63-b408-8fa7a358a191)\"" pod="openshift-machine-config-operator/machine-config-daemon-g6cth" podUID="115cad9f-057f-4e63-b408-8fa7a358a191" Jan 26 11:48:02 crc kubenswrapper[4867]: I0126 11:48:02.583370 4867 scope.go:117] "RemoveContainer" containerID="7857dfd24884ee7b3544dfd9117125dc690c467738f6ed4ca3bec8ebae8c755a" Jan 26 11:48:02 crc 
kubenswrapper[4867]: E0126 11:48:02.585432 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g6cth_openshift-machine-config-operator(115cad9f-057f-4e63-b408-8fa7a358a191)\"" pod="openshift-machine-config-operator/machine-config-daemon-g6cth" podUID="115cad9f-057f-4e63-b408-8fa7a358a191" Jan 26 11:48:14 crc kubenswrapper[4867]: I0126 11:48:14.564402 4867 scope.go:117] "RemoveContainer" containerID="7857dfd24884ee7b3544dfd9117125dc690c467738f6ed4ca3bec8ebae8c755a" Jan 26 11:48:14 crc kubenswrapper[4867]: E0126 11:48:14.565247 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g6cth_openshift-machine-config-operator(115cad9f-057f-4e63-b408-8fa7a358a191)\"" pod="openshift-machine-config-operator/machine-config-daemon-g6cth" podUID="115cad9f-057f-4e63-b408-8fa7a358a191" Jan 26 11:48:27 crc kubenswrapper[4867]: I0126 11:48:27.564994 4867 scope.go:117] "RemoveContainer" containerID="7857dfd24884ee7b3544dfd9117125dc690c467738f6ed4ca3bec8ebae8c755a" Jan 26 11:48:27 crc kubenswrapper[4867]: E0126 11:48:27.565827 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g6cth_openshift-machine-config-operator(115cad9f-057f-4e63-b408-8fa7a358a191)\"" pod="openshift-machine-config-operator/machine-config-daemon-g6cth" podUID="115cad9f-057f-4e63-b408-8fa7a358a191" Jan 26 11:48:37 crc kubenswrapper[4867]: I0126 11:48:37.599144 4867 scope.go:117] "RemoveContainer" containerID="edefe5fa9f77f38dad2a162f47c38dfa150661aec613e44f236eaed1f74fe7b0" Jan 
26 11:48:37 crc kubenswrapper[4867]: I0126 11:48:37.625116 4867 scope.go:117] "RemoveContainer" containerID="224001dc6b53cc4709a74659f966d8e760d90e26822b7291b488ce839e843158" Jan 26 11:48:37 crc kubenswrapper[4867]: I0126 11:48:37.669388 4867 scope.go:117] "RemoveContainer" containerID="4636a0d19e0684652c3d8874bfa02095a5fdd63116c50b3fe8653a7be858fb7c" Jan 26 11:48:37 crc kubenswrapper[4867]: I0126 11:48:37.714108 4867 scope.go:117] "RemoveContainer" containerID="c4346b76358d61eef41e7eb32623539e21c4bd95540c30c0a5fba2dd48a383d1" Jan 26 11:48:37 crc kubenswrapper[4867]: I0126 11:48:37.759283 4867 scope.go:117] "RemoveContainer" containerID="d84850c6e99bb1cb07f9e9a3294fa187af413dbbbab5b9a89c1d48b0eee20fdc" Jan 26 11:48:37 crc kubenswrapper[4867]: I0126 11:48:37.807682 4867 scope.go:117] "RemoveContainer" containerID="748bfbd4b8c19b4b96f9c28ec3129fea256335879fd3e35db00865f8c6cc4910" Jan 26 11:48:41 crc kubenswrapper[4867]: I0126 11:48:41.563972 4867 scope.go:117] "RemoveContainer" containerID="7857dfd24884ee7b3544dfd9117125dc690c467738f6ed4ca3bec8ebae8c755a" Jan 26 11:48:43 crc kubenswrapper[4867]: I0126 11:48:43.539489 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-g6cth" event={"ID":"115cad9f-057f-4e63-b408-8fa7a358a191","Type":"ContainerStarted","Data":"da5f1f9c98d3acd70884c452982ea6128d73027a10995d07ccd0b36f768b7132"} Jan 26 11:48:55 crc kubenswrapper[4867]: I0126 11:48:55.037903 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-lh5xw"] Jan 26 11:48:55 crc kubenswrapper[4867]: I0126 11:48:55.045246 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-lh5xw"] Jan 26 11:48:56 crc kubenswrapper[4867]: I0126 11:48:56.579966 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="39653949-816a-4237-91ab-e0a3cbdc1ff9" path="/var/lib/kubelet/pods/39653949-816a-4237-91ab-e0a3cbdc1ff9/volumes" 
Jan 26 11:49:18 crc kubenswrapper[4867]: I0126 11:49:18.033380 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-cell-mapping-26gmr"] Jan 26 11:49:18 crc kubenswrapper[4867]: I0126 11:49:18.041787 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-cell-mapping-26gmr"] Jan 26 11:49:18 crc kubenswrapper[4867]: I0126 11:49:18.573717 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cc4ef2e8-1b6a-42a2-a887-7a14de7fa7a2" path="/var/lib/kubelet/pods/cc4ef2e8-1b6a-42a2-a887-7a14de7fa7a2/volumes" Jan 26 11:49:21 crc kubenswrapper[4867]: I0126 11:49:21.030497 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-226m5"] Jan 26 11:49:21 crc kubenswrapper[4867]: I0126 11:49:21.040866 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-226m5"] Jan 26 11:49:22 crc kubenswrapper[4867]: I0126 11:49:22.574146 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0da74a00-2497-4e45-9419-032e9b97c401" path="/var/lib/kubelet/pods/0da74a00-2497-4e45-9419-032e9b97c401/volumes" Jan 26 11:49:34 crc kubenswrapper[4867]: I0126 11:49:34.518175 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-dcwln/must-gather-wvdxj"] Jan 26 11:49:34 crc kubenswrapper[4867]: E0126 11:49:34.519151 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1387e72e-bdcc-4deb-94fe-8cd4f8d0707b" containerName="collect-profiles" Jan 26 11:49:34 crc kubenswrapper[4867]: I0126 11:49:34.519162 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="1387e72e-bdcc-4deb-94fe-8cd4f8d0707b" containerName="collect-profiles" Jan 26 11:49:34 crc kubenswrapper[4867]: I0126 11:49:34.519408 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="1387e72e-bdcc-4deb-94fe-8cd4f8d0707b" containerName="collect-profiles" Jan 26 11:49:34 crc kubenswrapper[4867]: I0126 
11:49:34.525434 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-dcwln/must-gather-wvdxj" Jan 26 11:49:34 crc kubenswrapper[4867]: I0126 11:49:34.535024 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-dcwln"/"openshift-service-ca.crt" Jan 26 11:49:34 crc kubenswrapper[4867]: I0126 11:49:34.535083 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-dcwln"/"kube-root-ca.crt" Jan 26 11:49:34 crc kubenswrapper[4867]: I0126 11:49:34.548955 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-dcwln/must-gather-wvdxj"] Jan 26 11:49:34 crc kubenswrapper[4867]: I0126 11:49:34.689424 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/b957382d-a2ad-4564-89ab-9c009ca57825-must-gather-output\") pod \"must-gather-wvdxj\" (UID: \"b957382d-a2ad-4564-89ab-9c009ca57825\") " pod="openshift-must-gather-dcwln/must-gather-wvdxj" Jan 26 11:49:34 crc kubenswrapper[4867]: I0126 11:49:34.689582 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rqbsv\" (UniqueName: \"kubernetes.io/projected/b957382d-a2ad-4564-89ab-9c009ca57825-kube-api-access-rqbsv\") pod \"must-gather-wvdxj\" (UID: \"b957382d-a2ad-4564-89ab-9c009ca57825\") " pod="openshift-must-gather-dcwln/must-gather-wvdxj" Jan 26 11:49:34 crc kubenswrapper[4867]: I0126 11:49:34.791331 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rqbsv\" (UniqueName: \"kubernetes.io/projected/b957382d-a2ad-4564-89ab-9c009ca57825-kube-api-access-rqbsv\") pod \"must-gather-wvdxj\" (UID: \"b957382d-a2ad-4564-89ab-9c009ca57825\") " pod="openshift-must-gather-dcwln/must-gather-wvdxj" Jan 26 11:49:34 crc kubenswrapper[4867]: I0126 11:49:34.791460 4867 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/b957382d-a2ad-4564-89ab-9c009ca57825-must-gather-output\") pod \"must-gather-wvdxj\" (UID: \"b957382d-a2ad-4564-89ab-9c009ca57825\") " pod="openshift-must-gather-dcwln/must-gather-wvdxj" Jan 26 11:49:34 crc kubenswrapper[4867]: I0126 11:49:34.791840 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/b957382d-a2ad-4564-89ab-9c009ca57825-must-gather-output\") pod \"must-gather-wvdxj\" (UID: \"b957382d-a2ad-4564-89ab-9c009ca57825\") " pod="openshift-must-gather-dcwln/must-gather-wvdxj" Jan 26 11:49:34 crc kubenswrapper[4867]: I0126 11:49:34.823018 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rqbsv\" (UniqueName: \"kubernetes.io/projected/b957382d-a2ad-4564-89ab-9c009ca57825-kube-api-access-rqbsv\") pod \"must-gather-wvdxj\" (UID: \"b957382d-a2ad-4564-89ab-9c009ca57825\") " pod="openshift-must-gather-dcwln/must-gather-wvdxj" Jan 26 11:49:34 crc kubenswrapper[4867]: I0126 11:49:34.847343 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-dcwln/must-gather-wvdxj" Jan 26 11:49:35 crc kubenswrapper[4867]: I0126 11:49:35.291324 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-dcwln/must-gather-wvdxj"] Jan 26 11:49:35 crc kubenswrapper[4867]: I0126 11:49:35.298485 4867 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 26 11:49:36 crc kubenswrapper[4867]: I0126 11:49:36.014006 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-dcwln/must-gather-wvdxj" event={"ID":"b957382d-a2ad-4564-89ab-9c009ca57825","Type":"ContainerStarted","Data":"c8b155ddeb1940ae51eecdd36ef3aba817efe22ed30546aee27689ffda16d0b0"} Jan 26 11:49:37 crc kubenswrapper[4867]: I0126 11:49:37.972412 4867 scope.go:117] "RemoveContainer" containerID="d51c5079a8c58b62fc0b3fbaf13fc05462f1920787f6185f11d57cab3cac3ab6" Jan 26 11:49:38 crc kubenswrapper[4867]: I0126 11:49:38.031788 4867 scope.go:117] "RemoveContainer" containerID="e156b077c74155b81316368291323c0695dd45981ecd192fdf74b0e23fda9bad" Jan 26 11:49:38 crc kubenswrapper[4867]: I0126 11:49:38.099937 4867 scope.go:117] "RemoveContainer" containerID="ce71285d4baf1e7a59b451bb335bb3a4518efd9973a1febff4f9e67e53860a51" Jan 26 11:49:51 crc kubenswrapper[4867]: I0126 11:49:51.160936 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-dcwln/must-gather-wvdxj" event={"ID":"b957382d-a2ad-4564-89ab-9c009ca57825","Type":"ContainerStarted","Data":"016ae920305757369e89f43ee0f025e51a92c2336c0c0789e828175d31cd2d24"} Jan 26 11:49:51 crc kubenswrapper[4867]: I0126 11:49:51.161844 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-dcwln/must-gather-wvdxj" event={"ID":"b957382d-a2ad-4564-89ab-9c009ca57825","Type":"ContainerStarted","Data":"204261f5e3c375091d2ef89c3d2856559b0809f29865e0a14fd155507a3baceb"} Jan 26 11:49:51 crc kubenswrapper[4867]: I0126 11:49:51.182160 4867 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-dcwln/must-gather-wvdxj" podStartSLOduration=2.318958594 podStartE2EDuration="17.182134288s" podCreationTimestamp="2026-01-26 11:49:34 +0000 UTC" firstStartedPulling="2026-01-26 11:49:35.29843329 +0000 UTC m=+1924.997008190" lastFinishedPulling="2026-01-26 11:49:50.161608964 +0000 UTC m=+1939.860183884" observedRunningTime="2026-01-26 11:49:51.174801307 +0000 UTC m=+1940.873376297" watchObservedRunningTime="2026-01-26 11:49:51.182134288 +0000 UTC m=+1940.880709198" Jan 26 11:49:53 crc kubenswrapper[4867]: E0126 11:49:53.040307 4867 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 38.102.83.115:41570->38.102.83.115:38025: write tcp 38.102.83.115:41570->38.102.83.115:38025: write: connection reset by peer Jan 26 11:49:54 crc kubenswrapper[4867]: I0126 11:49:54.110130 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-dcwln/crc-debug-v58xk"] Jan 26 11:49:54 crc kubenswrapper[4867]: I0126 11:49:54.111276 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-dcwln/crc-debug-v58xk" Jan 26 11:49:54 crc kubenswrapper[4867]: I0126 11:49:54.113501 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-dcwln"/"default-dockercfg-ht8hv" Jan 26 11:49:54 crc kubenswrapper[4867]: I0126 11:49:54.226950 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/bf796477-4a55-46d5-87a9-674e7a9993ec-host\") pod \"crc-debug-v58xk\" (UID: \"bf796477-4a55-46d5-87a9-674e7a9993ec\") " pod="openshift-must-gather-dcwln/crc-debug-v58xk" Jan 26 11:49:54 crc kubenswrapper[4867]: I0126 11:49:54.227341 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8ltpc\" (UniqueName: \"kubernetes.io/projected/bf796477-4a55-46d5-87a9-674e7a9993ec-kube-api-access-8ltpc\") pod \"crc-debug-v58xk\" (UID: \"bf796477-4a55-46d5-87a9-674e7a9993ec\") " pod="openshift-must-gather-dcwln/crc-debug-v58xk" Jan 26 11:49:54 crc kubenswrapper[4867]: I0126 11:49:54.328670 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/bf796477-4a55-46d5-87a9-674e7a9993ec-host\") pod \"crc-debug-v58xk\" (UID: \"bf796477-4a55-46d5-87a9-674e7a9993ec\") " pod="openshift-must-gather-dcwln/crc-debug-v58xk" Jan 26 11:49:54 crc kubenswrapper[4867]: I0126 11:49:54.328727 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8ltpc\" (UniqueName: \"kubernetes.io/projected/bf796477-4a55-46d5-87a9-674e7a9993ec-kube-api-access-8ltpc\") pod \"crc-debug-v58xk\" (UID: \"bf796477-4a55-46d5-87a9-674e7a9993ec\") " pod="openshift-must-gather-dcwln/crc-debug-v58xk" Jan 26 11:49:54 crc kubenswrapper[4867]: I0126 11:49:54.328842 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: 
\"kubernetes.io/host-path/bf796477-4a55-46d5-87a9-674e7a9993ec-host\") pod \"crc-debug-v58xk\" (UID: \"bf796477-4a55-46d5-87a9-674e7a9993ec\") " pod="openshift-must-gather-dcwln/crc-debug-v58xk" Jan 26 11:49:54 crc kubenswrapper[4867]: I0126 11:49:54.360323 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8ltpc\" (UniqueName: \"kubernetes.io/projected/bf796477-4a55-46d5-87a9-674e7a9993ec-kube-api-access-8ltpc\") pod \"crc-debug-v58xk\" (UID: \"bf796477-4a55-46d5-87a9-674e7a9993ec\") " pod="openshift-must-gather-dcwln/crc-debug-v58xk" Jan 26 11:49:54 crc kubenswrapper[4867]: I0126 11:49:54.432335 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-dcwln/crc-debug-v58xk" Jan 26 11:49:54 crc kubenswrapper[4867]: W0126 11:49:54.459944 4867 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbf796477_4a55_46d5_87a9_674e7a9993ec.slice/crio-c6888e30cfee3ee0d9fef5a9f0b0b92f7ada3630f5f6d6c365d519f0bd9cb64d WatchSource:0}: Error finding container c6888e30cfee3ee0d9fef5a9f0b0b92f7ada3630f5f6d6c365d519f0bd9cb64d: Status 404 returned error can't find the container with id c6888e30cfee3ee0d9fef5a9f0b0b92f7ada3630f5f6d6c365d519f0bd9cb64d Jan 26 11:49:55 crc kubenswrapper[4867]: I0126 11:49:55.194700 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-dcwln/crc-debug-v58xk" event={"ID":"bf796477-4a55-46d5-87a9-674e7a9993ec","Type":"ContainerStarted","Data":"c6888e30cfee3ee0d9fef5a9f0b0b92f7ada3630f5f6d6c365d519f0bd9cb64d"} Jan 26 11:50:04 crc kubenswrapper[4867]: I0126 11:50:04.041457 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-cell-mapping-f7x8z"] Jan 26 11:50:04 crc kubenswrapper[4867]: I0126 11:50:04.048744 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-cell-mapping-f7x8z"] Jan 26 11:50:04 crc 
kubenswrapper[4867]: I0126 11:50:04.576630 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c4a7d7f8-2972-4e37-a039-f6cfd3d0fbaa" path="/var/lib/kubelet/pods/c4a7d7f8-2972-4e37-a039-f6cfd3d0fbaa/volumes" Jan 26 11:50:24 crc kubenswrapper[4867]: E0126 11:50:24.841442 4867 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6ab858aed98e4fe57e6b144da8e90ad5d6698bb4cc5521206f5c05809f0f9296" Jan 26 11:50:24 crc kubenswrapper[4867]: E0126 11:50:24.842159 4867 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:container-00,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6ab858aed98e4fe57e6b144da8e90ad5d6698bb4cc5521206f5c05809f0f9296,Command:[chroot /host bash -c echo 'TOOLBOX_NAME=toolbox-osp' > /root/.toolboxrc ; rm -rf \"/var/tmp/sos-osp\" && mkdir -p \"/var/tmp/sos-osp\" && sudo podman rm --force toolbox-osp; sudo --preserve-env podman pull --authfile /var/lib/kubelet/config.json registry.redhat.io/rhel9/support-tools && toolbox sos report --batch --all-logs --only-plugins block,cifs,crio,devicemapper,devices,firewall_tables,firewalld,iscsi,lvm2,memory,multipath,nfs,nis,nvme,podman,process,processor,selinux,scsi,udev,logs,crypto --tmp-dir=\"/var/tmp/sos-osp\" && if [[ \"$(ls /var/log/pods/*/{*.log.*,*/*.log.*} 2>/dev/null)\" != '' ]]; then tar --ignore-failed-read --warning=no-file-changed -cJf \"/var/tmp/sos-osp/podlogs.tar.xz\" --transform 's,^,podlogs/,' /var/log/pods/*/{*.log.*,*/*.log.*} || true; 
fi],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:TMOUT,Value:900,ValueFrom:nil,},EnvVar{Name:HOST,Value:/host,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:host,ReadOnly:false,MountPath:/host,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8ltpc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod crc-debug-v58xk_openshift-must-gather-dcwln(bf796477-4a55-46d5-87a9-674e7a9993ec): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 26 11:50:24 crc kubenswrapper[4867]: E0126 11:50:24.843703 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"container-00\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openshift-must-gather-dcwln/crc-debug-v58xk" podUID="bf796477-4a55-46d5-87a9-674e7a9993ec" Jan 26 11:50:25 crc kubenswrapper[4867]: E0126 11:50:25.445831 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"container-00\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6ab858aed98e4fe57e6b144da8e90ad5d6698bb4cc5521206f5c05809f0f9296\\\"\"" pod="openshift-must-gather-dcwln/crc-debug-v58xk" podUID="bf796477-4a55-46d5-87a9-674e7a9993ec" Jan 26 11:50:38 crc kubenswrapper[4867]: I0126 11:50:38.216940 4867 scope.go:117] "RemoveContainer" containerID="ce1da23e7255f27d9a6dcc9e3c5f233195dd856b1293f845768b80b723fafe8c" Jan 26 11:50:41 crc kubenswrapper[4867]: I0126 11:50:41.199885 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-kh86r"] Jan 26 11:50:41 crc kubenswrapper[4867]: I0126 11:50:41.206868 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-kh86r" Jan 26 11:50:41 crc kubenswrapper[4867]: I0126 11:50:41.236044 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-kh86r"] Jan 26 11:50:41 crc kubenswrapper[4867]: I0126 11:50:41.323823 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6f0582f6-2d56-47c0-a2a9-b891c686f4b8-catalog-content\") pod \"redhat-operators-kh86r\" (UID: \"6f0582f6-2d56-47c0-a2a9-b891c686f4b8\") " pod="openshift-marketplace/redhat-operators-kh86r" Jan 26 11:50:41 crc kubenswrapper[4867]: I0126 11:50:41.324208 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6f0582f6-2d56-47c0-a2a9-b891c686f4b8-utilities\") pod \"redhat-operators-kh86r\" (UID: \"6f0582f6-2d56-47c0-a2a9-b891c686f4b8\") " pod="openshift-marketplace/redhat-operators-kh86r" Jan 26 11:50:41 crc kubenswrapper[4867]: I0126 11:50:41.325154 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qdbv4\" (UniqueName: 
\"kubernetes.io/projected/6f0582f6-2d56-47c0-a2a9-b891c686f4b8-kube-api-access-qdbv4\") pod \"redhat-operators-kh86r\" (UID: \"6f0582f6-2d56-47c0-a2a9-b891c686f4b8\") " pod="openshift-marketplace/redhat-operators-kh86r" Jan 26 11:50:41 crc kubenswrapper[4867]: I0126 11:50:41.427422 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6f0582f6-2d56-47c0-a2a9-b891c686f4b8-catalog-content\") pod \"redhat-operators-kh86r\" (UID: \"6f0582f6-2d56-47c0-a2a9-b891c686f4b8\") " pod="openshift-marketplace/redhat-operators-kh86r" Jan 26 11:50:41 crc kubenswrapper[4867]: I0126 11:50:41.427500 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6f0582f6-2d56-47c0-a2a9-b891c686f4b8-utilities\") pod \"redhat-operators-kh86r\" (UID: \"6f0582f6-2d56-47c0-a2a9-b891c686f4b8\") " pod="openshift-marketplace/redhat-operators-kh86r" Jan 26 11:50:41 crc kubenswrapper[4867]: I0126 11:50:41.427547 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qdbv4\" (UniqueName: \"kubernetes.io/projected/6f0582f6-2d56-47c0-a2a9-b891c686f4b8-kube-api-access-qdbv4\") pod \"redhat-operators-kh86r\" (UID: \"6f0582f6-2d56-47c0-a2a9-b891c686f4b8\") " pod="openshift-marketplace/redhat-operators-kh86r" Jan 26 11:50:41 crc kubenswrapper[4867]: I0126 11:50:41.428182 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6f0582f6-2d56-47c0-a2a9-b891c686f4b8-catalog-content\") pod \"redhat-operators-kh86r\" (UID: \"6f0582f6-2d56-47c0-a2a9-b891c686f4b8\") " pod="openshift-marketplace/redhat-operators-kh86r" Jan 26 11:50:41 crc kubenswrapper[4867]: I0126 11:50:41.428383 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/6f0582f6-2d56-47c0-a2a9-b891c686f4b8-utilities\") pod \"redhat-operators-kh86r\" (UID: \"6f0582f6-2d56-47c0-a2a9-b891c686f4b8\") " pod="openshift-marketplace/redhat-operators-kh86r" Jan 26 11:50:41 crc kubenswrapper[4867]: I0126 11:50:41.457901 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qdbv4\" (UniqueName: \"kubernetes.io/projected/6f0582f6-2d56-47c0-a2a9-b891c686f4b8-kube-api-access-qdbv4\") pod \"redhat-operators-kh86r\" (UID: \"6f0582f6-2d56-47c0-a2a9-b891c686f4b8\") " pod="openshift-marketplace/redhat-operators-kh86r" Jan 26 11:50:41 crc kubenswrapper[4867]: I0126 11:50:41.541863 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-kh86r" Jan 26 11:50:42 crc kubenswrapper[4867]: I0126 11:50:42.114763 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-kh86r"] Jan 26 11:50:42 crc kubenswrapper[4867]: W0126 11:50:42.121244 4867 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6f0582f6_2d56_47c0_a2a9_b891c686f4b8.slice/crio-f9051fe4c8fd49ec50c0fb7e867be02cfd5cf95c6ba34cdb0f2f495049dd1661 WatchSource:0}: Error finding container f9051fe4c8fd49ec50c0fb7e867be02cfd5cf95c6ba34cdb0f2f495049dd1661: Status 404 returned error can't find the container with id f9051fe4c8fd49ec50c0fb7e867be02cfd5cf95c6ba34cdb0f2f495049dd1661 Jan 26 11:50:42 crc kubenswrapper[4867]: I0126 11:50:42.590757 4867 generic.go:334] "Generic (PLEG): container finished" podID="6f0582f6-2d56-47c0-a2a9-b891c686f4b8" containerID="8cc6adedd1418d5ccfacdbfa0b5c1746f492f103f25703b74cf4118622dcc2f6" exitCode=0 Jan 26 11:50:42 crc kubenswrapper[4867]: I0126 11:50:42.590851 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-kh86r" 
event={"ID":"6f0582f6-2d56-47c0-a2a9-b891c686f4b8","Type":"ContainerDied","Data":"8cc6adedd1418d5ccfacdbfa0b5c1746f492f103f25703b74cf4118622dcc2f6"} Jan 26 11:50:42 crc kubenswrapper[4867]: I0126 11:50:42.591140 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-kh86r" event={"ID":"6f0582f6-2d56-47c0-a2a9-b891c686f4b8","Type":"ContainerStarted","Data":"f9051fe4c8fd49ec50c0fb7e867be02cfd5cf95c6ba34cdb0f2f495049dd1661"} Jan 26 11:50:42 crc kubenswrapper[4867]: I0126 11:50:42.593035 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-dcwln/crc-debug-v58xk" event={"ID":"bf796477-4a55-46d5-87a9-674e7a9993ec","Type":"ContainerStarted","Data":"e4245bccf8a6fc432bf1e4c1a26b2b6046b242635f99a0cd3cf960ecb0aea662"} Jan 26 11:50:42 crc kubenswrapper[4867]: I0126 11:50:42.631573 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-dcwln/crc-debug-v58xk" podStartSLOduration=2.133988467 podStartE2EDuration="48.631556513s" podCreationTimestamp="2026-01-26 11:49:54 +0000 UTC" firstStartedPulling="2026-01-26 11:49:54.462041798 +0000 UTC m=+1944.160616708" lastFinishedPulling="2026-01-26 11:50:40.959609834 +0000 UTC m=+1990.658184754" observedRunningTime="2026-01-26 11:50:42.628473728 +0000 UTC m=+1992.327048638" watchObservedRunningTime="2026-01-26 11:50:42.631556513 +0000 UTC m=+1992.330131423" Jan 26 11:50:43 crc kubenswrapper[4867]: I0126 11:50:43.604011 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-kh86r" event={"ID":"6f0582f6-2d56-47c0-a2a9-b891c686f4b8","Type":"ContainerStarted","Data":"9958da7cdbb5c3a00012813b9e0f0e972a0c309d36cb4ff23ec4de00a163da0f"} Jan 26 11:50:44 crc kubenswrapper[4867]: I0126 11:50:44.616209 4867 generic.go:334] "Generic (PLEG): container finished" podID="6f0582f6-2d56-47c0-a2a9-b891c686f4b8" containerID="9958da7cdbb5c3a00012813b9e0f0e972a0c309d36cb4ff23ec4de00a163da0f" exitCode=0 
Jan 26 11:50:44 crc kubenswrapper[4867]: I0126 11:50:44.616522 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-kh86r" event={"ID":"6f0582f6-2d56-47c0-a2a9-b891c686f4b8","Type":"ContainerDied","Data":"9958da7cdbb5c3a00012813b9e0f0e972a0c309d36cb4ff23ec4de00a163da0f"} Jan 26 11:50:46 crc kubenswrapper[4867]: I0126 11:50:46.635903 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-kh86r" event={"ID":"6f0582f6-2d56-47c0-a2a9-b891c686f4b8","Type":"ContainerStarted","Data":"40f4fc7480217dbed9eafed17315f5efdf135240f76432d80c022010dc932dbc"} Jan 26 11:50:47 crc kubenswrapper[4867]: I0126 11:50:47.660510 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-kh86r" podStartSLOduration=3.692871667 podStartE2EDuration="6.660493838s" podCreationTimestamp="2026-01-26 11:50:41 +0000 UTC" firstStartedPulling="2026-01-26 11:50:42.592086158 +0000 UTC m=+1992.290661068" lastFinishedPulling="2026-01-26 11:50:45.559708299 +0000 UTC m=+1995.258283239" observedRunningTime="2026-01-26 11:50:47.65615503 +0000 UTC m=+1997.354729940" watchObservedRunningTime="2026-01-26 11:50:47.660493838 +0000 UTC m=+1997.359068748" Jan 26 11:50:51 crc kubenswrapper[4867]: I0126 11:50:51.542744 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-kh86r" Jan 26 11:50:51 crc kubenswrapper[4867]: I0126 11:50:51.543539 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-kh86r" Jan 26 11:50:52 crc kubenswrapper[4867]: I0126 11:50:52.603362 4867 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-kh86r" podUID="6f0582f6-2d56-47c0-a2a9-b891c686f4b8" containerName="registry-server" probeResult="failure" output=< Jan 26 11:50:52 crc kubenswrapper[4867]: timeout: failed to connect service 
":50051" within 1s Jan 26 11:50:52 crc kubenswrapper[4867]: > Jan 26 11:51:02 crc kubenswrapper[4867]: I0126 11:51:02.604859 4867 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-kh86r" podUID="6f0582f6-2d56-47c0-a2a9-b891c686f4b8" containerName="registry-server" probeResult="failure" output=< Jan 26 11:51:02 crc kubenswrapper[4867]: timeout: failed to connect service ":50051" within 1s Jan 26 11:51:02 crc kubenswrapper[4867]: > Jan 26 11:51:06 crc kubenswrapper[4867]: I0126 11:51:06.294354 4867 patch_prober.go:28] interesting pod/machine-config-daemon-g6cth container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 11:51:06 crc kubenswrapper[4867]: I0126 11:51:06.295277 4867 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-g6cth" podUID="115cad9f-057f-4e63-b408-8fa7a358a191" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 11:51:11 crc kubenswrapper[4867]: I0126 11:51:11.586466 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-kh86r" Jan 26 11:51:11 crc kubenswrapper[4867]: I0126 11:51:11.634021 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-kh86r" Jan 26 11:51:11 crc kubenswrapper[4867]: I0126 11:51:11.830187 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-kh86r"] Jan 26 11:51:12 crc kubenswrapper[4867]: I0126 11:51:12.862371 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-kh86r" 
podUID="6f0582f6-2d56-47c0-a2a9-b891c686f4b8" containerName="registry-server" containerID="cri-o://40f4fc7480217dbed9eafed17315f5efdf135240f76432d80c022010dc932dbc" gracePeriod=2 Jan 26 11:51:13 crc kubenswrapper[4867]: I0126 11:51:13.305238 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-kh86r" Jan 26 11:51:13 crc kubenswrapper[4867]: I0126 11:51:13.381994 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6f0582f6-2d56-47c0-a2a9-b891c686f4b8-utilities\") pod \"6f0582f6-2d56-47c0-a2a9-b891c686f4b8\" (UID: \"6f0582f6-2d56-47c0-a2a9-b891c686f4b8\") " Jan 26 11:51:13 crc kubenswrapper[4867]: I0126 11:51:13.382082 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qdbv4\" (UniqueName: \"kubernetes.io/projected/6f0582f6-2d56-47c0-a2a9-b891c686f4b8-kube-api-access-qdbv4\") pod \"6f0582f6-2d56-47c0-a2a9-b891c686f4b8\" (UID: \"6f0582f6-2d56-47c0-a2a9-b891c686f4b8\") " Jan 26 11:51:13 crc kubenswrapper[4867]: I0126 11:51:13.382341 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6f0582f6-2d56-47c0-a2a9-b891c686f4b8-catalog-content\") pod \"6f0582f6-2d56-47c0-a2a9-b891c686f4b8\" (UID: \"6f0582f6-2d56-47c0-a2a9-b891c686f4b8\") " Jan 26 11:51:13 crc kubenswrapper[4867]: I0126 11:51:13.383751 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6f0582f6-2d56-47c0-a2a9-b891c686f4b8-utilities" (OuterVolumeSpecName: "utilities") pod "6f0582f6-2d56-47c0-a2a9-b891c686f4b8" (UID: "6f0582f6-2d56-47c0-a2a9-b891c686f4b8"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 11:51:13 crc kubenswrapper[4867]: I0126 11:51:13.389689 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6f0582f6-2d56-47c0-a2a9-b891c686f4b8-kube-api-access-qdbv4" (OuterVolumeSpecName: "kube-api-access-qdbv4") pod "6f0582f6-2d56-47c0-a2a9-b891c686f4b8" (UID: "6f0582f6-2d56-47c0-a2a9-b891c686f4b8"). InnerVolumeSpecName "kube-api-access-qdbv4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 11:51:13 crc kubenswrapper[4867]: I0126 11:51:13.485051 4867 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6f0582f6-2d56-47c0-a2a9-b891c686f4b8-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 11:51:13 crc kubenswrapper[4867]: I0126 11:51:13.485096 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qdbv4\" (UniqueName: \"kubernetes.io/projected/6f0582f6-2d56-47c0-a2a9-b891c686f4b8-kube-api-access-qdbv4\") on node \"crc\" DevicePath \"\"" Jan 26 11:51:13 crc kubenswrapper[4867]: I0126 11:51:13.512476 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6f0582f6-2d56-47c0-a2a9-b891c686f4b8-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "6f0582f6-2d56-47c0-a2a9-b891c686f4b8" (UID: "6f0582f6-2d56-47c0-a2a9-b891c686f4b8"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 11:51:13 crc kubenswrapper[4867]: I0126 11:51:13.587372 4867 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6f0582f6-2d56-47c0-a2a9-b891c686f4b8-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 11:51:13 crc kubenswrapper[4867]: I0126 11:51:13.870756 4867 generic.go:334] "Generic (PLEG): container finished" podID="6f0582f6-2d56-47c0-a2a9-b891c686f4b8" containerID="40f4fc7480217dbed9eafed17315f5efdf135240f76432d80c022010dc932dbc" exitCode=0 Jan 26 11:51:13 crc kubenswrapper[4867]: I0126 11:51:13.870797 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-kh86r" event={"ID":"6f0582f6-2d56-47c0-a2a9-b891c686f4b8","Type":"ContainerDied","Data":"40f4fc7480217dbed9eafed17315f5efdf135240f76432d80c022010dc932dbc"} Jan 26 11:51:13 crc kubenswrapper[4867]: I0126 11:51:13.870821 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-kh86r" event={"ID":"6f0582f6-2d56-47c0-a2a9-b891c686f4b8","Type":"ContainerDied","Data":"f9051fe4c8fd49ec50c0fb7e867be02cfd5cf95c6ba34cdb0f2f495049dd1661"} Jan 26 11:51:13 crc kubenswrapper[4867]: I0126 11:51:13.870836 4867 scope.go:117] "RemoveContainer" containerID="40f4fc7480217dbed9eafed17315f5efdf135240f76432d80c022010dc932dbc" Jan 26 11:51:13 crc kubenswrapper[4867]: I0126 11:51:13.870946 4867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-kh86r" Jan 26 11:51:13 crc kubenswrapper[4867]: I0126 11:51:13.910923 4867 scope.go:117] "RemoveContainer" containerID="9958da7cdbb5c3a00012813b9e0f0e972a0c309d36cb4ff23ec4de00a163da0f" Jan 26 11:51:13 crc kubenswrapper[4867]: I0126 11:51:13.919619 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-kh86r"] Jan 26 11:51:13 crc kubenswrapper[4867]: I0126 11:51:13.931335 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-kh86r"] Jan 26 11:51:13 crc kubenswrapper[4867]: I0126 11:51:13.944433 4867 scope.go:117] "RemoveContainer" containerID="8cc6adedd1418d5ccfacdbfa0b5c1746f492f103f25703b74cf4118622dcc2f6" Jan 26 11:51:13 crc kubenswrapper[4867]: I0126 11:51:13.985510 4867 scope.go:117] "RemoveContainer" containerID="40f4fc7480217dbed9eafed17315f5efdf135240f76432d80c022010dc932dbc" Jan 26 11:51:13 crc kubenswrapper[4867]: E0126 11:51:13.986099 4867 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"40f4fc7480217dbed9eafed17315f5efdf135240f76432d80c022010dc932dbc\": container with ID starting with 40f4fc7480217dbed9eafed17315f5efdf135240f76432d80c022010dc932dbc not found: ID does not exist" containerID="40f4fc7480217dbed9eafed17315f5efdf135240f76432d80c022010dc932dbc" Jan 26 11:51:13 crc kubenswrapper[4867]: I0126 11:51:13.986159 4867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"40f4fc7480217dbed9eafed17315f5efdf135240f76432d80c022010dc932dbc"} err="failed to get container status \"40f4fc7480217dbed9eafed17315f5efdf135240f76432d80c022010dc932dbc\": rpc error: code = NotFound desc = could not find container \"40f4fc7480217dbed9eafed17315f5efdf135240f76432d80c022010dc932dbc\": container with ID starting with 40f4fc7480217dbed9eafed17315f5efdf135240f76432d80c022010dc932dbc not found: ID does 
not exist" Jan 26 11:51:13 crc kubenswrapper[4867]: I0126 11:51:13.986183 4867 scope.go:117] "RemoveContainer" containerID="9958da7cdbb5c3a00012813b9e0f0e972a0c309d36cb4ff23ec4de00a163da0f" Jan 26 11:51:13 crc kubenswrapper[4867]: E0126 11:51:13.986639 4867 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9958da7cdbb5c3a00012813b9e0f0e972a0c309d36cb4ff23ec4de00a163da0f\": container with ID starting with 9958da7cdbb5c3a00012813b9e0f0e972a0c309d36cb4ff23ec4de00a163da0f not found: ID does not exist" containerID="9958da7cdbb5c3a00012813b9e0f0e972a0c309d36cb4ff23ec4de00a163da0f" Jan 26 11:51:13 crc kubenswrapper[4867]: I0126 11:51:13.986678 4867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9958da7cdbb5c3a00012813b9e0f0e972a0c309d36cb4ff23ec4de00a163da0f"} err="failed to get container status \"9958da7cdbb5c3a00012813b9e0f0e972a0c309d36cb4ff23ec4de00a163da0f\": rpc error: code = NotFound desc = could not find container \"9958da7cdbb5c3a00012813b9e0f0e972a0c309d36cb4ff23ec4de00a163da0f\": container with ID starting with 9958da7cdbb5c3a00012813b9e0f0e972a0c309d36cb4ff23ec4de00a163da0f not found: ID does not exist" Jan 26 11:51:13 crc kubenswrapper[4867]: I0126 11:51:13.986703 4867 scope.go:117] "RemoveContainer" containerID="8cc6adedd1418d5ccfacdbfa0b5c1746f492f103f25703b74cf4118622dcc2f6" Jan 26 11:51:13 crc kubenswrapper[4867]: E0126 11:51:13.986946 4867 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8cc6adedd1418d5ccfacdbfa0b5c1746f492f103f25703b74cf4118622dcc2f6\": container with ID starting with 8cc6adedd1418d5ccfacdbfa0b5c1746f492f103f25703b74cf4118622dcc2f6 not found: ID does not exist" containerID="8cc6adedd1418d5ccfacdbfa0b5c1746f492f103f25703b74cf4118622dcc2f6" Jan 26 11:51:13 crc kubenswrapper[4867]: I0126 11:51:13.986979 4867 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8cc6adedd1418d5ccfacdbfa0b5c1746f492f103f25703b74cf4118622dcc2f6"} err="failed to get container status \"8cc6adedd1418d5ccfacdbfa0b5c1746f492f103f25703b74cf4118622dcc2f6\": rpc error: code = NotFound desc = could not find container \"8cc6adedd1418d5ccfacdbfa0b5c1746f492f103f25703b74cf4118622dcc2f6\": container with ID starting with 8cc6adedd1418d5ccfacdbfa0b5c1746f492f103f25703b74cf4118622dcc2f6 not found: ID does not exist" Jan 26 11:51:14 crc kubenswrapper[4867]: I0126 11:51:14.587926 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6f0582f6-2d56-47c0-a2a9-b891c686f4b8" path="/var/lib/kubelet/pods/6f0582f6-2d56-47c0-a2a9-b891c686f4b8/volumes" Jan 26 11:51:22 crc kubenswrapper[4867]: I0126 11:51:22.951710 4867 generic.go:334] "Generic (PLEG): container finished" podID="bf796477-4a55-46d5-87a9-674e7a9993ec" containerID="e4245bccf8a6fc432bf1e4c1a26b2b6046b242635f99a0cd3cf960ecb0aea662" exitCode=0 Jan 26 11:51:22 crc kubenswrapper[4867]: I0126 11:51:22.951906 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-dcwln/crc-debug-v58xk" event={"ID":"bf796477-4a55-46d5-87a9-674e7a9993ec","Type":"ContainerDied","Data":"e4245bccf8a6fc432bf1e4c1a26b2b6046b242635f99a0cd3cf960ecb0aea662"} Jan 26 11:51:24 crc kubenswrapper[4867]: I0126 11:51:24.066183 4867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-dcwln/crc-debug-v58xk" Jan 26 11:51:24 crc kubenswrapper[4867]: I0126 11:51:24.102521 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-dcwln/crc-debug-v58xk"] Jan 26 11:51:24 crc kubenswrapper[4867]: I0126 11:51:24.110699 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-dcwln/crc-debug-v58xk"] Jan 26 11:51:24 crc kubenswrapper[4867]: I0126 11:51:24.163873 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/bf796477-4a55-46d5-87a9-674e7a9993ec-host\") pod \"bf796477-4a55-46d5-87a9-674e7a9993ec\" (UID: \"bf796477-4a55-46d5-87a9-674e7a9993ec\") " Jan 26 11:51:24 crc kubenswrapper[4867]: I0126 11:51:24.163962 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8ltpc\" (UniqueName: \"kubernetes.io/projected/bf796477-4a55-46d5-87a9-674e7a9993ec-kube-api-access-8ltpc\") pod \"bf796477-4a55-46d5-87a9-674e7a9993ec\" (UID: \"bf796477-4a55-46d5-87a9-674e7a9993ec\") " Jan 26 11:51:24 crc kubenswrapper[4867]: I0126 11:51:24.163992 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bf796477-4a55-46d5-87a9-674e7a9993ec-host" (OuterVolumeSpecName: "host") pod "bf796477-4a55-46d5-87a9-674e7a9993ec" (UID: "bf796477-4a55-46d5-87a9-674e7a9993ec"). InnerVolumeSpecName "host". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 11:51:24 crc kubenswrapper[4867]: I0126 11:51:24.164515 4867 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/bf796477-4a55-46d5-87a9-674e7a9993ec-host\") on node \"crc\" DevicePath \"\"" Jan 26 11:51:24 crc kubenswrapper[4867]: I0126 11:51:24.169594 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf796477-4a55-46d5-87a9-674e7a9993ec-kube-api-access-8ltpc" (OuterVolumeSpecName: "kube-api-access-8ltpc") pod "bf796477-4a55-46d5-87a9-674e7a9993ec" (UID: "bf796477-4a55-46d5-87a9-674e7a9993ec"). InnerVolumeSpecName "kube-api-access-8ltpc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 11:51:24 crc kubenswrapper[4867]: I0126 11:51:24.266345 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8ltpc\" (UniqueName: \"kubernetes.io/projected/bf796477-4a55-46d5-87a9-674e7a9993ec-kube-api-access-8ltpc\") on node \"crc\" DevicePath \"\"" Jan 26 11:51:24 crc kubenswrapper[4867]: I0126 11:51:24.576682 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bf796477-4a55-46d5-87a9-674e7a9993ec" path="/var/lib/kubelet/pods/bf796477-4a55-46d5-87a9-674e7a9993ec/volumes" Jan 26 11:51:24 crc kubenswrapper[4867]: I0126 11:51:24.967480 4867 scope.go:117] "RemoveContainer" containerID="e4245bccf8a6fc432bf1e4c1a26b2b6046b242635f99a0cd3cf960ecb0aea662" Jan 26 11:51:24 crc kubenswrapper[4867]: I0126 11:51:24.967814 4867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-dcwln/crc-debug-v58xk" Jan 26 11:51:25 crc kubenswrapper[4867]: I0126 11:51:25.265550 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-dcwln/crc-debug-2zjds"] Jan 26 11:51:25 crc kubenswrapper[4867]: E0126 11:51:25.265935 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bf796477-4a55-46d5-87a9-674e7a9993ec" containerName="container-00" Jan 26 11:51:25 crc kubenswrapper[4867]: I0126 11:51:25.265946 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="bf796477-4a55-46d5-87a9-674e7a9993ec" containerName="container-00" Jan 26 11:51:25 crc kubenswrapper[4867]: E0126 11:51:25.265962 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6f0582f6-2d56-47c0-a2a9-b891c686f4b8" containerName="registry-server" Jan 26 11:51:25 crc kubenswrapper[4867]: I0126 11:51:25.265969 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="6f0582f6-2d56-47c0-a2a9-b891c686f4b8" containerName="registry-server" Jan 26 11:51:25 crc kubenswrapper[4867]: E0126 11:51:25.265987 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6f0582f6-2d56-47c0-a2a9-b891c686f4b8" containerName="extract-content" Jan 26 11:51:25 crc kubenswrapper[4867]: I0126 11:51:25.265993 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="6f0582f6-2d56-47c0-a2a9-b891c686f4b8" containerName="extract-content" Jan 26 11:51:25 crc kubenswrapper[4867]: E0126 11:51:25.266003 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6f0582f6-2d56-47c0-a2a9-b891c686f4b8" containerName="extract-utilities" Jan 26 11:51:25 crc kubenswrapper[4867]: I0126 11:51:25.266009 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="6f0582f6-2d56-47c0-a2a9-b891c686f4b8" containerName="extract-utilities" Jan 26 11:51:25 crc kubenswrapper[4867]: I0126 11:51:25.266188 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="bf796477-4a55-46d5-87a9-674e7a9993ec" 
containerName="container-00" Jan 26 11:51:25 crc kubenswrapper[4867]: I0126 11:51:25.266232 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="6f0582f6-2d56-47c0-a2a9-b891c686f4b8" containerName="registry-server" Jan 26 11:51:25 crc kubenswrapper[4867]: I0126 11:51:25.266838 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-dcwln/crc-debug-2zjds" Jan 26 11:51:25 crc kubenswrapper[4867]: I0126 11:51:25.270090 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-dcwln"/"default-dockercfg-ht8hv" Jan 26 11:51:25 crc kubenswrapper[4867]: I0126 11:51:25.387870 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/8b7ecc28-68d9-455e-ad26-fb320f75c414-host\") pod \"crc-debug-2zjds\" (UID: \"8b7ecc28-68d9-455e-ad26-fb320f75c414\") " pod="openshift-must-gather-dcwln/crc-debug-2zjds" Jan 26 11:51:25 crc kubenswrapper[4867]: I0126 11:51:25.387959 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9qghq\" (UniqueName: \"kubernetes.io/projected/8b7ecc28-68d9-455e-ad26-fb320f75c414-kube-api-access-9qghq\") pod \"crc-debug-2zjds\" (UID: \"8b7ecc28-68d9-455e-ad26-fb320f75c414\") " pod="openshift-must-gather-dcwln/crc-debug-2zjds" Jan 26 11:51:25 crc kubenswrapper[4867]: I0126 11:51:25.489695 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/8b7ecc28-68d9-455e-ad26-fb320f75c414-host\") pod \"crc-debug-2zjds\" (UID: \"8b7ecc28-68d9-455e-ad26-fb320f75c414\") " pod="openshift-must-gather-dcwln/crc-debug-2zjds" Jan 26 11:51:25 crc kubenswrapper[4867]: I0126 11:51:25.489770 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9qghq\" (UniqueName: 
\"kubernetes.io/projected/8b7ecc28-68d9-455e-ad26-fb320f75c414-kube-api-access-9qghq\") pod \"crc-debug-2zjds\" (UID: \"8b7ecc28-68d9-455e-ad26-fb320f75c414\") " pod="openshift-must-gather-dcwln/crc-debug-2zjds" Jan 26 11:51:25 crc kubenswrapper[4867]: I0126 11:51:25.489866 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/8b7ecc28-68d9-455e-ad26-fb320f75c414-host\") pod \"crc-debug-2zjds\" (UID: \"8b7ecc28-68d9-455e-ad26-fb320f75c414\") " pod="openshift-must-gather-dcwln/crc-debug-2zjds" Jan 26 11:51:25 crc kubenswrapper[4867]: I0126 11:51:25.509586 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9qghq\" (UniqueName: \"kubernetes.io/projected/8b7ecc28-68d9-455e-ad26-fb320f75c414-kube-api-access-9qghq\") pod \"crc-debug-2zjds\" (UID: \"8b7ecc28-68d9-455e-ad26-fb320f75c414\") " pod="openshift-must-gather-dcwln/crc-debug-2zjds" Jan 26 11:51:25 crc kubenswrapper[4867]: I0126 11:51:25.583928 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-dcwln/crc-debug-2zjds" Jan 26 11:51:25 crc kubenswrapper[4867]: I0126 11:51:25.980860 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-dcwln/crc-debug-2zjds" event={"ID":"8b7ecc28-68d9-455e-ad26-fb320f75c414","Type":"ContainerStarted","Data":"e0ccc986cff2eaca54f16156438290a3b793af6cf77e27df5ddb8275c0e20e6d"} Jan 26 11:51:26 crc kubenswrapper[4867]: I0126 11:51:26.989819 4867 generic.go:334] "Generic (PLEG): container finished" podID="8b7ecc28-68d9-455e-ad26-fb320f75c414" containerID="f1fe625b39d1529879c682554ce53289b82b7b237eaa28e9c31a1c8bbb8fa574" exitCode=0 Jan 26 11:51:26 crc kubenswrapper[4867]: I0126 11:51:26.989865 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-dcwln/crc-debug-2zjds" event={"ID":"8b7ecc28-68d9-455e-ad26-fb320f75c414","Type":"ContainerDied","Data":"f1fe625b39d1529879c682554ce53289b82b7b237eaa28e9c31a1c8bbb8fa574"} Jan 26 11:51:27 crc kubenswrapper[4867]: I0126 11:51:27.595065 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-dcwln/crc-debug-2zjds"] Jan 26 11:51:27 crc kubenswrapper[4867]: I0126 11:51:27.602188 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-dcwln/crc-debug-2zjds"] Jan 26 11:51:28 crc kubenswrapper[4867]: I0126 11:51:28.103861 4867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-dcwln/crc-debug-2zjds" Jan 26 11:51:28 crc kubenswrapper[4867]: I0126 11:51:28.244208 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9qghq\" (UniqueName: \"kubernetes.io/projected/8b7ecc28-68d9-455e-ad26-fb320f75c414-kube-api-access-9qghq\") pod \"8b7ecc28-68d9-455e-ad26-fb320f75c414\" (UID: \"8b7ecc28-68d9-455e-ad26-fb320f75c414\") " Jan 26 11:51:28 crc kubenswrapper[4867]: I0126 11:51:28.245320 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/8b7ecc28-68d9-455e-ad26-fb320f75c414-host\") pod \"8b7ecc28-68d9-455e-ad26-fb320f75c414\" (UID: \"8b7ecc28-68d9-455e-ad26-fb320f75c414\") " Jan 26 11:51:28 crc kubenswrapper[4867]: I0126 11:51:28.245485 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8b7ecc28-68d9-455e-ad26-fb320f75c414-host" (OuterVolumeSpecName: "host") pod "8b7ecc28-68d9-455e-ad26-fb320f75c414" (UID: "8b7ecc28-68d9-455e-ad26-fb320f75c414"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 11:51:28 crc kubenswrapper[4867]: I0126 11:51:28.252575 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8b7ecc28-68d9-455e-ad26-fb320f75c414-kube-api-access-9qghq" (OuterVolumeSpecName: "kube-api-access-9qghq") pod "8b7ecc28-68d9-455e-ad26-fb320f75c414" (UID: "8b7ecc28-68d9-455e-ad26-fb320f75c414"). InnerVolumeSpecName "kube-api-access-9qghq". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 11:51:28 crc kubenswrapper[4867]: I0126 11:51:28.347695 4867 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/8b7ecc28-68d9-455e-ad26-fb320f75c414-host\") on node \"crc\" DevicePath \"\"" Jan 26 11:51:28 crc kubenswrapper[4867]: I0126 11:51:28.347955 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9qghq\" (UniqueName: \"kubernetes.io/projected/8b7ecc28-68d9-455e-ad26-fb320f75c414-kube-api-access-9qghq\") on node \"crc\" DevicePath \"\"" Jan 26 11:51:28 crc kubenswrapper[4867]: I0126 11:51:28.578825 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8b7ecc28-68d9-455e-ad26-fb320f75c414" path="/var/lib/kubelet/pods/8b7ecc28-68d9-455e-ad26-fb320f75c414/volumes" Jan 26 11:51:28 crc kubenswrapper[4867]: I0126 11:51:28.770476 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-dcwln/crc-debug-hpjrz"] Jan 26 11:51:28 crc kubenswrapper[4867]: E0126 11:51:28.771009 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8b7ecc28-68d9-455e-ad26-fb320f75c414" containerName="container-00" Jan 26 11:51:28 crc kubenswrapper[4867]: I0126 11:51:28.771031 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="8b7ecc28-68d9-455e-ad26-fb320f75c414" containerName="container-00" Jan 26 11:51:28 crc kubenswrapper[4867]: I0126 11:51:28.771358 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="8b7ecc28-68d9-455e-ad26-fb320f75c414" containerName="container-00" Jan 26 11:51:28 crc kubenswrapper[4867]: I0126 11:51:28.772191 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-dcwln/crc-debug-hpjrz" Jan 26 11:51:28 crc kubenswrapper[4867]: I0126 11:51:28.959632 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lsd5d\" (UniqueName: \"kubernetes.io/projected/66d350e2-48be-4870-8cad-c9485b5f152a-kube-api-access-lsd5d\") pod \"crc-debug-hpjrz\" (UID: \"66d350e2-48be-4870-8cad-c9485b5f152a\") " pod="openshift-must-gather-dcwln/crc-debug-hpjrz" Jan 26 11:51:28 crc kubenswrapper[4867]: I0126 11:51:28.959783 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/66d350e2-48be-4870-8cad-c9485b5f152a-host\") pod \"crc-debug-hpjrz\" (UID: \"66d350e2-48be-4870-8cad-c9485b5f152a\") " pod="openshift-must-gather-dcwln/crc-debug-hpjrz" Jan 26 11:51:29 crc kubenswrapper[4867]: I0126 11:51:29.012065 4867 scope.go:117] "RemoveContainer" containerID="f1fe625b39d1529879c682554ce53289b82b7b237eaa28e9c31a1c8bbb8fa574" Jan 26 11:51:29 crc kubenswrapper[4867]: I0126 11:51:29.012141 4867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-dcwln/crc-debug-2zjds" Jan 26 11:51:29 crc kubenswrapper[4867]: I0126 11:51:29.061721 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lsd5d\" (UniqueName: \"kubernetes.io/projected/66d350e2-48be-4870-8cad-c9485b5f152a-kube-api-access-lsd5d\") pod \"crc-debug-hpjrz\" (UID: \"66d350e2-48be-4870-8cad-c9485b5f152a\") " pod="openshift-must-gather-dcwln/crc-debug-hpjrz" Jan 26 11:51:29 crc kubenswrapper[4867]: I0126 11:51:29.062406 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/66d350e2-48be-4870-8cad-c9485b5f152a-host\") pod \"crc-debug-hpjrz\" (UID: \"66d350e2-48be-4870-8cad-c9485b5f152a\") " pod="openshift-must-gather-dcwln/crc-debug-hpjrz" Jan 26 11:51:29 crc kubenswrapper[4867]: I0126 11:51:29.062514 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/66d350e2-48be-4870-8cad-c9485b5f152a-host\") pod \"crc-debug-hpjrz\" (UID: \"66d350e2-48be-4870-8cad-c9485b5f152a\") " pod="openshift-must-gather-dcwln/crc-debug-hpjrz" Jan 26 11:51:29 crc kubenswrapper[4867]: I0126 11:51:29.096131 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lsd5d\" (UniqueName: \"kubernetes.io/projected/66d350e2-48be-4870-8cad-c9485b5f152a-kube-api-access-lsd5d\") pod \"crc-debug-hpjrz\" (UID: \"66d350e2-48be-4870-8cad-c9485b5f152a\") " pod="openshift-must-gather-dcwln/crc-debug-hpjrz" Jan 26 11:51:29 crc kubenswrapper[4867]: I0126 11:51:29.389275 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-dcwln/crc-debug-hpjrz" Jan 26 11:51:29 crc kubenswrapper[4867]: W0126 11:51:29.440426 4867 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod66d350e2_48be_4870_8cad_c9485b5f152a.slice/crio-7468d01aa6abbd21423eebef4a37ae5d0d43c63b1c02e148097ac07c037f4a69 WatchSource:0}: Error finding container 7468d01aa6abbd21423eebef4a37ae5d0d43c63b1c02e148097ac07c037f4a69: Status 404 returned error can't find the container with id 7468d01aa6abbd21423eebef4a37ae5d0d43c63b1c02e148097ac07c037f4a69 Jan 26 11:51:30 crc kubenswrapper[4867]: I0126 11:51:30.020568 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-dcwln/crc-debug-hpjrz" event={"ID":"66d350e2-48be-4870-8cad-c9485b5f152a","Type":"ContainerStarted","Data":"7468d01aa6abbd21423eebef4a37ae5d0d43c63b1c02e148097ac07c037f4a69"} Jan 26 11:51:31 crc kubenswrapper[4867]: I0126 11:51:31.034181 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-dcwln/crc-debug-hpjrz" event={"ID":"66d350e2-48be-4870-8cad-c9485b5f152a","Type":"ContainerStarted","Data":"026201f85ddfcd3f941a89a27d409d3e1c67bdaa8eff1586051f7674f3468033"} Jan 26 11:51:32 crc kubenswrapper[4867]: I0126 11:51:32.045878 4867 generic.go:334] "Generic (PLEG): container finished" podID="66d350e2-48be-4870-8cad-c9485b5f152a" containerID="026201f85ddfcd3f941a89a27d409d3e1c67bdaa8eff1586051f7674f3468033" exitCode=0 Jan 26 11:51:32 crc kubenswrapper[4867]: I0126 11:51:32.045965 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-dcwln/crc-debug-hpjrz" event={"ID":"66d350e2-48be-4870-8cad-c9485b5f152a","Type":"ContainerDied","Data":"026201f85ddfcd3f941a89a27d409d3e1c67bdaa8eff1586051f7674f3468033"} Jan 26 11:51:32 crc kubenswrapper[4867]: I0126 11:51:32.101951 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-dcwln/crc-debug-hpjrz"] 
Jan 26 11:51:32 crc kubenswrapper[4867]: I0126 11:51:32.109449 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-dcwln/crc-debug-hpjrz"] Jan 26 11:51:33 crc kubenswrapper[4867]: I0126 11:51:33.160022 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-dcwln/crc-debug-hpjrz" Jan 26 11:51:33 crc kubenswrapper[4867]: I0126 11:51:33.257254 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/66d350e2-48be-4870-8cad-c9485b5f152a-host\") pod \"66d350e2-48be-4870-8cad-c9485b5f152a\" (UID: \"66d350e2-48be-4870-8cad-c9485b5f152a\") " Jan 26 11:51:33 crc kubenswrapper[4867]: I0126 11:51:33.257620 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lsd5d\" (UniqueName: \"kubernetes.io/projected/66d350e2-48be-4870-8cad-c9485b5f152a-kube-api-access-lsd5d\") pod \"66d350e2-48be-4870-8cad-c9485b5f152a\" (UID: \"66d350e2-48be-4870-8cad-c9485b5f152a\") " Jan 26 11:51:33 crc kubenswrapper[4867]: I0126 11:51:33.258436 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/66d350e2-48be-4870-8cad-c9485b5f152a-host" (OuterVolumeSpecName: "host") pod "66d350e2-48be-4870-8cad-c9485b5f152a" (UID: "66d350e2-48be-4870-8cad-c9485b5f152a"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 11:51:33 crc kubenswrapper[4867]: I0126 11:51:33.266609 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/66d350e2-48be-4870-8cad-c9485b5f152a-kube-api-access-lsd5d" (OuterVolumeSpecName: "kube-api-access-lsd5d") pod "66d350e2-48be-4870-8cad-c9485b5f152a" (UID: "66d350e2-48be-4870-8cad-c9485b5f152a"). InnerVolumeSpecName "kube-api-access-lsd5d". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 11:51:33 crc kubenswrapper[4867]: I0126 11:51:33.359965 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lsd5d\" (UniqueName: \"kubernetes.io/projected/66d350e2-48be-4870-8cad-c9485b5f152a-kube-api-access-lsd5d\") on node \"crc\" DevicePath \"\"" Jan 26 11:51:33 crc kubenswrapper[4867]: I0126 11:51:33.360006 4867 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/66d350e2-48be-4870-8cad-c9485b5f152a-host\") on node \"crc\" DevicePath \"\"" Jan 26 11:51:34 crc kubenswrapper[4867]: I0126 11:51:34.064908 4867 scope.go:117] "RemoveContainer" containerID="026201f85ddfcd3f941a89a27d409d3e1c67bdaa8eff1586051f7674f3468033" Jan 26 11:51:34 crc kubenswrapper[4867]: I0126 11:51:34.064953 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-dcwln/crc-debug-hpjrz" Jan 26 11:51:34 crc kubenswrapper[4867]: I0126 11:51:34.575333 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="66d350e2-48be-4870-8cad-c9485b5f152a" path="/var/lib/kubelet/pods/66d350e2-48be-4870-8cad-c9485b5f152a/volumes" Jan 26 11:51:36 crc kubenswrapper[4867]: I0126 11:51:36.294298 4867 patch_prober.go:28] interesting pod/machine-config-daemon-g6cth container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 11:51:36 crc kubenswrapper[4867]: I0126 11:51:36.294667 4867 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-g6cth" podUID="115cad9f-057f-4e63-b408-8fa7a358a191" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 11:51:48 crc kubenswrapper[4867]: 
I0126 11:51:48.591775 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-7f76cb8bb6-g4zck_340554a1-e56a-4b1b-aff3-d0c0e1ac210d/barbican-api/0.log" Jan 26 11:51:48 crc kubenswrapper[4867]: I0126 11:51:48.843253 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-5fc6c76976-2w9dm_9a534f97-8d45-4418-af77-5e19e2013a0b/barbican-keystone-listener/0.log" Jan 26 11:51:48 crc kubenswrapper[4867]: I0126 11:51:48.845382 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-7f76cb8bb6-g4zck_340554a1-e56a-4b1b-aff3-d0c0e1ac210d/barbican-api-log/0.log" Jan 26 11:51:49 crc kubenswrapper[4867]: I0126 11:51:49.049339 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-6db8644655-m8sn6_f568d082-7794-4f60-b78e-bff0b6b6356f/barbican-worker/0.log" Jan 26 11:51:49 crc kubenswrapper[4867]: I0126 11:51:49.053124 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-5fc6c76976-2w9dm_9a534f97-8d45-4418-af77-5e19e2013a0b/barbican-keystone-listener-log/0.log" Jan 26 11:51:49 crc kubenswrapper[4867]: I0126 11:51:49.103532 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-6db8644655-m8sn6_f568d082-7794-4f60-b78e-bff0b6b6356f/barbican-worker-log/0.log" Jan 26 11:51:49 crc kubenswrapper[4867]: I0126 11:51:49.260117 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_e086a220-6ef2-4a71-8639-f75783c634e6/ceilometer-central-agent/0.log" Jan 26 11:51:49 crc kubenswrapper[4867]: I0126 11:51:49.285545 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_e086a220-6ef2-4a71-8639-f75783c634e6/ceilometer-notification-agent/0.log" Jan 26 11:51:49 crc kubenswrapper[4867]: I0126 11:51:49.292770 4867 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_ceilometer-0_e086a220-6ef2-4a71-8639-f75783c634e6/proxy-httpd/0.log" Jan 26 11:51:49 crc kubenswrapper[4867]: I0126 11:51:49.447417 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_e086a220-6ef2-4a71-8639-f75783c634e6/sg-core/0.log" Jan 26 11:51:49 crc kubenswrapper[4867]: I0126 11:51:49.477015 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_4a9a8906-54d6-49c2-94c7-393167d8db56/cinder-api/0.log" Jan 26 11:51:49 crc kubenswrapper[4867]: I0126 11:51:49.522752 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_4a9a8906-54d6-49c2-94c7-393167d8db56/cinder-api-log/0.log" Jan 26 11:51:49 crc kubenswrapper[4867]: I0126 11:51:49.706896 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_ed0b64b8-8e76-407e-8be8-c6b6cc5d4b75/cinder-scheduler/0.log" Jan 26 11:51:49 crc kubenswrapper[4867]: I0126 11:51:49.711022 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_ed0b64b8-8e76-407e-8be8-c6b6cc5d4b75/probe/0.log" Jan 26 11:51:49 crc kubenswrapper[4867]: I0126 11:51:49.867257 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-cd5cbd7b9-9fj8m_facba8bd-34c0-43a2-a31b-cc7a6ff17ba2/init/0.log" Jan 26 11:51:49 crc kubenswrapper[4867]: I0126 11:51:49.991099 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-cd5cbd7b9-9fj8m_facba8bd-34c0-43a2-a31b-cc7a6ff17ba2/init/0.log" Jan 26 11:51:49 crc kubenswrapper[4867]: I0126 11:51:49.995357 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-cd5cbd7b9-9fj8m_facba8bd-34c0-43a2-a31b-cc7a6ff17ba2/dnsmasq-dns/0.log" Jan 26 11:51:50 crc kubenswrapper[4867]: I0126 11:51:50.086725 4867 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_glance-default-external-api-0_6fc7f66f-7989-42ac-a3c8-cd88b25f9c53/glance-httpd/0.log" Jan 26 11:51:50 crc kubenswrapper[4867]: I0126 11:51:50.224242 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_6fc7f66f-7989-42ac-a3c8-cd88b25f9c53/glance-log/0.log" Jan 26 11:51:50 crc kubenswrapper[4867]: I0126 11:51:50.239661 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_58cc3b2f-c49e-4c16-9a26-342c8b2c8878/glance-httpd/0.log" Jan 26 11:51:50 crc kubenswrapper[4867]: I0126 11:51:50.276564 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_58cc3b2f-c49e-4c16-9a26-342c8b2c8878/glance-log/0.log" Jan 26 11:51:50 crc kubenswrapper[4867]: I0126 11:51:50.407439 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ironic-5f459cfdcb-t5qhs_f114731c-0ed9-4d58-90f0-b670a856adf0/init/0.log" Jan 26 11:51:50 crc kubenswrapper[4867]: I0126 11:51:50.777992 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ironic-5f459cfdcb-t5qhs_f114731c-0ed9-4d58-90f0-b670a856adf0/ironic-api-log/0.log" Jan 26 11:51:50 crc kubenswrapper[4867]: I0126 11:51:50.827186 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ironic-5f459cfdcb-t5qhs_f114731c-0ed9-4d58-90f0-b670a856adf0/init/0.log" Jan 26 11:51:50 crc kubenswrapper[4867]: I0126 11:51:50.893068 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ironic-5f459cfdcb-t5qhs_f114731c-0ed9-4d58-90f0-b670a856adf0/ironic-api/0.log" Jan 26 11:51:50 crc kubenswrapper[4867]: I0126 11:51:50.966913 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ironic-conductor-0_1a985fff-3d59-40fa-9cae-fd0f2cc9de70/init/0.log" Jan 26 11:51:51 crc kubenswrapper[4867]: I0126 11:51:51.140748 4867 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_ironic-conductor-0_1a985fff-3d59-40fa-9cae-fd0f2cc9de70/ironic-python-agent-init/0.log" Jan 26 11:51:51 crc kubenswrapper[4867]: I0126 11:51:51.147747 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ironic-conductor-0_1a985fff-3d59-40fa-9cae-fd0f2cc9de70/init/0.log" Jan 26 11:51:51 crc kubenswrapper[4867]: I0126 11:51:51.192579 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ironic-conductor-0_1a985fff-3d59-40fa-9cae-fd0f2cc9de70/ironic-python-agent-init/0.log" Jan 26 11:51:51 crc kubenswrapper[4867]: I0126 11:51:51.460701 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ironic-conductor-0_1a985fff-3d59-40fa-9cae-fd0f2cc9de70/ironic-python-agent-init/0.log" Jan 26 11:51:51 crc kubenswrapper[4867]: I0126 11:51:51.484868 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ironic-conductor-0_1a985fff-3d59-40fa-9cae-fd0f2cc9de70/init/0.log" Jan 26 11:51:51 crc kubenswrapper[4867]: I0126 11:51:51.854367 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ironic-conductor-0_1a985fff-3d59-40fa-9cae-fd0f2cc9de70/init/0.log" Jan 26 11:51:52 crc kubenswrapper[4867]: I0126 11:51:52.030700 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ironic-conductor-0_1a985fff-3d59-40fa-9cae-fd0f2cc9de70/ironic-python-agent-init/0.log" Jan 26 11:51:52 crc kubenswrapper[4867]: I0126 11:51:52.414741 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ironic-conductor-0_1a985fff-3d59-40fa-9cae-fd0f2cc9de70/pxe-init/0.log" Jan 26 11:51:52 crc kubenswrapper[4867]: I0126 11:51:52.582795 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ironic-conductor-0_1a985fff-3d59-40fa-9cae-fd0f2cc9de70/httpboot/0.log" Jan 26 11:51:52 crc kubenswrapper[4867]: I0126 11:51:52.787042 4867 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_ironic-conductor-0_1a985fff-3d59-40fa-9cae-fd0f2cc9de70/pxe-init/0.log" Jan 26 11:51:52 crc kubenswrapper[4867]: I0126 11:51:52.838018 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ironic-conductor-0_1a985fff-3d59-40fa-9cae-fd0f2cc9de70/ironic-conductor/0.log" Jan 26 11:51:52 crc kubenswrapper[4867]: I0126 11:51:52.981754 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ironic-conductor-0_1a985fff-3d59-40fa-9cae-fd0f2cc9de70/ramdisk-logs/0.log" Jan 26 11:51:53 crc kubenswrapper[4867]: I0126 11:51:53.032696 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ironic-conductor-0_1a985fff-3d59-40fa-9cae-fd0f2cc9de70/pxe-init/0.log" Jan 26 11:51:53 crc kubenswrapper[4867]: I0126 11:51:53.086498 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ironic-db-sync-h7r88_3de6837e-5965-48ce-9967-2d259829ad4a/init/0.log" Jan 26 11:51:53 crc kubenswrapper[4867]: I0126 11:51:53.277762 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ironic-conductor-0_1a985fff-3d59-40fa-9cae-fd0f2cc9de70/pxe-init/0.log" Jan 26 11:51:53 crc kubenswrapper[4867]: I0126 11:51:53.299385 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ironic-db-sync-h7r88_3de6837e-5965-48ce-9967-2d259829ad4a/ironic-db-sync/0.log" Jan 26 11:51:53 crc kubenswrapper[4867]: I0126 11:51:53.339325 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ironic-db-sync-h7r88_3de6837e-5965-48ce-9967-2d259829ad4a/init/0.log" Jan 26 11:51:53 crc kubenswrapper[4867]: I0126 11:51:53.364869 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ironic-inspector-0_6e49ec18-452c-47df-a0c9-ea52cdced830/ironic-python-agent-init/0.log" Jan 26 11:51:53 crc kubenswrapper[4867]: I0126 11:51:53.501825 4867 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_ironic-inspector-0_6e49ec18-452c-47df-a0c9-ea52cdced830/ironic-python-agent-init/0.log" Jan 26 11:51:53 crc kubenswrapper[4867]: I0126 11:51:53.527316 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ironic-inspector-0_6e49ec18-452c-47df-a0c9-ea52cdced830/inspector-pxe-init/0.log" Jan 26 11:51:53 crc kubenswrapper[4867]: I0126 11:51:53.538907 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ironic-inspector-0_6e49ec18-452c-47df-a0c9-ea52cdced830/inspector-pxe-init/0.log" Jan 26 11:51:53 crc kubenswrapper[4867]: I0126 11:51:53.724465 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ironic-inspector-0_6e49ec18-452c-47df-a0c9-ea52cdced830/ironic-python-agent-init/0.log" Jan 26 11:51:53 crc kubenswrapper[4867]: I0126 11:51:53.771925 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ironic-inspector-0_6e49ec18-452c-47df-a0c9-ea52cdced830/ironic-inspector/1.log" Jan 26 11:51:53 crc kubenswrapper[4867]: I0126 11:51:53.791139 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ironic-inspector-0_6e49ec18-452c-47df-a0c9-ea52cdced830/inspector-pxe-init/0.log" Jan 26 11:51:53 crc kubenswrapper[4867]: I0126 11:51:53.793702 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ironic-inspector-0_6e49ec18-452c-47df-a0c9-ea52cdced830/inspector-httpboot/0.log" Jan 26 11:51:53 crc kubenswrapper[4867]: I0126 11:51:53.812351 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ironic-inspector-0_6e49ec18-452c-47df-a0c9-ea52cdced830/ironic-inspector/2.log" Jan 26 11:51:53 crc kubenswrapper[4867]: I0126 11:51:53.936535 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ironic-inspector-0_6e49ec18-452c-47df-a0c9-ea52cdced830/ironic-inspector-httpd/1.log" Jan 26 11:51:53 crc kubenswrapper[4867]: I0126 11:51:53.961444 4867 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_ironic-inspector-0_6e49ec18-452c-47df-a0c9-ea52cdced830/ironic-inspector-httpd/0.log" Jan 26 11:51:53 crc kubenswrapper[4867]: I0126 11:51:53.991482 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ironic-inspector-db-sync-256sm_586082ca-8462-421f-940d-25a9e1a9e945/ironic-inspector-db-sync/0.log" Jan 26 11:51:54 crc kubenswrapper[4867]: I0126 11:51:54.014262 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ironic-inspector-0_6e49ec18-452c-47df-a0c9-ea52cdced830/ramdisk-logs/0.log" Jan 26 11:51:54 crc kubenswrapper[4867]: I0126 11:51:54.183784 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ironic-neutron-agent-795fb7c76b-9ndwh_a2167905-2856-4125-81fd-a2430fe558f9/ironic-neutron-agent/3.log" Jan 26 11:51:54 crc kubenswrapper[4867]: I0126 11:51:54.188282 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ironic-neutron-agent-795fb7c76b-9ndwh_a2167905-2856-4125-81fd-a2430fe558f9/ironic-neutron-agent/4.log" Jan 26 11:51:54 crc kubenswrapper[4867]: I0126 11:51:54.420431 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_kube-state-metrics-0_cd1d027e-98b3-4c45-981e-a60ad4cb8748/kube-state-metrics/0.log" Jan 26 11:51:54 crc kubenswrapper[4867]: I0126 11:51:54.462302 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-6f94776d6f-8b6q4_6fa27242-a46c-4987-9e2f-1f9d48b370e7/keystone-api/0.log" Jan 26 11:51:54 crc kubenswrapper[4867]: I0126 11:51:54.698607 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-647b685f9-49zj6_ebfa4a8c-62e9-489e-a7f6-d3f2ce2d05d9/neutron-httpd/0.log" Jan 26 11:51:54 crc kubenswrapper[4867]: I0126 11:51:54.750994 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-647b685f9-49zj6_ebfa4a8c-62e9-489e-a7f6-d3f2ce2d05d9/neutron-api/0.log" Jan 26 11:51:54 crc kubenswrapper[4867]: I0126 11:51:54.994337 4867 
log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_738787a7-6f5f-48f1-8c43-ce02e88eb732/nova-api-log/0.log" Jan 26 11:51:55 crc kubenswrapper[4867]: I0126 11:51:55.021555 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_738787a7-6f5f-48f1-8c43-ce02e88eb732/nova-api-api/0.log" Jan 26 11:51:55 crc kubenswrapper[4867]: I0126 11:51:55.085765 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell0-conductor-0_8ad13a23-f9ee-40f2-aa88-3940ced23279/nova-cell0-conductor-conductor/0.log" Jan 26 11:51:55 crc kubenswrapper[4867]: I0126 11:51:55.310646 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-conductor-0_20519d4e-b9eb-43b2-b2fb-ac40a9bea288/nova-cell1-conductor-conductor/0.log" Jan 26 11:51:55 crc kubenswrapper[4867]: I0126 11:51:55.389884 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-novncproxy-0_dc6a54c4-4229-4157-a5a0-a2089d6a7131/nova-cell1-novncproxy-novncproxy/0.log" Jan 26 11:51:55 crc kubenswrapper[4867]: I0126 11:51:55.701162 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_5a86e7e7-648c-4e32-8f5d-3fd90c24f1b7/nova-metadata-log/0.log" Jan 26 11:51:55 crc kubenswrapper[4867]: I0126 11:51:55.857070 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-scheduler-0_b092f3d9-f7f5-49d0-98d6-f0e7aff2d64a/nova-scheduler-scheduler/0.log" Jan 26 11:51:55 crc kubenswrapper[4867]: I0126 11:51:55.971506 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_5a86e7e7-648c-4e32-8f5d-3fd90c24f1b7/nova-metadata-metadata/0.log" Jan 26 11:51:55 crc kubenswrapper[4867]: I0126 11:51:55.975179 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_fd3b4566-15b8-4c50-bc5e-76c5a6907311/mysql-bootstrap/0.log" Jan 26 11:51:56 crc kubenswrapper[4867]: I0126 11:51:56.160719 4867 
log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_fd3b4566-15b8-4c50-bc5e-76c5a6907311/galera/0.log" Jan 26 11:51:56 crc kubenswrapper[4867]: I0126 11:51:56.184654 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_fd3b4566-15b8-4c50-bc5e-76c5a6907311/mysql-bootstrap/0.log" Jan 26 11:51:56 crc kubenswrapper[4867]: I0126 11:51:56.240973 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_9305cd67-bbb5-45e9-ab35-6a34a717dff8/mysql-bootstrap/0.log" Jan 26 11:51:56 crc kubenswrapper[4867]: I0126 11:51:56.460483 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_9305cd67-bbb5-45e9-ab35-6a34a717dff8/mysql-bootstrap/0.log" Jan 26 11:51:56 crc kubenswrapper[4867]: I0126 11:51:56.495639 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstackclient_0dba3b09-195d-416a-b4af-7f252c8abd0d/openstackclient/0.log" Jan 26 11:51:56 crc kubenswrapper[4867]: I0126 11:51:56.531885 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_9305cd67-bbb5-45e9-ab35-6a34a717dff8/galera/0.log" Jan 26 11:51:56 crc kubenswrapper[4867]: I0126 11:51:56.737457 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-hbpxr_db65f713-855b-4ca7-b989-ebde989474ce/ovn-controller/0.log" Jan 26 11:51:56 crc kubenswrapper[4867]: I0126 11:51:56.774562 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-metrics-wsrcd_515623f1-c4bb-4522-ab0d-00138e1d0d0d/openstack-network-exporter/0.log" Jan 26 11:51:56 crc kubenswrapper[4867]: I0126 11:51:56.945194 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-4f5h4_211a1bec-4387-4bbf-a034-56dd9396676d/ovsdb-server-init/0.log" Jan 26 11:51:57 crc kubenswrapper[4867]: I0126 11:51:57.173714 4867 log.go:25] "Finished parsing log 
file" path="/var/log/pods/openstack_ovn-controller-ovs-4f5h4_211a1bec-4387-4bbf-a034-56dd9396676d/ovsdb-server-init/0.log" Jan 26 11:51:57 crc kubenswrapper[4867]: I0126 11:51:57.221591 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-4f5h4_211a1bec-4387-4bbf-a034-56dd9396676d/ovsdb-server/0.log" Jan 26 11:51:57 crc kubenswrapper[4867]: I0126 11:51:57.226907 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-4f5h4_211a1bec-4387-4bbf-a034-56dd9396676d/ovs-vswitchd/0.log" Jan 26 11:51:57 crc kubenswrapper[4867]: I0126 11:51:57.393463 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_2b5a7e41-130f-46be-8c94-a5ecaf39bb2c/openstack-network-exporter/0.log" Jan 26 11:51:57 crc kubenswrapper[4867]: I0126 11:51:57.505996 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_28f25dc5-093b-4b0a-b1fa-290241e9bccc/openstack-network-exporter/0.log" Jan 26 11:51:57 crc kubenswrapper[4867]: I0126 11:51:57.508461 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_2b5a7e41-130f-46be-8c94-a5ecaf39bb2c/ovn-northd/0.log" Jan 26 11:51:57 crc kubenswrapper[4867]: I0126 11:51:57.723204 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_28f25dc5-093b-4b0a-b1fa-290241e9bccc/ovsdbserver-nb/0.log" Jan 26 11:51:57 crc kubenswrapper[4867]: I0126 11:51:57.768891 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_24fccd97-ac62-4d86-971f-59e4fc780888/openstack-network-exporter/0.log" Jan 26 11:51:57 crc kubenswrapper[4867]: I0126 11:51:57.798623 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_24fccd97-ac62-4d86-971f-59e4fc780888/ovsdbserver-sb/0.log" Jan 26 11:51:57 crc kubenswrapper[4867]: I0126 11:51:57.932656 4867 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_placement-547bc4f4d-xs5kd_3fe54576-9f68-4335-9449-16f7af831e94/placement-api/0.log" Jan 26 11:51:58 crc kubenswrapper[4867]: I0126 11:51:58.029135 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-547bc4f4d-xs5kd_3fe54576-9f68-4335-9449-16f7af831e94/placement-log/0.log" Jan 26 11:51:58 crc kubenswrapper[4867]: I0126 11:51:58.179632 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_abd304f6-b024-40c9-86cb-94c9e9620ec0/setup-container/0.log" Jan 26 11:51:58 crc kubenswrapper[4867]: I0126 11:51:58.307693 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_abd304f6-b024-40c9-86cb-94c9e9620ec0/setup-container/0.log" Jan 26 11:51:58 crc kubenswrapper[4867]: I0126 11:51:58.321121 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_abd304f6-b024-40c9-86cb-94c9e9620ec0/rabbitmq/0.log" Jan 26 11:51:58 crc kubenswrapper[4867]: I0126 11:51:58.480390 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_d0d380ac-2d87-4632-a7e3-d201296043f4/setup-container/0.log" Jan 26 11:51:58 crc kubenswrapper[4867]: I0126 11:51:58.779052 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_d0d380ac-2d87-4632-a7e3-d201296043f4/setup-container/0.log" Jan 26 11:51:58 crc kubenswrapper[4867]: I0126 11:51:58.790694 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_d0d380ac-2d87-4632-a7e3-d201296043f4/rabbitmq/0.log" Jan 26 11:51:58 crc kubenswrapper[4867]: I0126 11:51:58.824278 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-5668f68b6c-7674j_39829bfc-df9a-4123-a069-f99e3032615d/proxy-httpd/0.log" Jan 26 11:51:58 crc kubenswrapper[4867]: I0126 11:51:58.970964 4867 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_swift-ring-rebalance-s8jqh_c491453c-4aa8-458a-8ee3-42475e7678f4/swift-ring-rebalance/0.log" Jan 26 11:51:59 crc kubenswrapper[4867]: I0126 11:51:59.017489 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-5668f68b6c-7674j_39829bfc-df9a-4123-a069-f99e3032615d/proxy-server/0.log" Jan 26 11:51:59 crc kubenswrapper[4867]: I0126 11:51:59.214726 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_3f128154-6619-4556-be1b-73e44d4f7df1/account-auditor/0.log" Jan 26 11:51:59 crc kubenswrapper[4867]: I0126 11:51:59.216682 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_3f128154-6619-4556-be1b-73e44d4f7df1/account-reaper/0.log" Jan 26 11:51:59 crc kubenswrapper[4867]: I0126 11:51:59.301667 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_3f128154-6619-4556-be1b-73e44d4f7df1/account-replicator/0.log" Jan 26 11:51:59 crc kubenswrapper[4867]: I0126 11:51:59.352033 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_3f128154-6619-4556-be1b-73e44d4f7df1/account-server/0.log" Jan 26 11:51:59 crc kubenswrapper[4867]: I0126 11:51:59.442982 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_3f128154-6619-4556-be1b-73e44d4f7df1/container-auditor/0.log" Jan 26 11:51:59 crc kubenswrapper[4867]: I0126 11:51:59.488248 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_3f128154-6619-4556-be1b-73e44d4f7df1/container-replicator/0.log" Jan 26 11:51:59 crc kubenswrapper[4867]: I0126 11:51:59.541663 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_3f128154-6619-4556-be1b-73e44d4f7df1/container-server/0.log" Jan 26 11:51:59 crc kubenswrapper[4867]: I0126 11:51:59.605927 4867 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_swift-storage-0_3f128154-6619-4556-be1b-73e44d4f7df1/container-updater/0.log" Jan 26 11:51:59 crc kubenswrapper[4867]: I0126 11:51:59.703150 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_3f128154-6619-4556-be1b-73e44d4f7df1/object-auditor/0.log" Jan 26 11:51:59 crc kubenswrapper[4867]: I0126 11:51:59.716803 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_3f128154-6619-4556-be1b-73e44d4f7df1/object-expirer/0.log" Jan 26 11:51:59 crc kubenswrapper[4867]: I0126 11:51:59.755139 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_3f128154-6619-4556-be1b-73e44d4f7df1/object-replicator/0.log" Jan 26 11:51:59 crc kubenswrapper[4867]: I0126 11:51:59.825326 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_3f128154-6619-4556-be1b-73e44d4f7df1/object-server/0.log" Jan 26 11:51:59 crc kubenswrapper[4867]: I0126 11:51:59.914594 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_3f128154-6619-4556-be1b-73e44d4f7df1/object-updater/0.log" Jan 26 11:51:59 crc kubenswrapper[4867]: I0126 11:51:59.962458 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_3f128154-6619-4556-be1b-73e44d4f7df1/swift-recon-cron/0.log" Jan 26 11:51:59 crc kubenswrapper[4867]: I0126 11:51:59.986845 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_3f128154-6619-4556-be1b-73e44d4f7df1/rsync/0.log" Jan 26 11:52:04 crc kubenswrapper[4867]: I0126 11:52:04.384588 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_memcached-0_eb361900-eda0-4cb4-8838-4267b465353b/memcached/0.log" Jan 26 11:52:06 crc kubenswrapper[4867]: I0126 11:52:06.294299 4867 patch_prober.go:28] interesting pod/machine-config-daemon-g6cth container/machine-config-daemon namespace/openshift-machine-config-operator: 
Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 11:52:06 crc kubenswrapper[4867]: I0126 11:52:06.294617 4867 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-g6cth" podUID="115cad9f-057f-4e63-b408-8fa7a358a191" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 11:52:06 crc kubenswrapper[4867]: I0126 11:52:06.294668 4867 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-g6cth" Jan 26 11:52:06 crc kubenswrapper[4867]: I0126 11:52:06.295712 4867 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"da5f1f9c98d3acd70884c452982ea6128d73027a10995d07ccd0b36f768b7132"} pod="openshift-machine-config-operator/machine-config-daemon-g6cth" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 26 11:52:06 crc kubenswrapper[4867]: I0126 11:52:06.295767 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-g6cth" podUID="115cad9f-057f-4e63-b408-8fa7a358a191" containerName="machine-config-daemon" containerID="cri-o://da5f1f9c98d3acd70884c452982ea6128d73027a10995d07ccd0b36f768b7132" gracePeriod=600 Jan 26 11:52:07 crc kubenswrapper[4867]: I0126 11:52:07.363724 4867 generic.go:334] "Generic (PLEG): container finished" podID="115cad9f-057f-4e63-b408-8fa7a358a191" containerID="da5f1f9c98d3acd70884c452982ea6128d73027a10995d07ccd0b36f768b7132" exitCode=0 Jan 26 11:52:07 crc kubenswrapper[4867]: I0126 11:52:07.363789 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-machine-config-operator/machine-config-daemon-g6cth" event={"ID":"115cad9f-057f-4e63-b408-8fa7a358a191","Type":"ContainerDied","Data":"da5f1f9c98d3acd70884c452982ea6128d73027a10995d07ccd0b36f768b7132"} Jan 26 11:52:07 crc kubenswrapper[4867]: I0126 11:52:07.364292 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-g6cth" event={"ID":"115cad9f-057f-4e63-b408-8fa7a358a191","Type":"ContainerStarted","Data":"b468a5733b70f2daff8e0e41bb36084cdf82f55dbb0bac51d0d68f1ce3f30b64"} Jan 26 11:52:07 crc kubenswrapper[4867]: I0126 11:52:07.364320 4867 scope.go:117] "RemoveContainer" containerID="7857dfd24884ee7b3544dfd9117125dc690c467738f6ed4ca3bec8ebae8c755a" Jan 26 11:52:25 crc kubenswrapper[4867]: I0126 11:52:25.326690 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_72df59a55f851b4970af82627fba49d1b5f2202043ac4017c1bc725d0cktqsd_81541a17-1078-4ebe-b702-4d95a4ae8771/util/0.log" Jan 26 11:52:25 crc kubenswrapper[4867]: I0126 11:52:25.527747 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_72df59a55f851b4970af82627fba49d1b5f2202043ac4017c1bc725d0cktqsd_81541a17-1078-4ebe-b702-4d95a4ae8771/pull/0.log" Jan 26 11:52:25 crc kubenswrapper[4867]: I0126 11:52:25.542419 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_72df59a55f851b4970af82627fba49d1b5f2202043ac4017c1bc725d0cktqsd_81541a17-1078-4ebe-b702-4d95a4ae8771/util/0.log" Jan 26 11:52:25 crc kubenswrapper[4867]: I0126 11:52:25.554994 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_72df59a55f851b4970af82627fba49d1b5f2202043ac4017c1bc725d0cktqsd_81541a17-1078-4ebe-b702-4d95a4ae8771/pull/0.log" Jan 26 11:52:25 crc kubenswrapper[4867]: I0126 11:52:25.730834 4867 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_72df59a55f851b4970af82627fba49d1b5f2202043ac4017c1bc725d0cktqsd_81541a17-1078-4ebe-b702-4d95a4ae8771/util/0.log" Jan 26 11:52:25 crc kubenswrapper[4867]: I0126 11:52:25.786732 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_72df59a55f851b4970af82627fba49d1b5f2202043ac4017c1bc725d0cktqsd_81541a17-1078-4ebe-b702-4d95a4ae8771/pull/0.log" Jan 26 11:52:25 crc kubenswrapper[4867]: I0126 11:52:25.879816 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_72df59a55f851b4970af82627fba49d1b5f2202043ac4017c1bc725d0cktqsd_81541a17-1078-4ebe-b702-4d95a4ae8771/extract/0.log" Jan 26 11:52:26 crc kubenswrapper[4867]: I0126 11:52:26.290654 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-b45d7bf98-8w8hc_b1c6af74-51a5-45bb-afed-9b8b19a5c7df/manager/0.log" Jan 26 11:52:26 crc kubenswrapper[4867]: I0126 11:52:26.742137 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-594c8c9d5d-gh4fm_073c6f18-4275-4233-8308-39307e2cc0c7/manager/0.log" Jan 26 11:52:26 crc kubenswrapper[4867]: I0126 11:52:26.990780 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-7478f7dbf9-ccp9p_10ae2757-3e84-4ad1-8459-fca684db2964/manager/0.log" Jan 26 11:52:27 crc kubenswrapper[4867]: I0126 11:52:27.037901 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-78fdd796fd-gthnl_4f33548d-3a14-41f4-8447-feb86b7cf366/manager/0.log" Jan 26 11:52:27 crc kubenswrapper[4867]: I0126 11:52:27.057625 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-7f86f8796f-rgg4g_34c3c36b-d905-4349-8909-bd15951aca68/manager/0.log" Jan 26 11:52:27 crc kubenswrapper[4867]: I0126 11:52:27.166216 
4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-77d5c5b54f-pgqvv_5402225a-cbc7-4b7c-8036-9b8159baee31/manager/0.log" Jan 26 11:52:27 crc kubenswrapper[4867]: I0126 11:52:27.493827 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-598d88d885-fjpln_242c7502-97f2-4ac9-96ba-17b04f96a5b5/manager/0.log" Jan 26 11:52:27 crc kubenswrapper[4867]: I0126 11:52:27.592351 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-758868c854-chnbm_1dce245d-cfd7-440a-9797-2e8c05641673/manager/0.log" Jan 26 11:52:27 crc kubenswrapper[4867]: I0126 11:52:27.788012 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-b8b6d4659-tzb4g_9da13f82-2fca-4922-8b27-b11d702897ff/manager/0.log" Jan 26 11:52:27 crc kubenswrapper[4867]: I0126 11:52:27.935927 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-78c6999f6f-5s6fg_3eb62ea0-8291-49ec-aa8d-cb40ba93ecc3/manager/0.log" Jan 26 11:52:28 crc kubenswrapper[4867]: I0126 11:52:28.038472 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-6b9fb5fdcb-khq8w_2034ae77-372d-473a-b038-83ee4c3720c0/manager/0.log" Jan 26 11:52:28 crc kubenswrapper[4867]: I0126 11:52:28.171665 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-78d58447c5-wz989_c9a978c7-9efb-43dc-830c-31020be6121a/manager/0.log" Jan 26 11:52:28 crc kubenswrapper[4867]: I0126 11:52:28.375533 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-7bdb645866-v4pfk_de2f9a68-7384-47b5-a16d-da28e04440de/manager/0.log" Jan 26 11:52:28 crc kubenswrapper[4867]: I0126 
11:52:28.415945 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-5f4cd88d46-z7djp_99737677-080c-4f1a-aa91-e5162fe5f25d/manager/0.log" Jan 26 11:52:28 crc kubenswrapper[4867]: I0126 11:52:28.592133 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-6b68b8b8549zcg2_b2b3db26-bd1e-4178-ad15-3fb849d16a6c/manager/0.log" Jan 26 11:52:28 crc kubenswrapper[4867]: I0126 11:52:28.764712 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-init-74894dff96-wh5tx_3392fcb6-70d9-46f0-954b-81e2cee79a72/operator/0.log" Jan 26 11:52:29 crc kubenswrapper[4867]: I0126 11:52:29.026319 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-index-8swqj_c26c3c2d-f71f-4cef-ab83-6f69da85606a/registry-server/0.log" Jan 26 11:52:29 crc kubenswrapper[4867]: I0126 11:52:29.254578 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-6f75f45d54-rsv5q_bb8ed5d8-1a97-4cc9-bf29-99b29c6a1975/manager/0.log" Jan 26 11:52:29 crc kubenswrapper[4867]: I0126 11:52:29.301063 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-79d5ccc684-jjlnx_829c6c7e-cc19-4f6d-a350-dea6f26f3436/manager/0.log" Jan 26 11:52:29 crc kubenswrapper[4867]: I0126 11:52:29.435359 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-668c99d594-lcn9l_ccccb13a-d387-4515-83c6-ea24a070a12e/operator/0.log" Jan 26 11:52:29 crc kubenswrapper[4867]: I0126 11:52:29.514914 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-547cbdb99f-r7pf7_ee79b4ff-ed5f-4660-9d36-2fd0c1840f84/manager/0.log" Jan 26 11:52:29 crc 
kubenswrapper[4867]: I0126 11:52:29.653692 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-7d65646bb4-6hkx8_dc30069e-52ed-46a5-9dc9-4558c856149e/manager/0.log" Jan 26 11:52:29 crc kubenswrapper[4867]: I0126 11:52:29.719349 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-85cd9769bb-c7klk_10f19670-4fbf-42ee-b54c-5317af0b0c00/manager/0.log" Jan 26 11:52:29 crc kubenswrapper[4867]: I0126 11:52:29.739413 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-69797bbcbd-n6zwx_4009a85d-3728-420e-b7db-70f8b41587ff/manager/0.log" Jan 26 11:52:29 crc kubenswrapper[4867]: I0126 11:52:29.913842 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-564965969-df52v_799c2d45-a054-4971-a87e-ad3b620cb2c5/manager/0.log" Jan 26 11:52:48 crc kubenswrapper[4867]: I0126 11:52:48.105684 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-78cbb6b69f-6vjzt_702e97d5-258a-4ec8-bc8f-cc700c16f813/control-plane-machine-set-operator/0.log" Jan 26 11:52:48 crc kubenswrapper[4867]: I0126 11:52:48.283430 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-pb5rg_b207fdfd-306c-4494-8c1f-560dd155cd7a/machine-api-operator/0.log" Jan 26 11:52:48 crc kubenswrapper[4867]: I0126 11:52:48.336820 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-pb5rg_b207fdfd-306c-4494-8c1f-560dd155cd7a/kube-rbac-proxy/0.log" Jan 26 11:53:01 crc kubenswrapper[4867]: I0126 11:53:01.639104 4867 log.go:25] "Finished parsing log file" 
path="/var/log/pods/cert-manager_cert-manager-858654f9db-2k86r_a1f3bf88-009a-4dc4-9e17-c3ab0ae08c6a/cert-manager-controller/0.log" Jan 26 11:53:01 crc kubenswrapper[4867]: I0126 11:53:01.820949 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-cf98fcc89-tv8pv_5f8ac213-4e48-43fd-9cd3-47c1cf8102f2/cert-manager-cainjector/0.log" Jan 26 11:53:01 crc kubenswrapper[4867]: I0126 11:53:01.966319 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-687f57d79b-rptrs_71abfae8-23ae-4ab8-9840-8c34abcbac6a/cert-manager-webhook/0.log" Jan 26 11:53:15 crc kubenswrapper[4867]: I0126 11:53:15.426695 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-console-plugin-7754f76f8b-9mhpj_5496960a-d548-45d1-b1af-46a2019c8258/nmstate-console-plugin/0.log" Jan 26 11:53:15 crc kubenswrapper[4867]: I0126 11:53:15.596514 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-handler-jhqkb_b1ffa812-b614-4e1d-a243-bea92b55da60/nmstate-handler/0.log" Jan 26 11:53:15 crc kubenswrapper[4867]: I0126 11:53:15.662872 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-54757c584b-9jgvk_2d4cf215-bd64-4e38-8e9b-ea2b90e36137/nmstate-metrics/0.log" Jan 26 11:53:15 crc kubenswrapper[4867]: I0126 11:53:15.703950 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-54757c584b-9jgvk_2d4cf215-bd64-4e38-8e9b-ea2b90e36137/kube-rbac-proxy/0.log" Jan 26 11:53:15 crc kubenswrapper[4867]: I0126 11:53:15.870993 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-operator-646758c888-wqlhb_5d46639d-9922-4557-a7f2-d40917695fef/nmstate-operator/0.log" Jan 26 11:53:15 crc kubenswrapper[4867]: I0126 11:53:15.959349 4867 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-nmstate_nmstate-webhook-8474b5b9d8-zttkf_72e3b4aa-81dd-4ae0-aa28-35c7092e98fd/nmstate-webhook/0.log" Jan 26 11:53:45 crc kubenswrapper[4867]: I0126 11:53:45.619012 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6968d8fdc4-496nf_6e82409c-e6fc-4a6b-964f-95fee3ed959d/kube-rbac-proxy/0.log" Jan 26 11:53:45 crc kubenswrapper[4867]: I0126 11:53:45.808111 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6968d8fdc4-496nf_6e82409c-e6fc-4a6b-964f-95fee3ed959d/controller/0.log" Jan 26 11:53:45 crc kubenswrapper[4867]: I0126 11:53:45.925643 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-fdvhb_a5badbe6-91c6-424f-b422-df4fe4761e26/cp-frr-files/0.log" Jan 26 11:53:46 crc kubenswrapper[4867]: I0126 11:53:46.003782 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-fdvhb_a5badbe6-91c6-424f-b422-df4fe4761e26/cp-frr-files/0.log" Jan 26 11:53:46 crc kubenswrapper[4867]: I0126 11:53:46.027424 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-fdvhb_a5badbe6-91c6-424f-b422-df4fe4761e26/cp-reloader/0.log" Jan 26 11:53:46 crc kubenswrapper[4867]: I0126 11:53:46.062588 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-fdvhb_a5badbe6-91c6-424f-b422-df4fe4761e26/cp-metrics/0.log" Jan 26 11:53:46 crc kubenswrapper[4867]: I0126 11:53:46.159490 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-fdvhb_a5badbe6-91c6-424f-b422-df4fe4761e26/cp-reloader/0.log" Jan 26 11:53:46 crc kubenswrapper[4867]: I0126 11:53:46.317201 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-fdvhb_a5badbe6-91c6-424f-b422-df4fe4761e26/cp-reloader/0.log" Jan 26 11:53:46 crc kubenswrapper[4867]: I0126 11:53:46.331047 4867 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_frr-k8s-fdvhb_a5badbe6-91c6-424f-b422-df4fe4761e26/cp-metrics/0.log" Jan 26 11:53:46 crc kubenswrapper[4867]: I0126 11:53:46.342805 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-fdvhb_a5badbe6-91c6-424f-b422-df4fe4761e26/cp-frr-files/0.log" Jan 26 11:53:46 crc kubenswrapper[4867]: I0126 11:53:46.367566 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-fdvhb_a5badbe6-91c6-424f-b422-df4fe4761e26/cp-metrics/0.log" Jan 26 11:53:46 crc kubenswrapper[4867]: I0126 11:53:46.539393 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-fdvhb_a5badbe6-91c6-424f-b422-df4fe4761e26/cp-reloader/0.log" Jan 26 11:53:46 crc kubenswrapper[4867]: I0126 11:53:46.552136 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-fdvhb_a5badbe6-91c6-424f-b422-df4fe4761e26/cp-frr-files/0.log" Jan 26 11:53:46 crc kubenswrapper[4867]: I0126 11:53:46.554598 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-fdvhb_a5badbe6-91c6-424f-b422-df4fe4761e26/cp-metrics/0.log" Jan 26 11:53:46 crc kubenswrapper[4867]: I0126 11:53:46.574434 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-fdvhb_a5badbe6-91c6-424f-b422-df4fe4761e26/controller/0.log" Jan 26 11:53:46 crc kubenswrapper[4867]: I0126 11:53:46.741661 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-fdvhb_a5badbe6-91c6-424f-b422-df4fe4761e26/kube-rbac-proxy-frr/0.log" Jan 26 11:53:46 crc kubenswrapper[4867]: I0126 11:53:46.750356 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-fdvhb_a5badbe6-91c6-424f-b422-df4fe4761e26/kube-rbac-proxy/0.log" Jan 26 11:53:46 crc kubenswrapper[4867]: I0126 11:53:46.797521 4867 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_frr-k8s-fdvhb_a5badbe6-91c6-424f-b422-df4fe4761e26/frr-metrics/0.log" Jan 26 11:53:46 crc kubenswrapper[4867]: I0126 11:53:46.998640 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-fdvhb_a5badbe6-91c6-424f-b422-df4fe4761e26/reloader/0.log" Jan 26 11:53:47 crc kubenswrapper[4867]: I0126 11:53:47.031096 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-webhook-server-7df86c4f6c-mtgxn_7d39a9a1-98f9-4404-a415-867570383af9/frr-k8s-webhook-server/0.log" Jan 26 11:53:47 crc kubenswrapper[4867]: I0126 11:53:47.238215 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-b6879bdfc-xwrhn_e6c18bce-ada3-4e21-8a80-fa9bc4fa01f4/manager/0.log" Jan 26 11:53:47 crc kubenswrapper[4867]: I0126 11:53:47.396453 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-5cbc548b4-c9cg5_0898e985-06ad-4cde-b358-75c0e395d72d/webhook-server/0.log" Jan 26 11:53:47 crc kubenswrapper[4867]: I0126 11:53:47.492582 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-xzzx4_29fc757d-2542-48c5-bea3-05ff023baa05/kube-rbac-proxy/0.log" Jan 26 11:53:47 crc kubenswrapper[4867]: I0126 11:53:47.741130 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-fdvhb_a5badbe6-91c6-424f-b422-df4fe4761e26/frr/0.log" Jan 26 11:53:47 crc kubenswrapper[4867]: I0126 11:53:47.923063 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-xzzx4_29fc757d-2542-48c5-bea3-05ff023baa05/speaker/0.log" Jan 26 11:53:53 crc kubenswrapper[4867]: I0126 11:53:53.508061 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-x6wlc"] Jan 26 11:53:53 crc kubenswrapper[4867]: E0126 11:53:53.509158 4867 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="66d350e2-48be-4870-8cad-c9485b5f152a" containerName="container-00" Jan 26 11:53:53 crc kubenswrapper[4867]: I0126 11:53:53.509176 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="66d350e2-48be-4870-8cad-c9485b5f152a" containerName="container-00" Jan 26 11:53:53 crc kubenswrapper[4867]: I0126 11:53:53.509491 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="66d350e2-48be-4870-8cad-c9485b5f152a" containerName="container-00" Jan 26 11:53:53 crc kubenswrapper[4867]: I0126 11:53:53.511282 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-x6wlc" Jan 26 11:53:53 crc kubenswrapper[4867]: I0126 11:53:53.554675 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-x6wlc"] Jan 26 11:53:53 crc kubenswrapper[4867]: I0126 11:53:53.591596 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f4391e0f-22f1-48fc-b1d4-e9faf7c8a454-utilities\") pod \"community-operators-x6wlc\" (UID: \"f4391e0f-22f1-48fc-b1d4-e9faf7c8a454\") " pod="openshift-marketplace/community-operators-x6wlc" Jan 26 11:53:53 crc kubenswrapper[4867]: I0126 11:53:53.591983 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f4391e0f-22f1-48fc-b1d4-e9faf7c8a454-catalog-content\") pod \"community-operators-x6wlc\" (UID: \"f4391e0f-22f1-48fc-b1d4-e9faf7c8a454\") " pod="openshift-marketplace/community-operators-x6wlc" Jan 26 11:53:53 crc kubenswrapper[4867]: I0126 11:53:53.592050 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nx2cd\" (UniqueName: \"kubernetes.io/projected/f4391e0f-22f1-48fc-b1d4-e9faf7c8a454-kube-api-access-nx2cd\") pod \"community-operators-x6wlc\" (UID: 
\"f4391e0f-22f1-48fc-b1d4-e9faf7c8a454\") " pod="openshift-marketplace/community-operators-x6wlc" Jan 26 11:53:53 crc kubenswrapper[4867]: I0126 11:53:53.693632 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nx2cd\" (UniqueName: \"kubernetes.io/projected/f4391e0f-22f1-48fc-b1d4-e9faf7c8a454-kube-api-access-nx2cd\") pod \"community-operators-x6wlc\" (UID: \"f4391e0f-22f1-48fc-b1d4-e9faf7c8a454\") " pod="openshift-marketplace/community-operators-x6wlc" Jan 26 11:53:53 crc kubenswrapper[4867]: I0126 11:53:53.693714 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f4391e0f-22f1-48fc-b1d4-e9faf7c8a454-utilities\") pod \"community-operators-x6wlc\" (UID: \"f4391e0f-22f1-48fc-b1d4-e9faf7c8a454\") " pod="openshift-marketplace/community-operators-x6wlc" Jan 26 11:53:53 crc kubenswrapper[4867]: I0126 11:53:53.693934 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f4391e0f-22f1-48fc-b1d4-e9faf7c8a454-catalog-content\") pod \"community-operators-x6wlc\" (UID: \"f4391e0f-22f1-48fc-b1d4-e9faf7c8a454\") " pod="openshift-marketplace/community-operators-x6wlc" Jan 26 11:53:53 crc kubenswrapper[4867]: I0126 11:53:53.694324 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f4391e0f-22f1-48fc-b1d4-e9faf7c8a454-utilities\") pod \"community-operators-x6wlc\" (UID: \"f4391e0f-22f1-48fc-b1d4-e9faf7c8a454\") " pod="openshift-marketplace/community-operators-x6wlc" Jan 26 11:53:53 crc kubenswrapper[4867]: I0126 11:53:53.694355 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f4391e0f-22f1-48fc-b1d4-e9faf7c8a454-catalog-content\") pod \"community-operators-x6wlc\" (UID: \"f4391e0f-22f1-48fc-b1d4-e9faf7c8a454\") 
" pod="openshift-marketplace/community-operators-x6wlc" Jan 26 11:53:53 crc kubenswrapper[4867]: I0126 11:53:53.710635 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-htp5h"] Jan 26 11:53:53 crc kubenswrapper[4867]: I0126 11:53:53.715021 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-htp5h" Jan 26 11:53:53 crc kubenswrapper[4867]: I0126 11:53:53.757171 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-htp5h"] Jan 26 11:53:53 crc kubenswrapper[4867]: I0126 11:53:53.765055 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nx2cd\" (UniqueName: \"kubernetes.io/projected/f4391e0f-22f1-48fc-b1d4-e9faf7c8a454-kube-api-access-nx2cd\") pod \"community-operators-x6wlc\" (UID: \"f4391e0f-22f1-48fc-b1d4-e9faf7c8a454\") " pod="openshift-marketplace/community-operators-x6wlc" Jan 26 11:53:53 crc kubenswrapper[4867]: I0126 11:53:53.795331 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pdx8d\" (UniqueName: \"kubernetes.io/projected/809aba51-b409-4d27-a41d-b3bb113ffafd-kube-api-access-pdx8d\") pod \"certified-operators-htp5h\" (UID: \"809aba51-b409-4d27-a41d-b3bb113ffafd\") " pod="openshift-marketplace/certified-operators-htp5h" Jan 26 11:53:53 crc kubenswrapper[4867]: I0126 11:53:53.795473 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/809aba51-b409-4d27-a41d-b3bb113ffafd-utilities\") pod \"certified-operators-htp5h\" (UID: \"809aba51-b409-4d27-a41d-b3bb113ffafd\") " pod="openshift-marketplace/certified-operators-htp5h" Jan 26 11:53:53 crc kubenswrapper[4867]: I0126 11:53:53.795518 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/809aba51-b409-4d27-a41d-b3bb113ffafd-catalog-content\") pod \"certified-operators-htp5h\" (UID: \"809aba51-b409-4d27-a41d-b3bb113ffafd\") " pod="openshift-marketplace/certified-operators-htp5h" Jan 26 11:53:53 crc kubenswrapper[4867]: I0126 11:53:53.833761 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-x6wlc" Jan 26 11:53:53 crc kubenswrapper[4867]: I0126 11:53:53.904300 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/809aba51-b409-4d27-a41d-b3bb113ffafd-utilities\") pod \"certified-operators-htp5h\" (UID: \"809aba51-b409-4d27-a41d-b3bb113ffafd\") " pod="openshift-marketplace/certified-operators-htp5h" Jan 26 11:53:53 crc kubenswrapper[4867]: I0126 11:53:53.904400 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/809aba51-b409-4d27-a41d-b3bb113ffafd-catalog-content\") pod \"certified-operators-htp5h\" (UID: \"809aba51-b409-4d27-a41d-b3bb113ffafd\") " pod="openshift-marketplace/certified-operators-htp5h" Jan 26 11:53:53 crc kubenswrapper[4867]: I0126 11:53:53.904461 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pdx8d\" (UniqueName: \"kubernetes.io/projected/809aba51-b409-4d27-a41d-b3bb113ffafd-kube-api-access-pdx8d\") pod \"certified-operators-htp5h\" (UID: \"809aba51-b409-4d27-a41d-b3bb113ffafd\") " pod="openshift-marketplace/certified-operators-htp5h" Jan 26 11:53:53 crc kubenswrapper[4867]: I0126 11:53:53.905553 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/809aba51-b409-4d27-a41d-b3bb113ffafd-utilities\") pod \"certified-operators-htp5h\" (UID: \"809aba51-b409-4d27-a41d-b3bb113ffafd\") " 
pod="openshift-marketplace/certified-operators-htp5h" Jan 26 11:53:53 crc kubenswrapper[4867]: I0126 11:53:53.905768 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/809aba51-b409-4d27-a41d-b3bb113ffafd-catalog-content\") pod \"certified-operators-htp5h\" (UID: \"809aba51-b409-4d27-a41d-b3bb113ffafd\") " pod="openshift-marketplace/certified-operators-htp5h" Jan 26 11:53:53 crc kubenswrapper[4867]: I0126 11:53:53.934947 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pdx8d\" (UniqueName: \"kubernetes.io/projected/809aba51-b409-4d27-a41d-b3bb113ffafd-kube-api-access-pdx8d\") pod \"certified-operators-htp5h\" (UID: \"809aba51-b409-4d27-a41d-b3bb113ffafd\") " pod="openshift-marketplace/certified-operators-htp5h" Jan 26 11:53:54 crc kubenswrapper[4867]: I0126 11:53:54.045814 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-htp5h" Jan 26 11:53:54 crc kubenswrapper[4867]: I0126 11:53:54.453948 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-x6wlc"] Jan 26 11:53:54 crc kubenswrapper[4867]: I0126 11:53:54.675511 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-htp5h"] Jan 26 11:53:54 crc kubenswrapper[4867]: W0126 11:53:54.678503 4867 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod809aba51_b409_4d27_a41d_b3bb113ffafd.slice/crio-05cc4c4dbf3d7bcca46a94316cd01d1163cfcb5a3e5c4fd10e9168e9f531ed9b WatchSource:0}: Error finding container 05cc4c4dbf3d7bcca46a94316cd01d1163cfcb5a3e5c4fd10e9168e9f531ed9b: Status 404 returned error can't find the container with id 05cc4c4dbf3d7bcca46a94316cd01d1163cfcb5a3e5c4fd10e9168e9f531ed9b Jan 26 11:53:55 crc kubenswrapper[4867]: I0126 11:53:55.295795 4867 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-x6wlc" event={"ID":"f4391e0f-22f1-48fc-b1d4-e9faf7c8a454","Type":"ContainerStarted","Data":"02f70d377a76d58b5f9e1d68ce7f2d40f7d6d3505f49a5c96485108643327145"} Jan 26 11:53:55 crc kubenswrapper[4867]: I0126 11:53:55.297047 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-htp5h" event={"ID":"809aba51-b409-4d27-a41d-b3bb113ffafd","Type":"ContainerStarted","Data":"05cc4c4dbf3d7bcca46a94316cd01d1163cfcb5a3e5c4fd10e9168e9f531ed9b"} Jan 26 11:53:55 crc kubenswrapper[4867]: I0126 11:53:55.908866 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-98rlz"] Jan 26 11:53:55 crc kubenswrapper[4867]: I0126 11:53:55.911653 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-98rlz" Jan 26 11:53:55 crc kubenswrapper[4867]: I0126 11:53:55.918954 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-98rlz"] Jan 26 11:53:56 crc kubenswrapper[4867]: I0126 11:53:56.048328 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3c99a879-00d9-42bd-8028-b04fef650b1a-catalog-content\") pod \"redhat-marketplace-98rlz\" (UID: \"3c99a879-00d9-42bd-8028-b04fef650b1a\") " pod="openshift-marketplace/redhat-marketplace-98rlz" Jan 26 11:53:56 crc kubenswrapper[4867]: I0126 11:53:56.048395 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bmvf6\" (UniqueName: \"kubernetes.io/projected/3c99a879-00d9-42bd-8028-b04fef650b1a-kube-api-access-bmvf6\") pod \"redhat-marketplace-98rlz\" (UID: \"3c99a879-00d9-42bd-8028-b04fef650b1a\") " pod="openshift-marketplace/redhat-marketplace-98rlz" Jan 26 11:53:56 crc kubenswrapper[4867]: I0126 
11:53:56.048423 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3c99a879-00d9-42bd-8028-b04fef650b1a-utilities\") pod \"redhat-marketplace-98rlz\" (UID: \"3c99a879-00d9-42bd-8028-b04fef650b1a\") " pod="openshift-marketplace/redhat-marketplace-98rlz" Jan 26 11:53:56 crc kubenswrapper[4867]: I0126 11:53:56.150443 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3c99a879-00d9-42bd-8028-b04fef650b1a-catalog-content\") pod \"redhat-marketplace-98rlz\" (UID: \"3c99a879-00d9-42bd-8028-b04fef650b1a\") " pod="openshift-marketplace/redhat-marketplace-98rlz" Jan 26 11:53:56 crc kubenswrapper[4867]: I0126 11:53:56.150538 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bmvf6\" (UniqueName: \"kubernetes.io/projected/3c99a879-00d9-42bd-8028-b04fef650b1a-kube-api-access-bmvf6\") pod \"redhat-marketplace-98rlz\" (UID: \"3c99a879-00d9-42bd-8028-b04fef650b1a\") " pod="openshift-marketplace/redhat-marketplace-98rlz" Jan 26 11:53:56 crc kubenswrapper[4867]: I0126 11:53:56.150573 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3c99a879-00d9-42bd-8028-b04fef650b1a-utilities\") pod \"redhat-marketplace-98rlz\" (UID: \"3c99a879-00d9-42bd-8028-b04fef650b1a\") " pod="openshift-marketplace/redhat-marketplace-98rlz" Jan 26 11:53:56 crc kubenswrapper[4867]: I0126 11:53:56.151027 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3c99a879-00d9-42bd-8028-b04fef650b1a-catalog-content\") pod \"redhat-marketplace-98rlz\" (UID: \"3c99a879-00d9-42bd-8028-b04fef650b1a\") " pod="openshift-marketplace/redhat-marketplace-98rlz" Jan 26 11:53:56 crc kubenswrapper[4867]: I0126 11:53:56.151074 4867 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3c99a879-00d9-42bd-8028-b04fef650b1a-utilities\") pod \"redhat-marketplace-98rlz\" (UID: \"3c99a879-00d9-42bd-8028-b04fef650b1a\") " pod="openshift-marketplace/redhat-marketplace-98rlz" Jan 26 11:53:56 crc kubenswrapper[4867]: I0126 11:53:56.168788 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bmvf6\" (UniqueName: \"kubernetes.io/projected/3c99a879-00d9-42bd-8028-b04fef650b1a-kube-api-access-bmvf6\") pod \"redhat-marketplace-98rlz\" (UID: \"3c99a879-00d9-42bd-8028-b04fef650b1a\") " pod="openshift-marketplace/redhat-marketplace-98rlz" Jan 26 11:53:56 crc kubenswrapper[4867]: I0126 11:53:56.230667 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-98rlz" Jan 26 11:53:56 crc kubenswrapper[4867]: I0126 11:53:56.325009 4867 generic.go:334] "Generic (PLEG): container finished" podID="f4391e0f-22f1-48fc-b1d4-e9faf7c8a454" containerID="e80e84596016b7ed3641c777615aa4ca3e2fb239956f9b911b04da74c47bf3cf" exitCode=0 Jan 26 11:53:56 crc kubenswrapper[4867]: I0126 11:53:56.325295 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-x6wlc" event={"ID":"f4391e0f-22f1-48fc-b1d4-e9faf7c8a454","Type":"ContainerDied","Data":"e80e84596016b7ed3641c777615aa4ca3e2fb239956f9b911b04da74c47bf3cf"} Jan 26 11:53:56 crc kubenswrapper[4867]: I0126 11:53:56.334782 4867 generic.go:334] "Generic (PLEG): container finished" podID="809aba51-b409-4d27-a41d-b3bb113ffafd" containerID="43058dbf4a586b0f54c6e4dd859a04ddb80a0934d38151247178f7fd9128d30f" exitCode=0 Jan 26 11:53:56 crc kubenswrapper[4867]: I0126 11:53:56.334825 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-htp5h" 
event={"ID":"809aba51-b409-4d27-a41d-b3bb113ffafd","Type":"ContainerDied","Data":"43058dbf4a586b0f54c6e4dd859a04ddb80a0934d38151247178f7fd9128d30f"} Jan 26 11:53:56 crc kubenswrapper[4867]: I0126 11:53:56.766184 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-98rlz"] Jan 26 11:53:57 crc kubenswrapper[4867]: I0126 11:53:57.345653 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-98rlz" event={"ID":"3c99a879-00d9-42bd-8028-b04fef650b1a","Type":"ContainerStarted","Data":"669e6c070a4629f5e5000f06de3676c52a4a3b2e9f46fd92ec3ae919bd6a612d"} Jan 26 11:53:58 crc kubenswrapper[4867]: I0126 11:53:58.360773 4867 generic.go:334] "Generic (PLEG): container finished" podID="3c99a879-00d9-42bd-8028-b04fef650b1a" containerID="22b160d88a99e352d212fc8e703edbbe11733b37b02397d8dea9b01837fa67a2" exitCode=0 Jan 26 11:53:58 crc kubenswrapper[4867]: I0126 11:53:58.360847 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-98rlz" event={"ID":"3c99a879-00d9-42bd-8028-b04fef650b1a","Type":"ContainerDied","Data":"22b160d88a99e352d212fc8e703edbbe11733b37b02397d8dea9b01837fa67a2"} Jan 26 11:53:58 crc kubenswrapper[4867]: I0126 11:53:58.364040 4867 generic.go:334] "Generic (PLEG): container finished" podID="f4391e0f-22f1-48fc-b1d4-e9faf7c8a454" containerID="6df40692e53c907eb4a5d2179f7b5432689c5710e4d2ed3318f4600f6482ad14" exitCode=0 Jan 26 11:53:58 crc kubenswrapper[4867]: I0126 11:53:58.364110 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-x6wlc" event={"ID":"f4391e0f-22f1-48fc-b1d4-e9faf7c8a454","Type":"ContainerDied","Data":"6df40692e53c907eb4a5d2179f7b5432689c5710e4d2ed3318f4600f6482ad14"} Jan 26 11:53:58 crc kubenswrapper[4867]: I0126 11:53:58.368119 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-htp5h" 
event={"ID":"809aba51-b409-4d27-a41d-b3bb113ffafd","Type":"ContainerStarted","Data":"378b1f0e25c790d9281c0291a5aa682a663a612b9d9753efd37c772cab68fa4d"} Jan 26 11:53:59 crc kubenswrapper[4867]: I0126 11:53:59.378799 4867 generic.go:334] "Generic (PLEG): container finished" podID="809aba51-b409-4d27-a41d-b3bb113ffafd" containerID="378b1f0e25c790d9281c0291a5aa682a663a612b9d9753efd37c772cab68fa4d" exitCode=0 Jan 26 11:53:59 crc kubenswrapper[4867]: I0126 11:53:59.378837 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-htp5h" event={"ID":"809aba51-b409-4d27-a41d-b3bb113ffafd","Type":"ContainerDied","Data":"378b1f0e25c790d9281c0291a5aa682a663a612b9d9753efd37c772cab68fa4d"} Jan 26 11:54:00 crc kubenswrapper[4867]: I0126 11:54:00.392606 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-x6wlc" event={"ID":"f4391e0f-22f1-48fc-b1d4-e9faf7c8a454","Type":"ContainerStarted","Data":"dcd71e52f1641247513333b35926404a29063c0d636fb77d8795cd29bf617e45"} Jan 26 11:54:02 crc kubenswrapper[4867]: I0126 11:54:02.407406 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-htp5h" event={"ID":"809aba51-b409-4d27-a41d-b3bb113ffafd","Type":"ContainerStarted","Data":"36105ad12e28ae0ce1b7660d194c49445a80e89358458b17c9216fa1711ad310"} Jan 26 11:54:02 crc kubenswrapper[4867]: I0126 11:54:02.409176 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-98rlz" event={"ID":"3c99a879-00d9-42bd-8028-b04fef650b1a","Type":"ContainerStarted","Data":"715494badab34053262d025f8ddcb296018231970b460d98f231f18112ecc2f9"} Jan 26 11:54:02 crc kubenswrapper[4867]: I0126 11:54:02.426544 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-htp5h" podStartSLOduration=4.045852975 podStartE2EDuration="9.426521474s" podCreationTimestamp="2026-01-26 11:53:53 
+0000 UTC" firstStartedPulling="2026-01-26 11:53:56.33640405 +0000 UTC m=+2186.034978960" lastFinishedPulling="2026-01-26 11:54:01.717072549 +0000 UTC m=+2191.415647459" observedRunningTime="2026-01-26 11:54:02.424542529 +0000 UTC m=+2192.123117449" watchObservedRunningTime="2026-01-26 11:54:02.426521474 +0000 UTC m=+2192.125096384" Jan 26 11:54:02 crc kubenswrapper[4867]: I0126 11:54:02.427539 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-x6wlc" podStartSLOduration=6.93475299 podStartE2EDuration="9.427534121s" podCreationTimestamp="2026-01-26 11:53:53 +0000 UTC" firstStartedPulling="2026-01-26 11:53:56.329472401 +0000 UTC m=+2186.028047311" lastFinishedPulling="2026-01-26 11:53:58.822253532 +0000 UTC m=+2188.520828442" observedRunningTime="2026-01-26 11:54:00.412703773 +0000 UTC m=+2190.111278703" watchObservedRunningTime="2026-01-26 11:54:02.427534121 +0000 UTC m=+2192.126109031" Jan 26 11:54:02 crc kubenswrapper[4867]: I0126 11:54:02.618183 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcccbgd_5037ef99-c48d-4c78-a3bd-d767d51ab43f/util/0.log" Jan 26 11:54:02 crc kubenswrapper[4867]: I0126 11:54:02.899399 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcccbgd_5037ef99-c48d-4c78-a3bd-d767d51ab43f/util/0.log" Jan 26 11:54:03 crc kubenswrapper[4867]: I0126 11:54:03.050123 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcccbgd_5037ef99-c48d-4c78-a3bd-d767d51ab43f/pull/0.log" Jan 26 11:54:03 crc kubenswrapper[4867]: I0126 11:54:03.050176 4867 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcccbgd_5037ef99-c48d-4c78-a3bd-d767d51ab43f/pull/0.log" Jan 26 11:54:03 crc kubenswrapper[4867]: I0126 11:54:03.162617 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcccbgd_5037ef99-c48d-4c78-a3bd-d767d51ab43f/pull/0.log" Jan 26 11:54:03 crc kubenswrapper[4867]: I0126 11:54:03.268494 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcccbgd_5037ef99-c48d-4c78-a3bd-d767d51ab43f/extract/0.log" Jan 26 11:54:03 crc kubenswrapper[4867]: I0126 11:54:03.317320 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcccbgd_5037ef99-c48d-4c78-a3bd-d767d51ab43f/util/0.log" Jan 26 11:54:03 crc kubenswrapper[4867]: I0126 11:54:03.419594 4867 generic.go:334] "Generic (PLEG): container finished" podID="3c99a879-00d9-42bd-8028-b04fef650b1a" containerID="715494badab34053262d025f8ddcb296018231970b460d98f231f18112ecc2f9" exitCode=0 Jan 26 11:54:03 crc kubenswrapper[4867]: I0126 11:54:03.419716 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-98rlz" event={"ID":"3c99a879-00d9-42bd-8028-b04fef650b1a","Type":"ContainerDied","Data":"715494badab34053262d025f8ddcb296018231970b460d98f231f18112ecc2f9"} Jan 26 11:54:03 crc kubenswrapper[4867]: I0126 11:54:03.439958 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713t8lrz_4d062b30-7ca4-4191-89d4-21c153fbf3dc/util/0.log" Jan 26 11:54:03 crc kubenswrapper[4867]: I0126 11:54:03.643443 4867 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713t8lrz_4d062b30-7ca4-4191-89d4-21c153fbf3dc/util/0.log" Jan 26 11:54:03 crc kubenswrapper[4867]: I0126 11:54:03.651184 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713t8lrz_4d062b30-7ca4-4191-89d4-21c153fbf3dc/pull/0.log" Jan 26 11:54:03 crc kubenswrapper[4867]: I0126 11:54:03.654971 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713t8lrz_4d062b30-7ca4-4191-89d4-21c153fbf3dc/pull/0.log" Jan 26 11:54:03 crc kubenswrapper[4867]: I0126 11:54:03.835454 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-x6wlc" Jan 26 11:54:03 crc kubenswrapper[4867]: I0126 11:54:03.835886 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-x6wlc" Jan 26 11:54:03 crc kubenswrapper[4867]: I0126 11:54:03.878390 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713t8lrz_4d062b30-7ca4-4191-89d4-21c153fbf3dc/extract/0.log" Jan 26 11:54:03 crc kubenswrapper[4867]: I0126 11:54:03.889977 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-x6wlc" Jan 26 11:54:03 crc kubenswrapper[4867]: I0126 11:54:03.922274 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713t8lrz_4d062b30-7ca4-4191-89d4-21c153fbf3dc/util/0.log" Jan 26 11:54:03 crc kubenswrapper[4867]: I0126 11:54:03.971981 4867 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713t8lrz_4d062b30-7ca4-4191-89d4-21c153fbf3dc/pull/0.log" Jan 26 11:54:04 crc kubenswrapper[4867]: I0126 11:54:04.047836 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-htp5h" Jan 26 11:54:04 crc kubenswrapper[4867]: I0126 11:54:04.047917 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-htp5h" Jan 26 11:54:04 crc kubenswrapper[4867]: I0126 11:54:04.095577 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-htp5h_809aba51-b409-4d27-a41d-b3bb113ffafd/extract-utilities/0.log" Jan 26 11:54:04 crc kubenswrapper[4867]: I0126 11:54:04.113300 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-htp5h" Jan 26 11:54:04 crc kubenswrapper[4867]: I0126 11:54:04.324754 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-htp5h_809aba51-b409-4d27-a41d-b3bb113ffafd/extract-utilities/0.log" Jan 26 11:54:04 crc kubenswrapper[4867]: I0126 11:54:04.367473 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-htp5h_809aba51-b409-4d27-a41d-b3bb113ffafd/extract-content/0.log" Jan 26 11:54:04 crc kubenswrapper[4867]: I0126 11:54:04.370309 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-htp5h_809aba51-b409-4d27-a41d-b3bb113ffafd/extract-content/0.log" Jan 26 11:54:04 crc kubenswrapper[4867]: I0126 11:54:04.437491 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-98rlz" event={"ID":"3c99a879-00d9-42bd-8028-b04fef650b1a","Type":"ContainerStarted","Data":"3f2639a316912c03c6378ddeb3318b36e6e760d12b0ae3d59124a00f7377b7f5"} Jan 26 
11:54:04 crc kubenswrapper[4867]: I0126 11:54:04.458610 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-98rlz" podStartSLOduration=3.876783993 podStartE2EDuration="9.458583164s" podCreationTimestamp="2026-01-26 11:53:55 +0000 UTC" firstStartedPulling="2026-01-26 11:53:58.362506236 +0000 UTC m=+2188.061081146" lastFinishedPulling="2026-01-26 11:54:03.944305407 +0000 UTC m=+2193.642880317" observedRunningTime="2026-01-26 11:54:04.456354683 +0000 UTC m=+2194.154929593" watchObservedRunningTime="2026-01-26 11:54:04.458583164 +0000 UTC m=+2194.157158074" Jan 26 11:54:04 crc kubenswrapper[4867]: I0126 11:54:04.515384 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-x6wlc" Jan 26 11:54:04 crc kubenswrapper[4867]: I0126 11:54:04.596766 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-htp5h_809aba51-b409-4d27-a41d-b3bb113ffafd/extract-utilities/0.log" Jan 26 11:54:04 crc kubenswrapper[4867]: I0126 11:54:04.667014 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-htp5h_809aba51-b409-4d27-a41d-b3bb113ffafd/registry-server/0.log" Jan 26 11:54:04 crc kubenswrapper[4867]: I0126 11:54:04.720695 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-htp5h_809aba51-b409-4d27-a41d-b3bb113ffafd/extract-content/0.log" Jan 26 11:54:04 crc kubenswrapper[4867]: I0126 11:54:04.835530 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-jqxkw_f0d09d9b-e570-45a9-9511-c95b88f2ffd7/extract-utilities/0.log" Jan 26 11:54:05 crc kubenswrapper[4867]: I0126 11:54:05.067592 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-jqxkw_f0d09d9b-e570-45a9-9511-c95b88f2ffd7/extract-utilities/0.log" 
Jan 26 11:54:05 crc kubenswrapper[4867]: I0126 11:54:05.105823 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-jqxkw_f0d09d9b-e570-45a9-9511-c95b88f2ffd7/extract-content/0.log" Jan 26 11:54:05 crc kubenswrapper[4867]: I0126 11:54:05.120415 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-jqxkw_f0d09d9b-e570-45a9-9511-c95b88f2ffd7/extract-content/0.log" Jan 26 11:54:05 crc kubenswrapper[4867]: I0126 11:54:05.282515 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-jqxkw_f0d09d9b-e570-45a9-9511-c95b88f2ffd7/extract-utilities/0.log" Jan 26 11:54:05 crc kubenswrapper[4867]: I0126 11:54:05.315423 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-jqxkw_f0d09d9b-e570-45a9-9511-c95b88f2ffd7/extract-content/0.log" Jan 26 11:54:05 crc kubenswrapper[4867]: I0126 11:54:05.450268 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-2hms5_547f161f-485f-4a09-909f-df4f3990046f/extract-utilities/0.log" Jan 26 11:54:05 crc kubenswrapper[4867]: I0126 11:54:05.661860 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-2hms5_547f161f-485f-4a09-909f-df4f3990046f/extract-content/0.log" Jan 26 11:54:05 crc kubenswrapper[4867]: I0126 11:54:05.690278 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-2hms5_547f161f-485f-4a09-909f-df4f3990046f/extract-utilities/0.log" Jan 26 11:54:05 crc kubenswrapper[4867]: I0126 11:54:05.719905 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-2hms5_547f161f-485f-4a09-909f-df4f3990046f/extract-content/0.log" Jan 26 11:54:05 crc kubenswrapper[4867]: I0126 11:54:05.859444 4867 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_community-operators-2hms5_547f161f-485f-4a09-909f-df4f3990046f/extract-utilities/0.log" Jan 26 11:54:05 crc kubenswrapper[4867]: I0126 11:54:05.909998 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-2hms5_547f161f-485f-4a09-909f-df4f3990046f/extract-content/0.log" Jan 26 11:54:06 crc kubenswrapper[4867]: I0126 11:54:06.090628 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-x6wlc_f4391e0f-22f1-48fc-b1d4-e9faf7c8a454/extract-utilities/0.log" Jan 26 11:54:06 crc kubenswrapper[4867]: I0126 11:54:06.230825 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-98rlz" Jan 26 11:54:06 crc kubenswrapper[4867]: I0126 11:54:06.230874 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-98rlz" Jan 26 11:54:06 crc kubenswrapper[4867]: I0126 11:54:06.294041 4867 patch_prober.go:28] interesting pod/machine-config-daemon-g6cth container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 11:54:06 crc kubenswrapper[4867]: I0126 11:54:06.294104 4867 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-g6cth" podUID="115cad9f-057f-4e63-b408-8fa7a358a191" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 11:54:06 crc kubenswrapper[4867]: I0126 11:54:06.375657 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-x6wlc_f4391e0f-22f1-48fc-b1d4-e9faf7c8a454/extract-utilities/0.log" Jan 26 11:54:06 crc 
kubenswrapper[4867]: I0126 11:54:06.420508 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-x6wlc_f4391e0f-22f1-48fc-b1d4-e9faf7c8a454/extract-content/0.log" Jan 26 11:54:06 crc kubenswrapper[4867]: I0126 11:54:06.600662 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-x6wlc_f4391e0f-22f1-48fc-b1d4-e9faf7c8a454/extract-content/0.log" Jan 26 11:54:06 crc kubenswrapper[4867]: I0126 11:54:06.811488 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-2hms5_547f161f-485f-4a09-909f-df4f3990046f/registry-server/0.log" Jan 26 11:54:06 crc kubenswrapper[4867]: I0126 11:54:06.877813 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-x6wlc_f4391e0f-22f1-48fc-b1d4-e9faf7c8a454/extract-utilities/0.log" Jan 26 11:54:06 crc kubenswrapper[4867]: I0126 11:54:06.899122 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-x6wlc"] Jan 26 11:54:06 crc kubenswrapper[4867]: I0126 11:54:06.904315 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-jqxkw_f0d09d9b-e570-45a9-9511-c95b88f2ffd7/registry-server/0.log" Jan 26 11:54:06 crc kubenswrapper[4867]: I0126 11:54:06.911540 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-x6wlc_f4391e0f-22f1-48fc-b1d4-e9faf7c8a454/registry-server/0.log" Jan 26 11:54:06 crc kubenswrapper[4867]: I0126 11:54:06.934788 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-x6wlc_f4391e0f-22f1-48fc-b1d4-e9faf7c8a454/extract-content/0.log" Jan 26 11:54:07 crc kubenswrapper[4867]: I0126 11:54:07.087156 4867 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-f7s2h_d30c958f-102e-4d3f-a3e1-853ad02e7bfe/marketplace-operator/0.log" Jan 26 11:54:07 crc kubenswrapper[4867]: I0126 11:54:07.099856 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-98rlz_3c99a879-00d9-42bd-8028-b04fef650b1a/extract-utilities/0.log" Jan 26 11:54:07 crc kubenswrapper[4867]: I0126 11:54:07.282423 4867 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-marketplace-98rlz" podUID="3c99a879-00d9-42bd-8028-b04fef650b1a" containerName="registry-server" probeResult="failure" output=< Jan 26 11:54:07 crc kubenswrapper[4867]: timeout: failed to connect service ":50051" within 1s Jan 26 11:54:07 crc kubenswrapper[4867]: > Jan 26 11:54:07 crc kubenswrapper[4867]: I0126 11:54:07.325556 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-98rlz_3c99a879-00d9-42bd-8028-b04fef650b1a/extract-utilities/0.log" Jan 26 11:54:07 crc kubenswrapper[4867]: I0126 11:54:07.344293 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-98rlz_3c99a879-00d9-42bd-8028-b04fef650b1a/extract-content/0.log" Jan 26 11:54:07 crc kubenswrapper[4867]: I0126 11:54:07.356395 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-98rlz_3c99a879-00d9-42bd-8028-b04fef650b1a/extract-content/0.log" Jan 26 11:54:07 crc kubenswrapper[4867]: I0126 11:54:07.461166 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-x6wlc" podUID="f4391e0f-22f1-48fc-b1d4-e9faf7c8a454" containerName="registry-server" containerID="cri-o://dcd71e52f1641247513333b35926404a29063c0d636fb77d8795cd29bf617e45" gracePeriod=2 Jan 26 11:54:07 crc kubenswrapper[4867]: I0126 11:54:07.550794 4867 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_redhat-marketplace-98rlz_3c99a879-00d9-42bd-8028-b04fef650b1a/extract-content/0.log" Jan 26 11:54:07 crc kubenswrapper[4867]: I0126 11:54:07.584111 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-m24tl_f59c5f80-bfa1-445a-a552-ef0908b15efd/extract-utilities/0.log" Jan 26 11:54:07 crc kubenswrapper[4867]: I0126 11:54:07.623357 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-98rlz_3c99a879-00d9-42bd-8028-b04fef650b1a/registry-server/0.log" Jan 26 11:54:07 crc kubenswrapper[4867]: I0126 11:54:07.623889 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-98rlz_3c99a879-00d9-42bd-8028-b04fef650b1a/extract-utilities/0.log" Jan 26 11:54:07 crc kubenswrapper[4867]: I0126 11:54:07.942628 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-m24tl_f59c5f80-bfa1-445a-a552-ef0908b15efd/extract-content/0.log" Jan 26 11:54:07 crc kubenswrapper[4867]: I0126 11:54:07.966941 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-m24tl_f59c5f80-bfa1-445a-a552-ef0908b15efd/extract-content/0.log" Jan 26 11:54:07 crc kubenswrapper[4867]: I0126 11:54:07.983455 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-m24tl_f59c5f80-bfa1-445a-a552-ef0908b15efd/extract-utilities/0.log" Jan 26 11:54:08 crc kubenswrapper[4867]: I0126 11:54:08.005791 4867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-x6wlc" Jan 26 11:54:08 crc kubenswrapper[4867]: I0126 11:54:08.099311 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f4391e0f-22f1-48fc-b1d4-e9faf7c8a454-utilities\") pod \"f4391e0f-22f1-48fc-b1d4-e9faf7c8a454\" (UID: \"f4391e0f-22f1-48fc-b1d4-e9faf7c8a454\") " Jan 26 11:54:08 crc kubenswrapper[4867]: I0126 11:54:08.099366 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f4391e0f-22f1-48fc-b1d4-e9faf7c8a454-catalog-content\") pod \"f4391e0f-22f1-48fc-b1d4-e9faf7c8a454\" (UID: \"f4391e0f-22f1-48fc-b1d4-e9faf7c8a454\") " Jan 26 11:54:08 crc kubenswrapper[4867]: I0126 11:54:08.099387 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nx2cd\" (UniqueName: \"kubernetes.io/projected/f4391e0f-22f1-48fc-b1d4-e9faf7c8a454-kube-api-access-nx2cd\") pod \"f4391e0f-22f1-48fc-b1d4-e9faf7c8a454\" (UID: \"f4391e0f-22f1-48fc-b1d4-e9faf7c8a454\") " Jan 26 11:54:08 crc kubenswrapper[4867]: I0126 11:54:08.100236 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f4391e0f-22f1-48fc-b1d4-e9faf7c8a454-utilities" (OuterVolumeSpecName: "utilities") pod "f4391e0f-22f1-48fc-b1d4-e9faf7c8a454" (UID: "f4391e0f-22f1-48fc-b1d4-e9faf7c8a454"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 11:54:08 crc kubenswrapper[4867]: I0126 11:54:08.100960 4867 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f4391e0f-22f1-48fc-b1d4-e9faf7c8a454-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 11:54:08 crc kubenswrapper[4867]: I0126 11:54:08.121464 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f4391e0f-22f1-48fc-b1d4-e9faf7c8a454-kube-api-access-nx2cd" (OuterVolumeSpecName: "kube-api-access-nx2cd") pod "f4391e0f-22f1-48fc-b1d4-e9faf7c8a454" (UID: "f4391e0f-22f1-48fc-b1d4-e9faf7c8a454"). InnerVolumeSpecName "kube-api-access-nx2cd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 11:54:08 crc kubenswrapper[4867]: I0126 11:54:08.173133 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f4391e0f-22f1-48fc-b1d4-e9faf7c8a454-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f4391e0f-22f1-48fc-b1d4-e9faf7c8a454" (UID: "f4391e0f-22f1-48fc-b1d4-e9faf7c8a454"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 11:54:08 crc kubenswrapper[4867]: I0126 11:54:08.203305 4867 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f4391e0f-22f1-48fc-b1d4-e9faf7c8a454-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 11:54:08 crc kubenswrapper[4867]: I0126 11:54:08.203350 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nx2cd\" (UniqueName: \"kubernetes.io/projected/f4391e0f-22f1-48fc-b1d4-e9faf7c8a454-kube-api-access-nx2cd\") on node \"crc\" DevicePath \"\"" Jan 26 11:54:08 crc kubenswrapper[4867]: I0126 11:54:08.260517 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-p6pt2_15505f79-e3ef-4aa2-8f0d-6d6c4b097fc7/extract-utilities/0.log" Jan 26 11:54:08 crc kubenswrapper[4867]: I0126 11:54:08.273051 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-m24tl_f59c5f80-bfa1-445a-a552-ef0908b15efd/extract-utilities/0.log" Jan 26 11:54:08 crc kubenswrapper[4867]: I0126 11:54:08.312613 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-m24tl_f59c5f80-bfa1-445a-a552-ef0908b15efd/extract-content/0.log" Jan 26 11:54:08 crc kubenswrapper[4867]: I0126 11:54:08.387171 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-m24tl_f59c5f80-bfa1-445a-a552-ef0908b15efd/registry-server/0.log" Jan 26 11:54:08 crc kubenswrapper[4867]: I0126 11:54:08.472526 4867 generic.go:334] "Generic (PLEG): container finished" podID="f4391e0f-22f1-48fc-b1d4-e9faf7c8a454" containerID="dcd71e52f1641247513333b35926404a29063c0d636fb77d8795cd29bf617e45" exitCode=0 Jan 26 11:54:08 crc kubenswrapper[4867]: I0126 11:54:08.472570 4867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-x6wlc" Jan 26 11:54:08 crc kubenswrapper[4867]: I0126 11:54:08.472581 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-x6wlc" event={"ID":"f4391e0f-22f1-48fc-b1d4-e9faf7c8a454","Type":"ContainerDied","Data":"dcd71e52f1641247513333b35926404a29063c0d636fb77d8795cd29bf617e45"} Jan 26 11:54:08 crc kubenswrapper[4867]: I0126 11:54:08.472616 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-x6wlc" event={"ID":"f4391e0f-22f1-48fc-b1d4-e9faf7c8a454","Type":"ContainerDied","Data":"02f70d377a76d58b5f9e1d68ce7f2d40f7d6d3505f49a5c96485108643327145"} Jan 26 11:54:08 crc kubenswrapper[4867]: I0126 11:54:08.472658 4867 scope.go:117] "RemoveContainer" containerID="dcd71e52f1641247513333b35926404a29063c0d636fb77d8795cd29bf617e45" Jan 26 11:54:08 crc kubenswrapper[4867]: I0126 11:54:08.500066 4867 scope.go:117] "RemoveContainer" containerID="6df40692e53c907eb4a5d2179f7b5432689c5710e4d2ed3318f4600f6482ad14" Jan 26 11:54:08 crc kubenswrapper[4867]: I0126 11:54:08.506920 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-x6wlc"] Jan 26 11:54:08 crc kubenswrapper[4867]: I0126 11:54:08.516034 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-x6wlc"] Jan 26 11:54:08 crc kubenswrapper[4867]: I0126 11:54:08.526414 4867 scope.go:117] "RemoveContainer" containerID="e80e84596016b7ed3641c777615aa4ca3e2fb239956f9b911b04da74c47bf3cf" Jan 26 11:54:08 crc kubenswrapper[4867]: I0126 11:54:08.552988 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-p6pt2_15505f79-e3ef-4aa2-8f0d-6d6c4b097fc7/extract-content/0.log" Jan 26 11:54:08 crc kubenswrapper[4867]: I0126 11:54:08.591373 4867 scope.go:117] "RemoveContainer" 
containerID="dcd71e52f1641247513333b35926404a29063c0d636fb77d8795cd29bf617e45" Jan 26 11:54:08 crc kubenswrapper[4867]: E0126 11:54:08.592550 4867 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dcd71e52f1641247513333b35926404a29063c0d636fb77d8795cd29bf617e45\": container with ID starting with dcd71e52f1641247513333b35926404a29063c0d636fb77d8795cd29bf617e45 not found: ID does not exist" containerID="dcd71e52f1641247513333b35926404a29063c0d636fb77d8795cd29bf617e45" Jan 26 11:54:08 crc kubenswrapper[4867]: I0126 11:54:08.592605 4867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dcd71e52f1641247513333b35926404a29063c0d636fb77d8795cd29bf617e45"} err="failed to get container status \"dcd71e52f1641247513333b35926404a29063c0d636fb77d8795cd29bf617e45\": rpc error: code = NotFound desc = could not find container \"dcd71e52f1641247513333b35926404a29063c0d636fb77d8795cd29bf617e45\": container with ID starting with dcd71e52f1641247513333b35926404a29063c0d636fb77d8795cd29bf617e45 not found: ID does not exist" Jan 26 11:54:08 crc kubenswrapper[4867]: I0126 11:54:08.592632 4867 scope.go:117] "RemoveContainer" containerID="6df40692e53c907eb4a5d2179f7b5432689c5710e4d2ed3318f4600f6482ad14" Jan 26 11:54:08 crc kubenswrapper[4867]: I0126 11:54:08.596021 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f4391e0f-22f1-48fc-b1d4-e9faf7c8a454" path="/var/lib/kubelet/pods/f4391e0f-22f1-48fc-b1d4-e9faf7c8a454/volumes" Jan 26 11:54:08 crc kubenswrapper[4867]: E0126 11:54:08.596733 4867 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6df40692e53c907eb4a5d2179f7b5432689c5710e4d2ed3318f4600f6482ad14\": container with ID starting with 6df40692e53c907eb4a5d2179f7b5432689c5710e4d2ed3318f4600f6482ad14 not found: ID does not exist" 
containerID="6df40692e53c907eb4a5d2179f7b5432689c5710e4d2ed3318f4600f6482ad14" Jan 26 11:54:08 crc kubenswrapper[4867]: I0126 11:54:08.596756 4867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6df40692e53c907eb4a5d2179f7b5432689c5710e4d2ed3318f4600f6482ad14"} err="failed to get container status \"6df40692e53c907eb4a5d2179f7b5432689c5710e4d2ed3318f4600f6482ad14\": rpc error: code = NotFound desc = could not find container \"6df40692e53c907eb4a5d2179f7b5432689c5710e4d2ed3318f4600f6482ad14\": container with ID starting with 6df40692e53c907eb4a5d2179f7b5432689c5710e4d2ed3318f4600f6482ad14 not found: ID does not exist" Jan 26 11:54:08 crc kubenswrapper[4867]: I0126 11:54:08.596772 4867 scope.go:117] "RemoveContainer" containerID="e80e84596016b7ed3641c777615aa4ca3e2fb239956f9b911b04da74c47bf3cf" Jan 26 11:54:08 crc kubenswrapper[4867]: E0126 11:54:08.597506 4867 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e80e84596016b7ed3641c777615aa4ca3e2fb239956f9b911b04da74c47bf3cf\": container with ID starting with e80e84596016b7ed3641c777615aa4ca3e2fb239956f9b911b04da74c47bf3cf not found: ID does not exist" containerID="e80e84596016b7ed3641c777615aa4ca3e2fb239956f9b911b04da74c47bf3cf" Jan 26 11:54:08 crc kubenswrapper[4867]: I0126 11:54:08.597528 4867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e80e84596016b7ed3641c777615aa4ca3e2fb239956f9b911b04da74c47bf3cf"} err="failed to get container status \"e80e84596016b7ed3641c777615aa4ca3e2fb239956f9b911b04da74c47bf3cf\": rpc error: code = NotFound desc = could not find container \"e80e84596016b7ed3641c777615aa4ca3e2fb239956f9b911b04da74c47bf3cf\": container with ID starting with e80e84596016b7ed3641c777615aa4ca3e2fb239956f9b911b04da74c47bf3cf not found: ID does not exist" Jan 26 11:54:08 crc kubenswrapper[4867]: I0126 11:54:08.614156 4867 log.go:25] "Finished 
parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-p6pt2_15505f79-e3ef-4aa2-8f0d-6d6c4b097fc7/extract-utilities/0.log" Jan 26 11:54:08 crc kubenswrapper[4867]: I0126 11:54:08.659057 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-p6pt2_15505f79-e3ef-4aa2-8f0d-6d6c4b097fc7/extract-content/0.log" Jan 26 11:54:08 crc kubenswrapper[4867]: I0126 11:54:08.852891 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-p6pt2_15505f79-e3ef-4aa2-8f0d-6d6c4b097fc7/extract-utilities/0.log" Jan 26 11:54:08 crc kubenswrapper[4867]: I0126 11:54:08.896976 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-p6pt2_15505f79-e3ef-4aa2-8f0d-6d6c4b097fc7/extract-content/0.log" Jan 26 11:54:09 crc kubenswrapper[4867]: I0126 11:54:09.161809 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-p6pt2_15505f79-e3ef-4aa2-8f0d-6d6c4b097fc7/registry-server/0.log" Jan 26 11:54:14 crc kubenswrapper[4867]: I0126 11:54:14.105170 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-htp5h" Jan 26 11:54:14 crc kubenswrapper[4867]: I0126 11:54:14.158485 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-htp5h"] Jan 26 11:54:14 crc kubenswrapper[4867]: I0126 11:54:14.519669 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-htp5h" podUID="809aba51-b409-4d27-a41d-b3bb113ffafd" containerName="registry-server" containerID="cri-o://36105ad12e28ae0ce1b7660d194c49445a80e89358458b17c9216fa1711ad310" gracePeriod=2 Jan 26 11:54:14 crc kubenswrapper[4867]: I0126 11:54:14.995569 4867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-htp5h" Jan 26 11:54:15 crc kubenswrapper[4867]: I0126 11:54:15.038150 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/809aba51-b409-4d27-a41d-b3bb113ffafd-catalog-content\") pod \"809aba51-b409-4d27-a41d-b3bb113ffafd\" (UID: \"809aba51-b409-4d27-a41d-b3bb113ffafd\") " Jan 26 11:54:15 crc kubenswrapper[4867]: I0126 11:54:15.038197 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/809aba51-b409-4d27-a41d-b3bb113ffafd-utilities\") pod \"809aba51-b409-4d27-a41d-b3bb113ffafd\" (UID: \"809aba51-b409-4d27-a41d-b3bb113ffafd\") " Jan 26 11:54:15 crc kubenswrapper[4867]: I0126 11:54:15.038277 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pdx8d\" (UniqueName: \"kubernetes.io/projected/809aba51-b409-4d27-a41d-b3bb113ffafd-kube-api-access-pdx8d\") pod \"809aba51-b409-4d27-a41d-b3bb113ffafd\" (UID: \"809aba51-b409-4d27-a41d-b3bb113ffafd\") " Jan 26 11:54:15 crc kubenswrapper[4867]: I0126 11:54:15.039425 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/809aba51-b409-4d27-a41d-b3bb113ffafd-utilities" (OuterVolumeSpecName: "utilities") pod "809aba51-b409-4d27-a41d-b3bb113ffafd" (UID: "809aba51-b409-4d27-a41d-b3bb113ffafd"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 11:54:15 crc kubenswrapper[4867]: I0126 11:54:15.047411 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/809aba51-b409-4d27-a41d-b3bb113ffafd-kube-api-access-pdx8d" (OuterVolumeSpecName: "kube-api-access-pdx8d") pod "809aba51-b409-4d27-a41d-b3bb113ffafd" (UID: "809aba51-b409-4d27-a41d-b3bb113ffafd"). InnerVolumeSpecName "kube-api-access-pdx8d". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 11:54:15 crc kubenswrapper[4867]: I0126 11:54:15.087417 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/809aba51-b409-4d27-a41d-b3bb113ffafd-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "809aba51-b409-4d27-a41d-b3bb113ffafd" (UID: "809aba51-b409-4d27-a41d-b3bb113ffafd"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 11:54:15 crc kubenswrapper[4867]: I0126 11:54:15.146553 4867 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/809aba51-b409-4d27-a41d-b3bb113ffafd-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 11:54:15 crc kubenswrapper[4867]: I0126 11:54:15.147325 4867 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/809aba51-b409-4d27-a41d-b3bb113ffafd-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 11:54:15 crc kubenswrapper[4867]: I0126 11:54:15.147360 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pdx8d\" (UniqueName: \"kubernetes.io/projected/809aba51-b409-4d27-a41d-b3bb113ffafd-kube-api-access-pdx8d\") on node \"crc\" DevicePath \"\"" Jan 26 11:54:15 crc kubenswrapper[4867]: I0126 11:54:15.529229 4867 generic.go:334] "Generic (PLEG): container finished" podID="809aba51-b409-4d27-a41d-b3bb113ffafd" containerID="36105ad12e28ae0ce1b7660d194c49445a80e89358458b17c9216fa1711ad310" exitCode=0 Jan 26 11:54:15 crc kubenswrapper[4867]: I0126 11:54:15.529287 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-htp5h" event={"ID":"809aba51-b409-4d27-a41d-b3bb113ffafd","Type":"ContainerDied","Data":"36105ad12e28ae0ce1b7660d194c49445a80e89358458b17c9216fa1711ad310"} Jan 26 11:54:15 crc kubenswrapper[4867]: I0126 11:54:15.529328 4867 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openshift-marketplace/certified-operators-htp5h" event={"ID":"809aba51-b409-4d27-a41d-b3bb113ffafd","Type":"ContainerDied","Data":"05cc4c4dbf3d7bcca46a94316cd01d1163cfcb5a3e5c4fd10e9168e9f531ed9b"} Jan 26 11:54:15 crc kubenswrapper[4867]: I0126 11:54:15.529346 4867 scope.go:117] "RemoveContainer" containerID="36105ad12e28ae0ce1b7660d194c49445a80e89358458b17c9216fa1711ad310" Jan 26 11:54:15 crc kubenswrapper[4867]: I0126 11:54:15.529349 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-htp5h" Jan 26 11:54:15 crc kubenswrapper[4867]: I0126 11:54:15.552938 4867 scope.go:117] "RemoveContainer" containerID="378b1f0e25c790d9281c0291a5aa682a663a612b9d9753efd37c772cab68fa4d" Jan 26 11:54:15 crc kubenswrapper[4867]: I0126 11:54:15.572850 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-htp5h"] Jan 26 11:54:15 crc kubenswrapper[4867]: I0126 11:54:15.579202 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-htp5h"] Jan 26 11:54:15 crc kubenswrapper[4867]: I0126 11:54:15.595344 4867 scope.go:117] "RemoveContainer" containerID="43058dbf4a586b0f54c6e4dd859a04ddb80a0934d38151247178f7fd9128d30f" Jan 26 11:54:15 crc kubenswrapper[4867]: I0126 11:54:15.656961 4867 scope.go:117] "RemoveContainer" containerID="36105ad12e28ae0ce1b7660d194c49445a80e89358458b17c9216fa1711ad310" Jan 26 11:54:15 crc kubenswrapper[4867]: E0126 11:54:15.658422 4867 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"36105ad12e28ae0ce1b7660d194c49445a80e89358458b17c9216fa1711ad310\": container with ID starting with 36105ad12e28ae0ce1b7660d194c49445a80e89358458b17c9216fa1711ad310 not found: ID does not exist" containerID="36105ad12e28ae0ce1b7660d194c49445a80e89358458b17c9216fa1711ad310" Jan 26 11:54:15 crc kubenswrapper[4867]: I0126 
11:54:15.658464 4867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"36105ad12e28ae0ce1b7660d194c49445a80e89358458b17c9216fa1711ad310"} err="failed to get container status \"36105ad12e28ae0ce1b7660d194c49445a80e89358458b17c9216fa1711ad310\": rpc error: code = NotFound desc = could not find container \"36105ad12e28ae0ce1b7660d194c49445a80e89358458b17c9216fa1711ad310\": container with ID starting with 36105ad12e28ae0ce1b7660d194c49445a80e89358458b17c9216fa1711ad310 not found: ID does not exist" Jan 26 11:54:15 crc kubenswrapper[4867]: I0126 11:54:15.658495 4867 scope.go:117] "RemoveContainer" containerID="378b1f0e25c790d9281c0291a5aa682a663a612b9d9753efd37c772cab68fa4d" Jan 26 11:54:15 crc kubenswrapper[4867]: E0126 11:54:15.664545 4867 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"378b1f0e25c790d9281c0291a5aa682a663a612b9d9753efd37c772cab68fa4d\": container with ID starting with 378b1f0e25c790d9281c0291a5aa682a663a612b9d9753efd37c772cab68fa4d not found: ID does not exist" containerID="378b1f0e25c790d9281c0291a5aa682a663a612b9d9753efd37c772cab68fa4d" Jan 26 11:54:15 crc kubenswrapper[4867]: I0126 11:54:15.664581 4867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"378b1f0e25c790d9281c0291a5aa682a663a612b9d9753efd37c772cab68fa4d"} err="failed to get container status \"378b1f0e25c790d9281c0291a5aa682a663a612b9d9753efd37c772cab68fa4d\": rpc error: code = NotFound desc = could not find container \"378b1f0e25c790d9281c0291a5aa682a663a612b9d9753efd37c772cab68fa4d\": container with ID starting with 378b1f0e25c790d9281c0291a5aa682a663a612b9d9753efd37c772cab68fa4d not found: ID does not exist" Jan 26 11:54:15 crc kubenswrapper[4867]: I0126 11:54:15.664600 4867 scope.go:117] "RemoveContainer" containerID="43058dbf4a586b0f54c6e4dd859a04ddb80a0934d38151247178f7fd9128d30f" Jan 26 11:54:15 crc 
kubenswrapper[4867]: E0126 11:54:15.664848 4867 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"43058dbf4a586b0f54c6e4dd859a04ddb80a0934d38151247178f7fd9128d30f\": container with ID starting with 43058dbf4a586b0f54c6e4dd859a04ddb80a0934d38151247178f7fd9128d30f not found: ID does not exist" containerID="43058dbf4a586b0f54c6e4dd859a04ddb80a0934d38151247178f7fd9128d30f" Jan 26 11:54:15 crc kubenswrapper[4867]: I0126 11:54:15.664879 4867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"43058dbf4a586b0f54c6e4dd859a04ddb80a0934d38151247178f7fd9128d30f"} err="failed to get container status \"43058dbf4a586b0f54c6e4dd859a04ddb80a0934d38151247178f7fd9128d30f\": rpc error: code = NotFound desc = could not find container \"43058dbf4a586b0f54c6e4dd859a04ddb80a0934d38151247178f7fd9128d30f\": container with ID starting with 43058dbf4a586b0f54c6e4dd859a04ddb80a0934d38151247178f7fd9128d30f not found: ID does not exist" Jan 26 11:54:16 crc kubenswrapper[4867]: I0126 11:54:16.291907 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-98rlz" Jan 26 11:54:16 crc kubenswrapper[4867]: I0126 11:54:16.345167 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-98rlz" Jan 26 11:54:16 crc kubenswrapper[4867]: I0126 11:54:16.577322 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="809aba51-b409-4d27-a41d-b3bb113ffafd" path="/var/lib/kubelet/pods/809aba51-b409-4d27-a41d-b3bb113ffafd/volumes" Jan 26 11:54:18 crc kubenswrapper[4867]: I0126 11:54:18.547798 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-98rlz"] Jan 26 11:54:18 crc kubenswrapper[4867]: I0126 11:54:18.548409 4867 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-marketplace/redhat-marketplace-98rlz" podUID="3c99a879-00d9-42bd-8028-b04fef650b1a" containerName="registry-server" containerID="cri-o://3f2639a316912c03c6378ddeb3318b36e6e760d12b0ae3d59124a00f7377b7f5" gracePeriod=2 Jan 26 11:54:20 crc kubenswrapper[4867]: I0126 11:54:20.577955 4867 generic.go:334] "Generic (PLEG): container finished" podID="3c99a879-00d9-42bd-8028-b04fef650b1a" containerID="3f2639a316912c03c6378ddeb3318b36e6e760d12b0ae3d59124a00f7377b7f5" exitCode=0 Jan 26 11:54:20 crc kubenswrapper[4867]: I0126 11:54:20.578084 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-98rlz" event={"ID":"3c99a879-00d9-42bd-8028-b04fef650b1a","Type":"ContainerDied","Data":"3f2639a316912c03c6378ddeb3318b36e6e760d12b0ae3d59124a00f7377b7f5"} Jan 26 11:54:20 crc kubenswrapper[4867]: I0126 11:54:20.971784 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-98rlz" Jan 26 11:54:21 crc kubenswrapper[4867]: I0126 11:54:21.097864 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bmvf6\" (UniqueName: \"kubernetes.io/projected/3c99a879-00d9-42bd-8028-b04fef650b1a-kube-api-access-bmvf6\") pod \"3c99a879-00d9-42bd-8028-b04fef650b1a\" (UID: \"3c99a879-00d9-42bd-8028-b04fef650b1a\") " Jan 26 11:54:21 crc kubenswrapper[4867]: I0126 11:54:21.098036 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3c99a879-00d9-42bd-8028-b04fef650b1a-catalog-content\") pod \"3c99a879-00d9-42bd-8028-b04fef650b1a\" (UID: \"3c99a879-00d9-42bd-8028-b04fef650b1a\") " Jan 26 11:54:21 crc kubenswrapper[4867]: I0126 11:54:21.098113 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3c99a879-00d9-42bd-8028-b04fef650b1a-utilities\") pod 
\"3c99a879-00d9-42bd-8028-b04fef650b1a\" (UID: \"3c99a879-00d9-42bd-8028-b04fef650b1a\") " Jan 26 11:54:21 crc kubenswrapper[4867]: I0126 11:54:21.099380 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3c99a879-00d9-42bd-8028-b04fef650b1a-utilities" (OuterVolumeSpecName: "utilities") pod "3c99a879-00d9-42bd-8028-b04fef650b1a" (UID: "3c99a879-00d9-42bd-8028-b04fef650b1a"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 11:54:21 crc kubenswrapper[4867]: I0126 11:54:21.114478 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3c99a879-00d9-42bd-8028-b04fef650b1a-kube-api-access-bmvf6" (OuterVolumeSpecName: "kube-api-access-bmvf6") pod "3c99a879-00d9-42bd-8028-b04fef650b1a" (UID: "3c99a879-00d9-42bd-8028-b04fef650b1a"). InnerVolumeSpecName "kube-api-access-bmvf6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 11:54:21 crc kubenswrapper[4867]: I0126 11:54:21.123414 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3c99a879-00d9-42bd-8028-b04fef650b1a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "3c99a879-00d9-42bd-8028-b04fef650b1a" (UID: "3c99a879-00d9-42bd-8028-b04fef650b1a"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 11:54:21 crc kubenswrapper[4867]: I0126 11:54:21.200386 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bmvf6\" (UniqueName: \"kubernetes.io/projected/3c99a879-00d9-42bd-8028-b04fef650b1a-kube-api-access-bmvf6\") on node \"crc\" DevicePath \"\"" Jan 26 11:54:21 crc kubenswrapper[4867]: I0126 11:54:21.200421 4867 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3c99a879-00d9-42bd-8028-b04fef650b1a-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 11:54:21 crc kubenswrapper[4867]: I0126 11:54:21.200433 4867 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3c99a879-00d9-42bd-8028-b04fef650b1a-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 11:54:21 crc kubenswrapper[4867]: I0126 11:54:21.592181 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-98rlz" event={"ID":"3c99a879-00d9-42bd-8028-b04fef650b1a","Type":"ContainerDied","Data":"669e6c070a4629f5e5000f06de3676c52a4a3b2e9f46fd92ec3ae919bd6a612d"} Jan 26 11:54:21 crc kubenswrapper[4867]: I0126 11:54:21.592253 4867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-98rlz" Jan 26 11:54:21 crc kubenswrapper[4867]: I0126 11:54:21.592306 4867 scope.go:117] "RemoveContainer" containerID="3f2639a316912c03c6378ddeb3318b36e6e760d12b0ae3d59124a00f7377b7f5" Jan 26 11:54:21 crc kubenswrapper[4867]: I0126 11:54:21.627021 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-98rlz"] Jan 26 11:54:21 crc kubenswrapper[4867]: I0126 11:54:21.630649 4867 scope.go:117] "RemoveContainer" containerID="715494badab34053262d025f8ddcb296018231970b460d98f231f18112ecc2f9" Jan 26 11:54:21 crc kubenswrapper[4867]: I0126 11:54:21.633132 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-98rlz"] Jan 26 11:54:21 crc kubenswrapper[4867]: I0126 11:54:21.660990 4867 scope.go:117] "RemoveContainer" containerID="22b160d88a99e352d212fc8e703edbbe11733b37b02397d8dea9b01837fa67a2" Jan 26 11:54:22 crc kubenswrapper[4867]: I0126 11:54:22.582470 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3c99a879-00d9-42bd-8028-b04fef650b1a" path="/var/lib/kubelet/pods/3c99a879-00d9-42bd-8028-b04fef650b1a/volumes" Jan 26 11:54:36 crc kubenswrapper[4867]: I0126 11:54:36.293681 4867 patch_prober.go:28] interesting pod/machine-config-daemon-g6cth container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 11:54:36 crc kubenswrapper[4867]: I0126 11:54:36.294367 4867 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-g6cth" podUID="115cad9f-057f-4e63-b408-8fa7a358a191" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 11:55:06 crc kubenswrapper[4867]: 
I0126 11:55:06.293510 4867 patch_prober.go:28] interesting pod/machine-config-daemon-g6cth container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 11:55:06 crc kubenswrapper[4867]: I0126 11:55:06.294140 4867 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-g6cth" podUID="115cad9f-057f-4e63-b408-8fa7a358a191" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 11:55:06 crc kubenswrapper[4867]: I0126 11:55:06.294199 4867 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-g6cth" Jan 26 11:55:06 crc kubenswrapper[4867]: I0126 11:55:06.295166 4867 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"b468a5733b70f2daff8e0e41bb36084cdf82f55dbb0bac51d0d68f1ce3f30b64"} pod="openshift-machine-config-operator/machine-config-daemon-g6cth" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 26 11:55:06 crc kubenswrapper[4867]: I0126 11:55:06.295287 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-g6cth" podUID="115cad9f-057f-4e63-b408-8fa7a358a191" containerName="machine-config-daemon" containerID="cri-o://b468a5733b70f2daff8e0e41bb36084cdf82f55dbb0bac51d0d68f1ce3f30b64" gracePeriod=600 Jan 26 11:55:06 crc kubenswrapper[4867]: E0126 11:55:06.943410 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-g6cth_openshift-machine-config-operator(115cad9f-057f-4e63-b408-8fa7a358a191)\"" pod="openshift-machine-config-operator/machine-config-daemon-g6cth" podUID="115cad9f-057f-4e63-b408-8fa7a358a191" Jan 26 11:55:07 crc kubenswrapper[4867]: I0126 11:55:07.052715 4867 generic.go:334] "Generic (PLEG): container finished" podID="115cad9f-057f-4e63-b408-8fa7a358a191" containerID="b468a5733b70f2daff8e0e41bb36084cdf82f55dbb0bac51d0d68f1ce3f30b64" exitCode=0 Jan 26 11:55:07 crc kubenswrapper[4867]: I0126 11:55:07.052763 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-g6cth" event={"ID":"115cad9f-057f-4e63-b408-8fa7a358a191","Type":"ContainerDied","Data":"b468a5733b70f2daff8e0e41bb36084cdf82f55dbb0bac51d0d68f1ce3f30b64"} Jan 26 11:55:07 crc kubenswrapper[4867]: I0126 11:55:07.052793 4867 scope.go:117] "RemoveContainer" containerID="da5f1f9c98d3acd70884c452982ea6128d73027a10995d07ccd0b36f768b7132" Jan 26 11:55:07 crc kubenswrapper[4867]: I0126 11:55:07.053379 4867 scope.go:117] "RemoveContainer" containerID="b468a5733b70f2daff8e0e41bb36084cdf82f55dbb0bac51d0d68f1ce3f30b64" Jan 26 11:55:07 crc kubenswrapper[4867]: E0126 11:55:07.053595 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g6cth_openshift-machine-config-operator(115cad9f-057f-4e63-b408-8fa7a358a191)\"" pod="openshift-machine-config-operator/machine-config-daemon-g6cth" podUID="115cad9f-057f-4e63-b408-8fa7a358a191" Jan 26 11:55:20 crc kubenswrapper[4867]: I0126 11:55:20.564003 4867 scope.go:117] "RemoveContainer" containerID="b468a5733b70f2daff8e0e41bb36084cdf82f55dbb0bac51d0d68f1ce3f30b64" Jan 26 11:55:20 crc kubenswrapper[4867]: E0126 11:55:20.564726 4867 pod_workers.go:1301] "Error syncing pod, skipping" 
err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g6cth_openshift-machine-config-operator(115cad9f-057f-4e63-b408-8fa7a358a191)\"" pod="openshift-machine-config-operator/machine-config-daemon-g6cth" podUID="115cad9f-057f-4e63-b408-8fa7a358a191" Jan 26 11:55:31 crc kubenswrapper[4867]: I0126 11:55:31.572432 4867 scope.go:117] "RemoveContainer" containerID="b468a5733b70f2daff8e0e41bb36084cdf82f55dbb0bac51d0d68f1ce3f30b64" Jan 26 11:55:31 crc kubenswrapper[4867]: E0126 11:55:31.573376 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g6cth_openshift-machine-config-operator(115cad9f-057f-4e63-b408-8fa7a358a191)\"" pod="openshift-machine-config-operator/machine-config-daemon-g6cth" podUID="115cad9f-057f-4e63-b408-8fa7a358a191" Jan 26 11:55:43 crc kubenswrapper[4867]: I0126 11:55:43.563869 4867 scope.go:117] "RemoveContainer" containerID="b468a5733b70f2daff8e0e41bb36084cdf82f55dbb0bac51d0d68f1ce3f30b64" Jan 26 11:55:43 crc kubenswrapper[4867]: E0126 11:55:43.565063 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g6cth_openshift-machine-config-operator(115cad9f-057f-4e63-b408-8fa7a358a191)\"" pod="openshift-machine-config-operator/machine-config-daemon-g6cth" podUID="115cad9f-057f-4e63-b408-8fa7a358a191" Jan 26 11:55:48 crc kubenswrapper[4867]: I0126 11:55:48.453787 4867 generic.go:334] "Generic (PLEG): container finished" podID="b957382d-a2ad-4564-89ab-9c009ca57825" containerID="204261f5e3c375091d2ef89c3d2856559b0809f29865e0a14fd155507a3baceb" exitCode=0 Jan 26 11:55:48 crc 
kubenswrapper[4867]: I0126 11:55:48.453952 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-dcwln/must-gather-wvdxj" event={"ID":"b957382d-a2ad-4564-89ab-9c009ca57825","Type":"ContainerDied","Data":"204261f5e3c375091d2ef89c3d2856559b0809f29865e0a14fd155507a3baceb"} Jan 26 11:55:48 crc kubenswrapper[4867]: I0126 11:55:48.456341 4867 scope.go:117] "RemoveContainer" containerID="204261f5e3c375091d2ef89c3d2856559b0809f29865e0a14fd155507a3baceb" Jan 26 11:55:48 crc kubenswrapper[4867]: I0126 11:55:48.572074 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-dcwln_must-gather-wvdxj_b957382d-a2ad-4564-89ab-9c009ca57825/gather/0.log" Jan 26 11:55:55 crc kubenswrapper[4867]: I0126 11:55:55.802677 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-dcwln/must-gather-wvdxj"] Jan 26 11:55:55 crc kubenswrapper[4867]: I0126 11:55:55.803479 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-must-gather-dcwln/must-gather-wvdxj" podUID="b957382d-a2ad-4564-89ab-9c009ca57825" containerName="copy" containerID="cri-o://016ae920305757369e89f43ee0f025e51a92c2336c0c0789e828175d31cd2d24" gracePeriod=2 Jan 26 11:55:55 crc kubenswrapper[4867]: I0126 11:55:55.815836 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-dcwln/must-gather-wvdxj"] Jan 26 11:55:56 crc kubenswrapper[4867]: I0126 11:55:56.234577 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-dcwln_must-gather-wvdxj_b957382d-a2ad-4564-89ab-9c009ca57825/copy/0.log" Jan 26 11:55:56 crc kubenswrapper[4867]: I0126 11:55:56.234991 4867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-dcwln/must-gather-wvdxj" Jan 26 11:55:56 crc kubenswrapper[4867]: I0126 11:55:56.412108 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rqbsv\" (UniqueName: \"kubernetes.io/projected/b957382d-a2ad-4564-89ab-9c009ca57825-kube-api-access-rqbsv\") pod \"b957382d-a2ad-4564-89ab-9c009ca57825\" (UID: \"b957382d-a2ad-4564-89ab-9c009ca57825\") " Jan 26 11:55:56 crc kubenswrapper[4867]: I0126 11:55:56.412621 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/b957382d-a2ad-4564-89ab-9c009ca57825-must-gather-output\") pod \"b957382d-a2ad-4564-89ab-9c009ca57825\" (UID: \"b957382d-a2ad-4564-89ab-9c009ca57825\") " Jan 26 11:55:56 crc kubenswrapper[4867]: I0126 11:55:56.418685 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b957382d-a2ad-4564-89ab-9c009ca57825-kube-api-access-rqbsv" (OuterVolumeSpecName: "kube-api-access-rqbsv") pod "b957382d-a2ad-4564-89ab-9c009ca57825" (UID: "b957382d-a2ad-4564-89ab-9c009ca57825"). InnerVolumeSpecName "kube-api-access-rqbsv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 11:55:56 crc kubenswrapper[4867]: I0126 11:55:56.514893 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rqbsv\" (UniqueName: \"kubernetes.io/projected/b957382d-a2ad-4564-89ab-9c009ca57825-kube-api-access-rqbsv\") on node \"crc\" DevicePath \"\"" Jan 26 11:55:56 crc kubenswrapper[4867]: I0126 11:55:56.538645 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-dcwln_must-gather-wvdxj_b957382d-a2ad-4564-89ab-9c009ca57825/copy/0.log" Jan 26 11:55:56 crc kubenswrapper[4867]: I0126 11:55:56.539306 4867 generic.go:334] "Generic (PLEG): container finished" podID="b957382d-a2ad-4564-89ab-9c009ca57825" containerID="016ae920305757369e89f43ee0f025e51a92c2336c0c0789e828175d31cd2d24" exitCode=143 Jan 26 11:55:56 crc kubenswrapper[4867]: I0126 11:55:56.539365 4867 scope.go:117] "RemoveContainer" containerID="016ae920305757369e89f43ee0f025e51a92c2336c0c0789e828175d31cd2d24" Jan 26 11:55:56 crc kubenswrapper[4867]: I0126 11:55:56.539369 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-dcwln/must-gather-wvdxj" Jan 26 11:55:56 crc kubenswrapper[4867]: I0126 11:55:56.555909 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b957382d-a2ad-4564-89ab-9c009ca57825-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "b957382d-a2ad-4564-89ab-9c009ca57825" (UID: "b957382d-a2ad-4564-89ab-9c009ca57825"). InnerVolumeSpecName "must-gather-output". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 11:55:56 crc kubenswrapper[4867]: I0126 11:55:56.566401 4867 scope.go:117] "RemoveContainer" containerID="204261f5e3c375091d2ef89c3d2856559b0809f29865e0a14fd155507a3baceb" Jan 26 11:55:56 crc kubenswrapper[4867]: I0126 11:55:56.601833 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b957382d-a2ad-4564-89ab-9c009ca57825" path="/var/lib/kubelet/pods/b957382d-a2ad-4564-89ab-9c009ca57825/volumes" Jan 26 11:55:56 crc kubenswrapper[4867]: I0126 11:55:56.617116 4867 reconciler_common.go:293] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/b957382d-a2ad-4564-89ab-9c009ca57825-must-gather-output\") on node \"crc\" DevicePath \"\"" Jan 26 11:55:56 crc kubenswrapper[4867]: I0126 11:55:56.636633 4867 scope.go:117] "RemoveContainer" containerID="016ae920305757369e89f43ee0f025e51a92c2336c0c0789e828175d31cd2d24" Jan 26 11:55:56 crc kubenswrapper[4867]: E0126 11:55:56.637304 4867 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"016ae920305757369e89f43ee0f025e51a92c2336c0c0789e828175d31cd2d24\": container with ID starting with 016ae920305757369e89f43ee0f025e51a92c2336c0c0789e828175d31cd2d24 not found: ID does not exist" containerID="016ae920305757369e89f43ee0f025e51a92c2336c0c0789e828175d31cd2d24" Jan 26 11:55:56 crc kubenswrapper[4867]: I0126 11:55:56.637350 4867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"016ae920305757369e89f43ee0f025e51a92c2336c0c0789e828175d31cd2d24"} err="failed to get container status \"016ae920305757369e89f43ee0f025e51a92c2336c0c0789e828175d31cd2d24\": rpc error: code = NotFound desc = could not find container \"016ae920305757369e89f43ee0f025e51a92c2336c0c0789e828175d31cd2d24\": container with ID starting with 016ae920305757369e89f43ee0f025e51a92c2336c0c0789e828175d31cd2d24 not found: ID does not exist" 
Jan 26 11:55:56 crc kubenswrapper[4867]: I0126 11:55:56.637379 4867 scope.go:117] "RemoveContainer" containerID="204261f5e3c375091d2ef89c3d2856559b0809f29865e0a14fd155507a3baceb" Jan 26 11:55:56 crc kubenswrapper[4867]: E0126 11:55:56.637777 4867 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"204261f5e3c375091d2ef89c3d2856559b0809f29865e0a14fd155507a3baceb\": container with ID starting with 204261f5e3c375091d2ef89c3d2856559b0809f29865e0a14fd155507a3baceb not found: ID does not exist" containerID="204261f5e3c375091d2ef89c3d2856559b0809f29865e0a14fd155507a3baceb" Jan 26 11:55:56 crc kubenswrapper[4867]: I0126 11:55:56.637817 4867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"204261f5e3c375091d2ef89c3d2856559b0809f29865e0a14fd155507a3baceb"} err="failed to get container status \"204261f5e3c375091d2ef89c3d2856559b0809f29865e0a14fd155507a3baceb\": rpc error: code = NotFound desc = could not find container \"204261f5e3c375091d2ef89c3d2856559b0809f29865e0a14fd155507a3baceb\": container with ID starting with 204261f5e3c375091d2ef89c3d2856559b0809f29865e0a14fd155507a3baceb not found: ID does not exist" Jan 26 11:55:58 crc kubenswrapper[4867]: I0126 11:55:58.564571 4867 scope.go:117] "RemoveContainer" containerID="b468a5733b70f2daff8e0e41bb36084cdf82f55dbb0bac51d0d68f1ce3f30b64" Jan 26 11:55:58 crc kubenswrapper[4867]: E0126 11:55:58.565758 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g6cth_openshift-machine-config-operator(115cad9f-057f-4e63-b408-8fa7a358a191)\"" pod="openshift-machine-config-operator/machine-config-daemon-g6cth" podUID="115cad9f-057f-4e63-b408-8fa7a358a191" Jan 26 11:56:10 crc kubenswrapper[4867]: I0126 11:56:10.571503 4867 scope.go:117] 
"RemoveContainer" containerID="b468a5733b70f2daff8e0e41bb36084cdf82f55dbb0bac51d0d68f1ce3f30b64" Jan 26 11:56:10 crc kubenswrapper[4867]: E0126 11:56:10.572399 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g6cth_openshift-machine-config-operator(115cad9f-057f-4e63-b408-8fa7a358a191)\"" pod="openshift-machine-config-operator/machine-config-daemon-g6cth" podUID="115cad9f-057f-4e63-b408-8fa7a358a191" Jan 26 11:56:25 crc kubenswrapper[4867]: I0126 11:56:25.563947 4867 scope.go:117] "RemoveContainer" containerID="b468a5733b70f2daff8e0e41bb36084cdf82f55dbb0bac51d0d68f1ce3f30b64" Jan 26 11:56:25 crc kubenswrapper[4867]: E0126 11:56:25.564732 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g6cth_openshift-machine-config-operator(115cad9f-057f-4e63-b408-8fa7a358a191)\"" pod="openshift-machine-config-operator/machine-config-daemon-g6cth" podUID="115cad9f-057f-4e63-b408-8fa7a358a191" Jan 26 11:56:40 crc kubenswrapper[4867]: I0126 11:56:40.570954 4867 scope.go:117] "RemoveContainer" containerID="b468a5733b70f2daff8e0e41bb36084cdf82f55dbb0bac51d0d68f1ce3f30b64" Jan 26 11:56:40 crc kubenswrapper[4867]: E0126 11:56:40.572307 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g6cth_openshift-machine-config-operator(115cad9f-057f-4e63-b408-8fa7a358a191)\"" pod="openshift-machine-config-operator/machine-config-daemon-g6cth" podUID="115cad9f-057f-4e63-b408-8fa7a358a191" Jan 26 11:56:55 crc kubenswrapper[4867]: I0126 11:56:55.563800 
4867 scope.go:117] "RemoveContainer" containerID="b468a5733b70f2daff8e0e41bb36084cdf82f55dbb0bac51d0d68f1ce3f30b64" Jan 26 11:56:55 crc kubenswrapper[4867]: E0126 11:56:55.564746 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g6cth_openshift-machine-config-operator(115cad9f-057f-4e63-b408-8fa7a358a191)\"" pod="openshift-machine-config-operator/machine-config-daemon-g6cth" podUID="115cad9f-057f-4e63-b408-8fa7a358a191" Jan 26 11:57:07 crc kubenswrapper[4867]: I0126 11:57:07.564343 4867 scope.go:117] "RemoveContainer" containerID="b468a5733b70f2daff8e0e41bb36084cdf82f55dbb0bac51d0d68f1ce3f30b64" Jan 26 11:57:07 crc kubenswrapper[4867]: E0126 11:57:07.565453 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g6cth_openshift-machine-config-operator(115cad9f-057f-4e63-b408-8fa7a358a191)\"" pod="openshift-machine-config-operator/machine-config-daemon-g6cth" podUID="115cad9f-057f-4e63-b408-8fa7a358a191" Jan 26 11:57:22 crc kubenswrapper[4867]: I0126 11:57:22.564336 4867 scope.go:117] "RemoveContainer" containerID="b468a5733b70f2daff8e0e41bb36084cdf82f55dbb0bac51d0d68f1ce3f30b64" Jan 26 11:57:22 crc kubenswrapper[4867]: E0126 11:57:22.565115 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g6cth_openshift-machine-config-operator(115cad9f-057f-4e63-b408-8fa7a358a191)\"" pod="openshift-machine-config-operator/machine-config-daemon-g6cth" podUID="115cad9f-057f-4e63-b408-8fa7a358a191" Jan 26 11:57:37 crc kubenswrapper[4867]: I0126 
11:57:37.564434 4867 scope.go:117] "RemoveContainer" containerID="b468a5733b70f2daff8e0e41bb36084cdf82f55dbb0bac51d0d68f1ce3f30b64" Jan 26 11:57:37 crc kubenswrapper[4867]: E0126 11:57:37.565192 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g6cth_openshift-machine-config-operator(115cad9f-057f-4e63-b408-8fa7a358a191)\"" pod="openshift-machine-config-operator/machine-config-daemon-g6cth" podUID="115cad9f-057f-4e63-b408-8fa7a358a191" Jan 26 11:57:48 crc kubenswrapper[4867]: I0126 11:57:48.564551 4867 scope.go:117] "RemoveContainer" containerID="b468a5733b70f2daff8e0e41bb36084cdf82f55dbb0bac51d0d68f1ce3f30b64" Jan 26 11:57:48 crc kubenswrapper[4867]: E0126 11:57:48.565444 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g6cth_openshift-machine-config-operator(115cad9f-057f-4e63-b408-8fa7a358a191)\"" pod="openshift-machine-config-operator/machine-config-daemon-g6cth" podUID="115cad9f-057f-4e63-b408-8fa7a358a191" Jan 26 11:58:03 crc kubenswrapper[4867]: I0126 11:58:03.564096 4867 scope.go:117] "RemoveContainer" containerID="b468a5733b70f2daff8e0e41bb36084cdf82f55dbb0bac51d0d68f1ce3f30b64" Jan 26 11:58:03 crc kubenswrapper[4867]: E0126 11:58:03.564966 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g6cth_openshift-machine-config-operator(115cad9f-057f-4e63-b408-8fa7a358a191)\"" pod="openshift-machine-config-operator/machine-config-daemon-g6cth" podUID="115cad9f-057f-4e63-b408-8fa7a358a191" Jan 26 11:58:14 crc 
kubenswrapper[4867]: I0126 11:58:14.564860 4867 scope.go:117] "RemoveContainer" containerID="b468a5733b70f2daff8e0e41bb36084cdf82f55dbb0bac51d0d68f1ce3f30b64" Jan 26 11:58:14 crc kubenswrapper[4867]: E0126 11:58:14.565803 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g6cth_openshift-machine-config-operator(115cad9f-057f-4e63-b408-8fa7a358a191)\"" pod="openshift-machine-config-operator/machine-config-daemon-g6cth" podUID="115cad9f-057f-4e63-b408-8fa7a358a191" Jan 26 11:58:26 crc kubenswrapper[4867]: I0126 11:58:26.564296 4867 scope.go:117] "RemoveContainer" containerID="b468a5733b70f2daff8e0e41bb36084cdf82f55dbb0bac51d0d68f1ce3f30b64" Jan 26 11:58:26 crc kubenswrapper[4867]: E0126 11:58:26.565064 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g6cth_openshift-machine-config-operator(115cad9f-057f-4e63-b408-8fa7a358a191)\"" pod="openshift-machine-config-operator/machine-config-daemon-g6cth" podUID="115cad9f-057f-4e63-b408-8fa7a358a191" Jan 26 11:58:41 crc kubenswrapper[4867]: I0126 11:58:41.564170 4867 scope.go:117] "RemoveContainer" containerID="b468a5733b70f2daff8e0e41bb36084cdf82f55dbb0bac51d0d68f1ce3f30b64" Jan 26 11:58:41 crc kubenswrapper[4867]: E0126 11:58:41.565488 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g6cth_openshift-machine-config-operator(115cad9f-057f-4e63-b408-8fa7a358a191)\"" pod="openshift-machine-config-operator/machine-config-daemon-g6cth" podUID="115cad9f-057f-4e63-b408-8fa7a358a191" Jan 
26 11:58:47 crc kubenswrapper[4867]: I0126 11:58:47.408934 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-drwwh/must-gather-cv25j"] Jan 26 11:58:47 crc kubenswrapper[4867]: E0126 11:58:47.410169 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3c99a879-00d9-42bd-8028-b04fef650b1a" containerName="extract-utilities" Jan 26 11:58:47 crc kubenswrapper[4867]: I0126 11:58:47.410190 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="3c99a879-00d9-42bd-8028-b04fef650b1a" containerName="extract-utilities" Jan 26 11:58:47 crc kubenswrapper[4867]: E0126 11:58:47.410205 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3c99a879-00d9-42bd-8028-b04fef650b1a" containerName="registry-server" Jan 26 11:58:47 crc kubenswrapper[4867]: I0126 11:58:47.410213 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="3c99a879-00d9-42bd-8028-b04fef650b1a" containerName="registry-server" Jan 26 11:58:47 crc kubenswrapper[4867]: E0126 11:58:47.410245 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="809aba51-b409-4d27-a41d-b3bb113ffafd" containerName="registry-server" Jan 26 11:58:47 crc kubenswrapper[4867]: I0126 11:58:47.410253 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="809aba51-b409-4d27-a41d-b3bb113ffafd" containerName="registry-server" Jan 26 11:58:47 crc kubenswrapper[4867]: E0126 11:58:47.410271 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3c99a879-00d9-42bd-8028-b04fef650b1a" containerName="extract-content" Jan 26 11:58:47 crc kubenswrapper[4867]: I0126 11:58:47.410278 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="3c99a879-00d9-42bd-8028-b04fef650b1a" containerName="extract-content" Jan 26 11:58:47 crc kubenswrapper[4867]: E0126 11:58:47.410293 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4391e0f-22f1-48fc-b1d4-e9faf7c8a454" containerName="extract-content" Jan 26 11:58:47 crc 
kubenswrapper[4867]: I0126 11:58:47.410299 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4391e0f-22f1-48fc-b1d4-e9faf7c8a454" containerName="extract-content" Jan 26 11:58:47 crc kubenswrapper[4867]: E0126 11:58:47.410316 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4391e0f-22f1-48fc-b1d4-e9faf7c8a454" containerName="registry-server" Jan 26 11:58:47 crc kubenswrapper[4867]: I0126 11:58:47.410324 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4391e0f-22f1-48fc-b1d4-e9faf7c8a454" containerName="registry-server" Jan 26 11:58:47 crc kubenswrapper[4867]: E0126 11:58:47.410339 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b957382d-a2ad-4564-89ab-9c009ca57825" containerName="copy" Jan 26 11:58:47 crc kubenswrapper[4867]: I0126 11:58:47.410346 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="b957382d-a2ad-4564-89ab-9c009ca57825" containerName="copy" Jan 26 11:58:47 crc kubenswrapper[4867]: E0126 11:58:47.410359 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4391e0f-22f1-48fc-b1d4-e9faf7c8a454" containerName="extract-utilities" Jan 26 11:58:47 crc kubenswrapper[4867]: I0126 11:58:47.410366 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4391e0f-22f1-48fc-b1d4-e9faf7c8a454" containerName="extract-utilities" Jan 26 11:58:47 crc kubenswrapper[4867]: E0126 11:58:47.410382 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="809aba51-b409-4d27-a41d-b3bb113ffafd" containerName="extract-content" Jan 26 11:58:47 crc kubenswrapper[4867]: I0126 11:58:47.410389 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="809aba51-b409-4d27-a41d-b3bb113ffafd" containerName="extract-content" Jan 26 11:58:47 crc kubenswrapper[4867]: E0126 11:58:47.410404 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="809aba51-b409-4d27-a41d-b3bb113ffafd" containerName="extract-utilities" Jan 26 11:58:47 crc kubenswrapper[4867]: I0126 
11:58:47.410411 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="809aba51-b409-4d27-a41d-b3bb113ffafd" containerName="extract-utilities" Jan 26 11:58:47 crc kubenswrapper[4867]: E0126 11:58:47.410434 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b957382d-a2ad-4564-89ab-9c009ca57825" containerName="gather" Jan 26 11:58:47 crc kubenswrapper[4867]: I0126 11:58:47.410441 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="b957382d-a2ad-4564-89ab-9c009ca57825" containerName="gather" Jan 26 11:58:47 crc kubenswrapper[4867]: I0126 11:58:47.410717 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="b957382d-a2ad-4564-89ab-9c009ca57825" containerName="copy" Jan 26 11:58:47 crc kubenswrapper[4867]: I0126 11:58:47.410745 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="b957382d-a2ad-4564-89ab-9c009ca57825" containerName="gather" Jan 26 11:58:47 crc kubenswrapper[4867]: I0126 11:58:47.410761 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4391e0f-22f1-48fc-b1d4-e9faf7c8a454" containerName="registry-server" Jan 26 11:58:47 crc kubenswrapper[4867]: I0126 11:58:47.410783 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="809aba51-b409-4d27-a41d-b3bb113ffafd" containerName="registry-server" Jan 26 11:58:47 crc kubenswrapper[4867]: I0126 11:58:47.410796 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="3c99a879-00d9-42bd-8028-b04fef650b1a" containerName="registry-server" Jan 26 11:58:47 crc kubenswrapper[4867]: I0126 11:58:47.412069 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-drwwh/must-gather-cv25j" Jan 26 11:58:47 crc kubenswrapper[4867]: I0126 11:58:47.414416 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-drwwh"/"openshift-service-ca.crt" Jan 26 11:58:47 crc kubenswrapper[4867]: I0126 11:58:47.414531 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-drwwh"/"kube-root-ca.crt" Jan 26 11:58:47 crc kubenswrapper[4867]: I0126 11:58:47.439966 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-drwwh/must-gather-cv25j"] Jan 26 11:58:47 crc kubenswrapper[4867]: I0126 11:58:47.580656 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/bd819386-7af9-4fe3-b59d-b70bfa3cfac3-must-gather-output\") pod \"must-gather-cv25j\" (UID: \"bd819386-7af9-4fe3-b59d-b70bfa3cfac3\") " pod="openshift-must-gather-drwwh/must-gather-cv25j" Jan 26 11:58:47 crc kubenswrapper[4867]: I0126 11:58:47.580790 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zkr69\" (UniqueName: \"kubernetes.io/projected/bd819386-7af9-4fe3-b59d-b70bfa3cfac3-kube-api-access-zkr69\") pod \"must-gather-cv25j\" (UID: \"bd819386-7af9-4fe3-b59d-b70bfa3cfac3\") " pod="openshift-must-gather-drwwh/must-gather-cv25j" Jan 26 11:58:47 crc kubenswrapper[4867]: I0126 11:58:47.683075 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/bd819386-7af9-4fe3-b59d-b70bfa3cfac3-must-gather-output\") pod \"must-gather-cv25j\" (UID: \"bd819386-7af9-4fe3-b59d-b70bfa3cfac3\") " pod="openshift-must-gather-drwwh/must-gather-cv25j" Jan 26 11:58:47 crc kubenswrapper[4867]: I0126 11:58:47.683260 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-zkr69\" (UniqueName: \"kubernetes.io/projected/bd819386-7af9-4fe3-b59d-b70bfa3cfac3-kube-api-access-zkr69\") pod \"must-gather-cv25j\" (UID: \"bd819386-7af9-4fe3-b59d-b70bfa3cfac3\") " pod="openshift-must-gather-drwwh/must-gather-cv25j" Jan 26 11:58:47 crc kubenswrapper[4867]: I0126 11:58:47.683668 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/bd819386-7af9-4fe3-b59d-b70bfa3cfac3-must-gather-output\") pod \"must-gather-cv25j\" (UID: \"bd819386-7af9-4fe3-b59d-b70bfa3cfac3\") " pod="openshift-must-gather-drwwh/must-gather-cv25j" Jan 26 11:58:47 crc kubenswrapper[4867]: I0126 11:58:47.711707 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zkr69\" (UniqueName: \"kubernetes.io/projected/bd819386-7af9-4fe3-b59d-b70bfa3cfac3-kube-api-access-zkr69\") pod \"must-gather-cv25j\" (UID: \"bd819386-7af9-4fe3-b59d-b70bfa3cfac3\") " pod="openshift-must-gather-drwwh/must-gather-cv25j" Jan 26 11:58:47 crc kubenswrapper[4867]: I0126 11:58:47.735865 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-drwwh/must-gather-cv25j" Jan 26 11:58:48 crc kubenswrapper[4867]: I0126 11:58:48.275620 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-drwwh/must-gather-cv25j"] Jan 26 11:58:49 crc kubenswrapper[4867]: I0126 11:58:49.077529 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-drwwh/must-gather-cv25j" event={"ID":"bd819386-7af9-4fe3-b59d-b70bfa3cfac3","Type":"ContainerStarted","Data":"89e282fb0686d2b2c388feece24ce78e41b706774624017edbe6c54080e0cdaf"} Jan 26 11:58:49 crc kubenswrapper[4867]: I0126 11:58:49.078175 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-drwwh/must-gather-cv25j" event={"ID":"bd819386-7af9-4fe3-b59d-b70bfa3cfac3","Type":"ContainerStarted","Data":"54a9b403e8d6ad973cefd5f1e52119536ccc6f6d10a7f3fed630f7f8c5ab21da"} Jan 26 11:58:49 crc kubenswrapper[4867]: I0126 11:58:49.078235 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-drwwh/must-gather-cv25j" event={"ID":"bd819386-7af9-4fe3-b59d-b70bfa3cfac3","Type":"ContainerStarted","Data":"a770c68562059389d26894b62e3876cd3064dca16a1a5180af84ecd0fe0ce347"} Jan 26 11:58:49 crc kubenswrapper[4867]: I0126 11:58:49.099001 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-drwwh/must-gather-cv25j" podStartSLOduration=2.098975982 podStartE2EDuration="2.098975982s" podCreationTimestamp="2026-01-26 11:58:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 11:58:49.09672627 +0000 UTC m=+2478.795301180" watchObservedRunningTime="2026-01-26 11:58:49.098975982 +0000 UTC m=+2478.797550892" Jan 26 11:58:52 crc kubenswrapper[4867]: I0126 11:58:52.014417 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-drwwh/crc-debug-kjrj5"] Jan 26 11:58:52 crc kubenswrapper[4867]: 
I0126 11:58:52.016181 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-drwwh/crc-debug-kjrj5" Jan 26 11:58:52 crc kubenswrapper[4867]: I0126 11:58:52.018518 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-drwwh"/"default-dockercfg-djz4r" Jan 26 11:58:52 crc kubenswrapper[4867]: I0126 11:58:52.175684 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pjctd\" (UniqueName: \"kubernetes.io/projected/229438e1-a2fb-461e-834d-c1a0250c4596-kube-api-access-pjctd\") pod \"crc-debug-kjrj5\" (UID: \"229438e1-a2fb-461e-834d-c1a0250c4596\") " pod="openshift-must-gather-drwwh/crc-debug-kjrj5" Jan 26 11:58:52 crc kubenswrapper[4867]: I0126 11:58:52.175796 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/229438e1-a2fb-461e-834d-c1a0250c4596-host\") pod \"crc-debug-kjrj5\" (UID: \"229438e1-a2fb-461e-834d-c1a0250c4596\") " pod="openshift-must-gather-drwwh/crc-debug-kjrj5" Jan 26 11:58:52 crc kubenswrapper[4867]: I0126 11:58:52.277143 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pjctd\" (UniqueName: \"kubernetes.io/projected/229438e1-a2fb-461e-834d-c1a0250c4596-kube-api-access-pjctd\") pod \"crc-debug-kjrj5\" (UID: \"229438e1-a2fb-461e-834d-c1a0250c4596\") " pod="openshift-must-gather-drwwh/crc-debug-kjrj5" Jan 26 11:58:52 crc kubenswrapper[4867]: I0126 11:58:52.277233 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/229438e1-a2fb-461e-834d-c1a0250c4596-host\") pod \"crc-debug-kjrj5\" (UID: \"229438e1-a2fb-461e-834d-c1a0250c4596\") " pod="openshift-must-gather-drwwh/crc-debug-kjrj5" Jan 26 11:58:52 crc kubenswrapper[4867]: I0126 11:58:52.277302 4867 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/229438e1-a2fb-461e-834d-c1a0250c4596-host\") pod \"crc-debug-kjrj5\" (UID: \"229438e1-a2fb-461e-834d-c1a0250c4596\") " pod="openshift-must-gather-drwwh/crc-debug-kjrj5" Jan 26 11:58:52 crc kubenswrapper[4867]: I0126 11:58:52.296957 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pjctd\" (UniqueName: \"kubernetes.io/projected/229438e1-a2fb-461e-834d-c1a0250c4596-kube-api-access-pjctd\") pod \"crc-debug-kjrj5\" (UID: \"229438e1-a2fb-461e-834d-c1a0250c4596\") " pod="openshift-must-gather-drwwh/crc-debug-kjrj5" Jan 26 11:58:52 crc kubenswrapper[4867]: I0126 11:58:52.337527 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-drwwh/crc-debug-kjrj5" Jan 26 11:58:53 crc kubenswrapper[4867]: I0126 11:58:53.110656 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-drwwh/crc-debug-kjrj5" event={"ID":"229438e1-a2fb-461e-834d-c1a0250c4596","Type":"ContainerStarted","Data":"7201e6f91582cacc2e9dd770e0f58b52a50e3986db8e1c8080473bad8917d9e0"} Jan 26 11:58:53 crc kubenswrapper[4867]: I0126 11:58:53.111185 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-drwwh/crc-debug-kjrj5" event={"ID":"229438e1-a2fb-461e-834d-c1a0250c4596","Type":"ContainerStarted","Data":"a173ff598fc2e3e5d3c582936e76154a4785c7288996fa5acc5941a1f0715b28"} Jan 26 11:58:53 crc kubenswrapper[4867]: I0126 11:58:53.137150 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-drwwh/crc-debug-kjrj5" podStartSLOduration=1.1371289980000001 podStartE2EDuration="1.137128998s" podCreationTimestamp="2026-01-26 11:58:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 11:58:53.131081853 +0000 UTC m=+2482.829656763" watchObservedRunningTime="2026-01-26 
11:58:53.137128998 +0000 UTC m=+2482.835703908" Jan 26 11:58:56 crc kubenswrapper[4867]: I0126 11:58:56.564256 4867 scope.go:117] "RemoveContainer" containerID="b468a5733b70f2daff8e0e41bb36084cdf82f55dbb0bac51d0d68f1ce3f30b64" Jan 26 11:58:56 crc kubenswrapper[4867]: E0126 11:58:56.566560 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g6cth_openshift-machine-config-operator(115cad9f-057f-4e63-b408-8fa7a358a191)\"" pod="openshift-machine-config-operator/machine-config-daemon-g6cth" podUID="115cad9f-057f-4e63-b408-8fa7a358a191" Jan 26 11:59:09 crc kubenswrapper[4867]: I0126 11:59:09.563960 4867 scope.go:117] "RemoveContainer" containerID="b468a5733b70f2daff8e0e41bb36084cdf82f55dbb0bac51d0d68f1ce3f30b64" Jan 26 11:59:09 crc kubenswrapper[4867]: E0126 11:59:09.564759 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g6cth_openshift-machine-config-operator(115cad9f-057f-4e63-b408-8fa7a358a191)\"" pod="openshift-machine-config-operator/machine-config-daemon-g6cth" podUID="115cad9f-057f-4e63-b408-8fa7a358a191" Jan 26 11:59:21 crc kubenswrapper[4867]: I0126 11:59:21.564169 4867 scope.go:117] "RemoveContainer" containerID="b468a5733b70f2daff8e0e41bb36084cdf82f55dbb0bac51d0d68f1ce3f30b64" Jan 26 11:59:21 crc kubenswrapper[4867]: E0126 11:59:21.564852 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g6cth_openshift-machine-config-operator(115cad9f-057f-4e63-b408-8fa7a358a191)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-g6cth" podUID="115cad9f-057f-4e63-b408-8fa7a358a191" Jan 26 11:59:27 crc kubenswrapper[4867]: I0126 11:59:27.393662 4867 generic.go:334] "Generic (PLEG): container finished" podID="229438e1-a2fb-461e-834d-c1a0250c4596" containerID="7201e6f91582cacc2e9dd770e0f58b52a50e3986db8e1c8080473bad8917d9e0" exitCode=0 Jan 26 11:59:27 crc kubenswrapper[4867]: I0126 11:59:27.393866 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-drwwh/crc-debug-kjrj5" event={"ID":"229438e1-a2fb-461e-834d-c1a0250c4596","Type":"ContainerDied","Data":"7201e6f91582cacc2e9dd770e0f58b52a50e3986db8e1c8080473bad8917d9e0"} Jan 26 11:59:28 crc kubenswrapper[4867]: I0126 11:59:28.542491 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-drwwh/crc-debug-kjrj5" Jan 26 11:59:28 crc kubenswrapper[4867]: I0126 11:59:28.583815 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-drwwh/crc-debug-kjrj5"] Jan 26 11:59:28 crc kubenswrapper[4867]: I0126 11:59:28.595834 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-drwwh/crc-debug-kjrj5"] Jan 26 11:59:28 crc kubenswrapper[4867]: I0126 11:59:28.684207 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pjctd\" (UniqueName: \"kubernetes.io/projected/229438e1-a2fb-461e-834d-c1a0250c4596-kube-api-access-pjctd\") pod \"229438e1-a2fb-461e-834d-c1a0250c4596\" (UID: \"229438e1-a2fb-461e-834d-c1a0250c4596\") " Jan 26 11:59:28 crc kubenswrapper[4867]: I0126 11:59:28.684523 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/229438e1-a2fb-461e-834d-c1a0250c4596-host\") pod \"229438e1-a2fb-461e-834d-c1a0250c4596\" (UID: \"229438e1-a2fb-461e-834d-c1a0250c4596\") " Jan 26 11:59:28 crc kubenswrapper[4867]: I0126 11:59:28.684595 4867 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/229438e1-a2fb-461e-834d-c1a0250c4596-host" (OuterVolumeSpecName: "host") pod "229438e1-a2fb-461e-834d-c1a0250c4596" (UID: "229438e1-a2fb-461e-834d-c1a0250c4596"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 11:59:28 crc kubenswrapper[4867]: I0126 11:59:28.685077 4867 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/229438e1-a2fb-461e-834d-c1a0250c4596-host\") on node \"crc\" DevicePath \"\"" Jan 26 11:59:28 crc kubenswrapper[4867]: I0126 11:59:28.694972 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/229438e1-a2fb-461e-834d-c1a0250c4596-kube-api-access-pjctd" (OuterVolumeSpecName: "kube-api-access-pjctd") pod "229438e1-a2fb-461e-834d-c1a0250c4596" (UID: "229438e1-a2fb-461e-834d-c1a0250c4596"). InnerVolumeSpecName "kube-api-access-pjctd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 11:59:28 crc kubenswrapper[4867]: I0126 11:59:28.787158 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pjctd\" (UniqueName: \"kubernetes.io/projected/229438e1-a2fb-461e-834d-c1a0250c4596-kube-api-access-pjctd\") on node \"crc\" DevicePath \"\"" Jan 26 11:59:29 crc kubenswrapper[4867]: I0126 11:59:29.410554 4867 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a173ff598fc2e3e5d3c582936e76154a4785c7288996fa5acc5941a1f0715b28" Jan 26 11:59:29 crc kubenswrapper[4867]: I0126 11:59:29.410870 4867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-drwwh/crc-debug-kjrj5" Jan 26 11:59:29 crc kubenswrapper[4867]: E0126 11:59:29.497483 4867 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod229438e1_a2fb_461e_834d_c1a0250c4596.slice/crio-a173ff598fc2e3e5d3c582936e76154a4785c7288996fa5acc5941a1f0715b28\": RecentStats: unable to find data in memory cache]" Jan 26 11:59:29 crc kubenswrapper[4867]: I0126 11:59:29.817100 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-drwwh/crc-debug-sf52k"] Jan 26 11:59:29 crc kubenswrapper[4867]: E0126 11:59:29.817741 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="229438e1-a2fb-461e-834d-c1a0250c4596" containerName="container-00" Jan 26 11:59:29 crc kubenswrapper[4867]: I0126 11:59:29.817755 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="229438e1-a2fb-461e-834d-c1a0250c4596" containerName="container-00" Jan 26 11:59:29 crc kubenswrapper[4867]: I0126 11:59:29.817960 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="229438e1-a2fb-461e-834d-c1a0250c4596" containerName="container-00" Jan 26 11:59:29 crc kubenswrapper[4867]: I0126 11:59:29.818575 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-drwwh/crc-debug-sf52k" Jan 26 11:59:29 crc kubenswrapper[4867]: I0126 11:59:29.820832 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-drwwh"/"default-dockercfg-djz4r" Jan 26 11:59:29 crc kubenswrapper[4867]: I0126 11:59:29.906064 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7lgc9\" (UniqueName: \"kubernetes.io/projected/5d3f5863-3449-4564-aa82-024fc255d3bc-kube-api-access-7lgc9\") pod \"crc-debug-sf52k\" (UID: \"5d3f5863-3449-4564-aa82-024fc255d3bc\") " pod="openshift-must-gather-drwwh/crc-debug-sf52k" Jan 26 11:59:29 crc kubenswrapper[4867]: I0126 11:59:29.906313 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/5d3f5863-3449-4564-aa82-024fc255d3bc-host\") pod \"crc-debug-sf52k\" (UID: \"5d3f5863-3449-4564-aa82-024fc255d3bc\") " pod="openshift-must-gather-drwwh/crc-debug-sf52k" Jan 26 11:59:30 crc kubenswrapper[4867]: I0126 11:59:30.008221 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/5d3f5863-3449-4564-aa82-024fc255d3bc-host\") pod \"crc-debug-sf52k\" (UID: \"5d3f5863-3449-4564-aa82-024fc255d3bc\") " pod="openshift-must-gather-drwwh/crc-debug-sf52k" Jan 26 11:59:30 crc kubenswrapper[4867]: I0126 11:59:30.008367 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7lgc9\" (UniqueName: \"kubernetes.io/projected/5d3f5863-3449-4564-aa82-024fc255d3bc-kube-api-access-7lgc9\") pod \"crc-debug-sf52k\" (UID: \"5d3f5863-3449-4564-aa82-024fc255d3bc\") " pod="openshift-must-gather-drwwh/crc-debug-sf52k" Jan 26 11:59:30 crc kubenswrapper[4867]: I0126 11:59:30.008386 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: 
\"kubernetes.io/host-path/5d3f5863-3449-4564-aa82-024fc255d3bc-host\") pod \"crc-debug-sf52k\" (UID: \"5d3f5863-3449-4564-aa82-024fc255d3bc\") " pod="openshift-must-gather-drwwh/crc-debug-sf52k" Jan 26 11:59:30 crc kubenswrapper[4867]: I0126 11:59:30.046960 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7lgc9\" (UniqueName: \"kubernetes.io/projected/5d3f5863-3449-4564-aa82-024fc255d3bc-kube-api-access-7lgc9\") pod \"crc-debug-sf52k\" (UID: \"5d3f5863-3449-4564-aa82-024fc255d3bc\") " pod="openshift-must-gather-drwwh/crc-debug-sf52k" Jan 26 11:59:30 crc kubenswrapper[4867]: I0126 11:59:30.138077 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-drwwh/crc-debug-sf52k" Jan 26 11:59:30 crc kubenswrapper[4867]: I0126 11:59:30.423516 4867 generic.go:334] "Generic (PLEG): container finished" podID="5d3f5863-3449-4564-aa82-024fc255d3bc" containerID="37c2ed4d39ed9dfe94a45a1614d065fb3c8d9d512e81e7c33a60e9971adb6153" exitCode=0 Jan 26 11:59:30 crc kubenswrapper[4867]: I0126 11:59:30.423618 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-drwwh/crc-debug-sf52k" event={"ID":"5d3f5863-3449-4564-aa82-024fc255d3bc","Type":"ContainerDied","Data":"37c2ed4d39ed9dfe94a45a1614d065fb3c8d9d512e81e7c33a60e9971adb6153"} Jan 26 11:59:30 crc kubenswrapper[4867]: I0126 11:59:30.424422 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-drwwh/crc-debug-sf52k" event={"ID":"5d3f5863-3449-4564-aa82-024fc255d3bc","Type":"ContainerStarted","Data":"8c60898da5cb264af7630bf6a8e3b8b64a4607bb38c5e845a03ef45029942469"} Jan 26 11:59:30 crc kubenswrapper[4867]: I0126 11:59:30.578450 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="229438e1-a2fb-461e-834d-c1a0250c4596" path="/var/lib/kubelet/pods/229438e1-a2fb-461e-834d-c1a0250c4596/volumes" Jan 26 11:59:30 crc kubenswrapper[4867]: I0126 11:59:30.871026 4867 kubelet.go:2437] 
"SyncLoop DELETE" source="api" pods=["openshift-must-gather-drwwh/crc-debug-sf52k"] Jan 26 11:59:30 crc kubenswrapper[4867]: I0126 11:59:30.879756 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-drwwh/crc-debug-sf52k"] Jan 26 11:59:31 crc kubenswrapper[4867]: I0126 11:59:31.524459 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-drwwh/crc-debug-sf52k" Jan 26 11:59:31 crc kubenswrapper[4867]: I0126 11:59:31.644814 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/5d3f5863-3449-4564-aa82-024fc255d3bc-host\") pod \"5d3f5863-3449-4564-aa82-024fc255d3bc\" (UID: \"5d3f5863-3449-4564-aa82-024fc255d3bc\") " Jan 26 11:59:31 crc kubenswrapper[4867]: I0126 11:59:31.644927 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5d3f5863-3449-4564-aa82-024fc255d3bc-host" (OuterVolumeSpecName: "host") pod "5d3f5863-3449-4564-aa82-024fc255d3bc" (UID: "5d3f5863-3449-4564-aa82-024fc255d3bc"). InnerVolumeSpecName "host". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 11:59:31 crc kubenswrapper[4867]: I0126 11:59:31.644988 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7lgc9\" (UniqueName: \"kubernetes.io/projected/5d3f5863-3449-4564-aa82-024fc255d3bc-kube-api-access-7lgc9\") pod \"5d3f5863-3449-4564-aa82-024fc255d3bc\" (UID: \"5d3f5863-3449-4564-aa82-024fc255d3bc\") " Jan 26 11:59:31 crc kubenswrapper[4867]: I0126 11:59:31.645514 4867 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/5d3f5863-3449-4564-aa82-024fc255d3bc-host\") on node \"crc\" DevicePath \"\"" Jan 26 11:59:31 crc kubenswrapper[4867]: I0126 11:59:31.651737 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5d3f5863-3449-4564-aa82-024fc255d3bc-kube-api-access-7lgc9" (OuterVolumeSpecName: "kube-api-access-7lgc9") pod "5d3f5863-3449-4564-aa82-024fc255d3bc" (UID: "5d3f5863-3449-4564-aa82-024fc255d3bc"). InnerVolumeSpecName "kube-api-access-7lgc9". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 11:59:31 crc kubenswrapper[4867]: I0126 11:59:31.746985 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7lgc9\" (UniqueName: \"kubernetes.io/projected/5d3f5863-3449-4564-aa82-024fc255d3bc-kube-api-access-7lgc9\") on node \"crc\" DevicePath \"\"" Jan 26 11:59:32 crc kubenswrapper[4867]: I0126 11:59:32.083497 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-drwwh/crc-debug-c2b5z"] Jan 26 11:59:32 crc kubenswrapper[4867]: E0126 11:59:32.084180 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5d3f5863-3449-4564-aa82-024fc255d3bc" containerName="container-00" Jan 26 11:59:32 crc kubenswrapper[4867]: I0126 11:59:32.084194 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="5d3f5863-3449-4564-aa82-024fc255d3bc" containerName="container-00" Jan 26 11:59:32 crc kubenswrapper[4867]: I0126 11:59:32.084401 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="5d3f5863-3449-4564-aa82-024fc255d3bc" containerName="container-00" Jan 26 11:59:32 crc kubenswrapper[4867]: I0126 11:59:32.085052 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-drwwh/crc-debug-c2b5z" Jan 26 11:59:32 crc kubenswrapper[4867]: I0126 11:59:32.256467 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lrw4w\" (UniqueName: \"kubernetes.io/projected/e19725f8-d2da-42c5-8cae-61c354ea0a50-kube-api-access-lrw4w\") pod \"crc-debug-c2b5z\" (UID: \"e19725f8-d2da-42c5-8cae-61c354ea0a50\") " pod="openshift-must-gather-drwwh/crc-debug-c2b5z" Jan 26 11:59:32 crc kubenswrapper[4867]: I0126 11:59:32.256573 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/e19725f8-d2da-42c5-8cae-61c354ea0a50-host\") pod \"crc-debug-c2b5z\" (UID: \"e19725f8-d2da-42c5-8cae-61c354ea0a50\") " pod="openshift-must-gather-drwwh/crc-debug-c2b5z" Jan 26 11:59:32 crc kubenswrapper[4867]: I0126 11:59:32.359006 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lrw4w\" (UniqueName: \"kubernetes.io/projected/e19725f8-d2da-42c5-8cae-61c354ea0a50-kube-api-access-lrw4w\") pod \"crc-debug-c2b5z\" (UID: \"e19725f8-d2da-42c5-8cae-61c354ea0a50\") " pod="openshift-must-gather-drwwh/crc-debug-c2b5z" Jan 26 11:59:32 crc kubenswrapper[4867]: I0126 11:59:32.359087 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/e19725f8-d2da-42c5-8cae-61c354ea0a50-host\") pod \"crc-debug-c2b5z\" (UID: \"e19725f8-d2da-42c5-8cae-61c354ea0a50\") " pod="openshift-must-gather-drwwh/crc-debug-c2b5z" Jan 26 11:59:32 crc kubenswrapper[4867]: I0126 11:59:32.359253 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/e19725f8-d2da-42c5-8cae-61c354ea0a50-host\") pod \"crc-debug-c2b5z\" (UID: \"e19725f8-d2da-42c5-8cae-61c354ea0a50\") " pod="openshift-must-gather-drwwh/crc-debug-c2b5z" Jan 26 11:59:32 crc 
kubenswrapper[4867]: I0126 11:59:32.381435 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lrw4w\" (UniqueName: \"kubernetes.io/projected/e19725f8-d2da-42c5-8cae-61c354ea0a50-kube-api-access-lrw4w\") pod \"crc-debug-c2b5z\" (UID: \"e19725f8-d2da-42c5-8cae-61c354ea0a50\") " pod="openshift-must-gather-drwwh/crc-debug-c2b5z" Jan 26 11:59:32 crc kubenswrapper[4867]: I0126 11:59:32.399706 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-drwwh/crc-debug-c2b5z" Jan 26 11:59:32 crc kubenswrapper[4867]: I0126 11:59:32.443711 4867 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8c60898da5cb264af7630bf6a8e3b8b64a4607bb38c5e845a03ef45029942469" Jan 26 11:59:32 crc kubenswrapper[4867]: I0126 11:59:32.443782 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-drwwh/crc-debug-sf52k" Jan 26 11:59:32 crc kubenswrapper[4867]: I0126 11:59:32.578575 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5d3f5863-3449-4564-aa82-024fc255d3bc" path="/var/lib/kubelet/pods/5d3f5863-3449-4564-aa82-024fc255d3bc/volumes" Jan 26 11:59:33 crc kubenswrapper[4867]: I0126 11:59:33.456720 4867 generic.go:334] "Generic (PLEG): container finished" podID="e19725f8-d2da-42c5-8cae-61c354ea0a50" containerID="fbaa8b68a8c15933595e3fc756dae229a72629a75bff105e007517e6911e67fd" exitCode=0 Jan 26 11:59:33 crc kubenswrapper[4867]: I0126 11:59:33.457068 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-drwwh/crc-debug-c2b5z" event={"ID":"e19725f8-d2da-42c5-8cae-61c354ea0a50","Type":"ContainerDied","Data":"fbaa8b68a8c15933595e3fc756dae229a72629a75bff105e007517e6911e67fd"} Jan 26 11:59:33 crc kubenswrapper[4867]: I0126 11:59:33.457108 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-drwwh/crc-debug-c2b5z" 
event={"ID":"e19725f8-d2da-42c5-8cae-61c354ea0a50","Type":"ContainerStarted","Data":"54a2cd4afc9571358bae3a66c4f11bab4be3a06a57ac004e970d9e4ba81702a0"} Jan 26 11:59:33 crc kubenswrapper[4867]: I0126 11:59:33.526273 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-drwwh/crc-debug-c2b5z"] Jan 26 11:59:33 crc kubenswrapper[4867]: I0126 11:59:33.540753 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-drwwh/crc-debug-c2b5z"] Jan 26 11:59:34 crc kubenswrapper[4867]: I0126 11:59:34.559130 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-drwwh/crc-debug-c2b5z" Jan 26 11:59:34 crc kubenswrapper[4867]: I0126 11:59:34.564667 4867 scope.go:117] "RemoveContainer" containerID="b468a5733b70f2daff8e0e41bb36084cdf82f55dbb0bac51d0d68f1ce3f30b64" Jan 26 11:59:34 crc kubenswrapper[4867]: E0126 11:59:34.565120 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g6cth_openshift-machine-config-operator(115cad9f-057f-4e63-b408-8fa7a358a191)\"" pod="openshift-machine-config-operator/machine-config-daemon-g6cth" podUID="115cad9f-057f-4e63-b408-8fa7a358a191" Jan 26 11:59:34 crc kubenswrapper[4867]: I0126 11:59:34.708674 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lrw4w\" (UniqueName: \"kubernetes.io/projected/e19725f8-d2da-42c5-8cae-61c354ea0a50-kube-api-access-lrw4w\") pod \"e19725f8-d2da-42c5-8cae-61c354ea0a50\" (UID: \"e19725f8-d2da-42c5-8cae-61c354ea0a50\") " Jan 26 11:59:34 crc kubenswrapper[4867]: I0126 11:59:34.708947 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/e19725f8-d2da-42c5-8cae-61c354ea0a50-host\") pod 
\"e19725f8-d2da-42c5-8cae-61c354ea0a50\" (UID: \"e19725f8-d2da-42c5-8cae-61c354ea0a50\") " Jan 26 11:59:34 crc kubenswrapper[4867]: I0126 11:59:34.709105 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e19725f8-d2da-42c5-8cae-61c354ea0a50-host" (OuterVolumeSpecName: "host") pod "e19725f8-d2da-42c5-8cae-61c354ea0a50" (UID: "e19725f8-d2da-42c5-8cae-61c354ea0a50"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 11:59:34 crc kubenswrapper[4867]: I0126 11:59:34.709547 4867 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/e19725f8-d2da-42c5-8cae-61c354ea0a50-host\") on node \"crc\" DevicePath \"\"" Jan 26 11:59:34 crc kubenswrapper[4867]: I0126 11:59:34.720445 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e19725f8-d2da-42c5-8cae-61c354ea0a50-kube-api-access-lrw4w" (OuterVolumeSpecName: "kube-api-access-lrw4w") pod "e19725f8-d2da-42c5-8cae-61c354ea0a50" (UID: "e19725f8-d2da-42c5-8cae-61c354ea0a50"). InnerVolumeSpecName "kube-api-access-lrw4w". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 11:59:34 crc kubenswrapper[4867]: I0126 11:59:34.816080 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lrw4w\" (UniqueName: \"kubernetes.io/projected/e19725f8-d2da-42c5-8cae-61c354ea0a50-kube-api-access-lrw4w\") on node \"crc\" DevicePath \"\"" Jan 26 11:59:35 crc kubenswrapper[4867]: I0126 11:59:35.478482 4867 scope.go:117] "RemoveContainer" containerID="fbaa8b68a8c15933595e3fc756dae229a72629a75bff105e007517e6911e67fd" Jan 26 11:59:35 crc kubenswrapper[4867]: I0126 11:59:35.478539 4867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-drwwh/crc-debug-c2b5z" Jan 26 11:59:36 crc kubenswrapper[4867]: I0126 11:59:36.575117 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e19725f8-d2da-42c5-8cae-61c354ea0a50" path="/var/lib/kubelet/pods/e19725f8-d2da-42c5-8cae-61c354ea0a50/volumes" Jan 26 11:59:45 crc kubenswrapper[4867]: I0126 11:59:45.564347 4867 scope.go:117] "RemoveContainer" containerID="b468a5733b70f2daff8e0e41bb36084cdf82f55dbb0bac51d0d68f1ce3f30b64" Jan 26 11:59:45 crc kubenswrapper[4867]: E0126 11:59:45.565118 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g6cth_openshift-machine-config-operator(115cad9f-057f-4e63-b408-8fa7a358a191)\"" pod="openshift-machine-config-operator/machine-config-daemon-g6cth" podUID="115cad9f-057f-4e63-b408-8fa7a358a191" Jan 26 11:59:57 crc kubenswrapper[4867]: I0126 11:59:57.456620 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-7f76cb8bb6-g4zck_340554a1-e56a-4b1b-aff3-d0c0e1ac210d/barbican-api/0.log" Jan 26 11:59:57 crc kubenswrapper[4867]: I0126 11:59:57.564609 4867 scope.go:117] "RemoveContainer" containerID="b468a5733b70f2daff8e0e41bb36084cdf82f55dbb0bac51d0d68f1ce3f30b64" Jan 26 11:59:57 crc kubenswrapper[4867]: E0126 11:59:57.564873 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g6cth_openshift-machine-config-operator(115cad9f-057f-4e63-b408-8fa7a358a191)\"" pod="openshift-machine-config-operator/machine-config-daemon-g6cth" podUID="115cad9f-057f-4e63-b408-8fa7a358a191" Jan 26 11:59:57 crc kubenswrapper[4867]: I0126 11:59:57.644861 4867 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_barbican-api-7f76cb8bb6-g4zck_340554a1-e56a-4b1b-aff3-d0c0e1ac210d/barbican-api-log/0.log" Jan 26 11:59:57 crc kubenswrapper[4867]: I0126 11:59:57.704467 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-5fc6c76976-2w9dm_9a534f97-8d45-4418-af77-5e19e2013a0b/barbican-keystone-listener/0.log" Jan 26 11:59:57 crc kubenswrapper[4867]: I0126 11:59:57.849055 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-5fc6c76976-2w9dm_9a534f97-8d45-4418-af77-5e19e2013a0b/barbican-keystone-listener-log/0.log" Jan 26 11:59:57 crc kubenswrapper[4867]: I0126 11:59:57.896904 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-6db8644655-m8sn6_f568d082-7794-4f60-b78e-bff0b6b6356f/barbican-worker/0.log" Jan 26 11:59:57 crc kubenswrapper[4867]: I0126 11:59:57.950785 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-6db8644655-m8sn6_f568d082-7794-4f60-b78e-bff0b6b6356f/barbican-worker-log/0.log" Jan 26 11:59:58 crc kubenswrapper[4867]: I0126 11:59:58.078639 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_e086a220-6ef2-4a71-8639-f75783c634e6/ceilometer-notification-agent/0.log" Jan 26 11:59:58 crc kubenswrapper[4867]: I0126 11:59:58.117175 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_e086a220-6ef2-4a71-8639-f75783c634e6/ceilometer-central-agent/0.log" Jan 26 11:59:58 crc kubenswrapper[4867]: I0126 11:59:58.153110 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_e086a220-6ef2-4a71-8639-f75783c634e6/proxy-httpd/0.log" Jan 26 11:59:58 crc kubenswrapper[4867]: I0126 11:59:58.260671 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_e086a220-6ef2-4a71-8639-f75783c634e6/sg-core/0.log" Jan 26 11:59:58 crc kubenswrapper[4867]: I0126 
11:59:58.345213 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_4a9a8906-54d6-49c2-94c7-393167d8db56/cinder-api-log/0.log" Jan 26 11:59:58 crc kubenswrapper[4867]: I0126 11:59:58.352545 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_4a9a8906-54d6-49c2-94c7-393167d8db56/cinder-api/0.log" Jan 26 11:59:58 crc kubenswrapper[4867]: I0126 11:59:58.555769 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_ed0b64b8-8e76-407e-8be8-c6b6cc5d4b75/probe/0.log" Jan 26 11:59:58 crc kubenswrapper[4867]: I0126 11:59:58.562615 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_ed0b64b8-8e76-407e-8be8-c6b6cc5d4b75/cinder-scheduler/0.log" Jan 26 11:59:58 crc kubenswrapper[4867]: I0126 11:59:58.675084 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-cd5cbd7b9-9fj8m_facba8bd-34c0-43a2-a31b-cc7a6ff17ba2/init/0.log" Jan 26 11:59:58 crc kubenswrapper[4867]: I0126 11:59:58.868061 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-cd5cbd7b9-9fj8m_facba8bd-34c0-43a2-a31b-cc7a6ff17ba2/dnsmasq-dns/0.log" Jan 26 11:59:58 crc kubenswrapper[4867]: I0126 11:59:58.900498 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-cd5cbd7b9-9fj8m_facba8bd-34c0-43a2-a31b-cc7a6ff17ba2/init/0.log" Jan 26 11:59:58 crc kubenswrapper[4867]: I0126 11:59:58.902268 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_6fc7f66f-7989-42ac-a3c8-cd88b25f9c53/glance-httpd/0.log" Jan 26 11:59:59 crc kubenswrapper[4867]: I0126 11:59:59.034800 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_6fc7f66f-7989-42ac-a3c8-cd88b25f9c53/glance-log/0.log" Jan 26 11:59:59 crc kubenswrapper[4867]: I0126 11:59:59.129909 4867 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_glance-default-internal-api-0_58cc3b2f-c49e-4c16-9a26-342c8b2c8878/glance-httpd/0.log" Jan 26 11:59:59 crc kubenswrapper[4867]: I0126 11:59:59.138629 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_58cc3b2f-c49e-4c16-9a26-342c8b2c8878/glance-log/0.log" Jan 26 11:59:59 crc kubenswrapper[4867]: I0126 11:59:59.311691 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ironic-5f459cfdcb-t5qhs_f114731c-0ed9-4d58-90f0-b670a856adf0/init/0.log" Jan 26 11:59:59 crc kubenswrapper[4867]: I0126 11:59:59.475982 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ironic-5f459cfdcb-t5qhs_f114731c-0ed9-4d58-90f0-b670a856adf0/init/0.log" Jan 26 11:59:59 crc kubenswrapper[4867]: I0126 11:59:59.522895 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ironic-5f459cfdcb-t5qhs_f114731c-0ed9-4d58-90f0-b670a856adf0/ironic-api-log/0.log" Jan 26 11:59:59 crc kubenswrapper[4867]: I0126 11:59:59.602605 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ironic-5f459cfdcb-t5qhs_f114731c-0ed9-4d58-90f0-b670a856adf0/ironic-api/0.log" Jan 26 11:59:59 crc kubenswrapper[4867]: I0126 11:59:59.671966 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ironic-conductor-0_1a985fff-3d59-40fa-9cae-fd0f2cc9de70/init/0.log" Jan 26 11:59:59 crc kubenswrapper[4867]: I0126 11:59:59.867729 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ironic-conductor-0_1a985fff-3d59-40fa-9cae-fd0f2cc9de70/ironic-python-agent-init/0.log" Jan 26 11:59:59 crc kubenswrapper[4867]: I0126 11:59:59.894383 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ironic-conductor-0_1a985fff-3d59-40fa-9cae-fd0f2cc9de70/ironic-python-agent-init/0.log" Jan 26 11:59:59 crc kubenswrapper[4867]: I0126 11:59:59.933214 4867 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_ironic-conductor-0_1a985fff-3d59-40fa-9cae-fd0f2cc9de70/init/0.log" Jan 26 12:00:00 crc kubenswrapper[4867]: I0126 12:00:00.165997 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29490480-m46rr"] Jan 26 12:00:00 crc kubenswrapper[4867]: E0126 12:00:00.166533 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e19725f8-d2da-42c5-8cae-61c354ea0a50" containerName="container-00" Jan 26 12:00:00 crc kubenswrapper[4867]: I0126 12:00:00.166548 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="e19725f8-d2da-42c5-8cae-61c354ea0a50" containerName="container-00" Jan 26 12:00:00 crc kubenswrapper[4867]: I0126 12:00:00.167098 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="e19725f8-d2da-42c5-8cae-61c354ea0a50" containerName="container-00" Jan 26 12:00:00 crc kubenswrapper[4867]: I0126 12:00:00.169336 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29490480-m46rr" Jan 26 12:00:00 crc kubenswrapper[4867]: I0126 12:00:00.171810 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 26 12:00:00 crc kubenswrapper[4867]: I0126 12:00:00.172055 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 26 12:00:00 crc kubenswrapper[4867]: I0126 12:00:00.174206 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29490480-m46rr"] Jan 26 12:00:00 crc kubenswrapper[4867]: I0126 12:00:00.261136 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ironic-conductor-0_1a985fff-3d59-40fa-9cae-fd0f2cc9de70/ironic-python-agent-init/0.log" Jan 26 12:00:00 crc kubenswrapper[4867]: I0126 12:00:00.298543 4867 log.go:25] 
"Finished parsing log file" path="/var/log/pods/openstack_ironic-conductor-0_1a985fff-3d59-40fa-9cae-fd0f2cc9de70/init/0.log" Jan 26 12:00:00 crc kubenswrapper[4867]: I0126 12:00:00.360654 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/6c1792b4-2931-45a7-b808-72dce84c6428-secret-volume\") pod \"collect-profiles-29490480-m46rr\" (UID: \"6c1792b4-2931-45a7-b808-72dce84c6428\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490480-m46rr" Jan 26 12:00:00 crc kubenswrapper[4867]: I0126 12:00:00.360745 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6c1792b4-2931-45a7-b808-72dce84c6428-config-volume\") pod \"collect-profiles-29490480-m46rr\" (UID: \"6c1792b4-2931-45a7-b808-72dce84c6428\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490480-m46rr" Jan 26 12:00:00 crc kubenswrapper[4867]: I0126 12:00:00.360809 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vfn5r\" (UniqueName: \"kubernetes.io/projected/6c1792b4-2931-45a7-b808-72dce84c6428-kube-api-access-vfn5r\") pod \"collect-profiles-29490480-m46rr\" (UID: \"6c1792b4-2931-45a7-b808-72dce84c6428\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490480-m46rr" Jan 26 12:00:00 crc kubenswrapper[4867]: I0126 12:00:00.461373 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/6c1792b4-2931-45a7-b808-72dce84c6428-secret-volume\") pod \"collect-profiles-29490480-m46rr\" (UID: \"6c1792b4-2931-45a7-b808-72dce84c6428\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490480-m46rr" Jan 26 12:00:00 crc kubenswrapper[4867]: I0126 12:00:00.461453 4867 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6c1792b4-2931-45a7-b808-72dce84c6428-config-volume\") pod \"collect-profiles-29490480-m46rr\" (UID: \"6c1792b4-2931-45a7-b808-72dce84c6428\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490480-m46rr" Jan 26 12:00:00 crc kubenswrapper[4867]: I0126 12:00:00.461503 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vfn5r\" (UniqueName: \"kubernetes.io/projected/6c1792b4-2931-45a7-b808-72dce84c6428-kube-api-access-vfn5r\") pod \"collect-profiles-29490480-m46rr\" (UID: \"6c1792b4-2931-45a7-b808-72dce84c6428\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490480-m46rr" Jan 26 12:00:00 crc kubenswrapper[4867]: I0126 12:00:00.468113 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/6c1792b4-2931-45a7-b808-72dce84c6428-secret-volume\") pod \"collect-profiles-29490480-m46rr\" (UID: \"6c1792b4-2931-45a7-b808-72dce84c6428\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490480-m46rr" Jan 26 12:00:00 crc kubenswrapper[4867]: I0126 12:00:00.472566 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6c1792b4-2931-45a7-b808-72dce84c6428-config-volume\") pod \"collect-profiles-29490480-m46rr\" (UID: \"6c1792b4-2931-45a7-b808-72dce84c6428\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490480-m46rr" Jan 26 12:00:00 crc kubenswrapper[4867]: I0126 12:00:00.482884 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vfn5r\" (UniqueName: \"kubernetes.io/projected/6c1792b4-2931-45a7-b808-72dce84c6428-kube-api-access-vfn5r\") pod \"collect-profiles-29490480-m46rr\" (UID: \"6c1792b4-2931-45a7-b808-72dce84c6428\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490480-m46rr" Jan 26 12:00:00 crc 
kubenswrapper[4867]: I0126 12:00:00.520618 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29490480-m46rr" Jan 26 12:00:00 crc kubenswrapper[4867]: I0126 12:00:00.777897 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ironic-conductor-0_1a985fff-3d59-40fa-9cae-fd0f2cc9de70/init/0.log" Jan 26 12:00:00 crc kubenswrapper[4867]: I0126 12:00:00.969769 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ironic-conductor-0_1a985fff-3d59-40fa-9cae-fd0f2cc9de70/ironic-python-agent-init/0.log" Jan 26 12:00:00 crc kubenswrapper[4867]: I0126 12:00:00.991586 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29490480-m46rr"] Jan 26 12:00:01 crc kubenswrapper[4867]: I0126 12:00:01.365710 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ironic-conductor-0_1a985fff-3d59-40fa-9cae-fd0f2cc9de70/pxe-init/0.log" Jan 26 12:00:01 crc kubenswrapper[4867]: I0126 12:00:01.575187 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ironic-conductor-0_1a985fff-3d59-40fa-9cae-fd0f2cc9de70/httpboot/0.log" Jan 26 12:00:01 crc kubenswrapper[4867]: I0126 12:00:01.720240 4867 generic.go:334] "Generic (PLEG): container finished" podID="6c1792b4-2931-45a7-b808-72dce84c6428" containerID="48549f90a0d10321281601d3363a868bc88811562aba37cf84e9af79dfb314c5" exitCode=0 Jan 26 12:00:01 crc kubenswrapper[4867]: I0126 12:00:01.720286 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29490480-m46rr" event={"ID":"6c1792b4-2931-45a7-b808-72dce84c6428","Type":"ContainerDied","Data":"48549f90a0d10321281601d3363a868bc88811562aba37cf84e9af79dfb314c5"} Jan 26 12:00:01 crc kubenswrapper[4867]: I0126 12:00:01.720314 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-operator-lifecycle-manager/collect-profiles-29490480-m46rr" event={"ID":"6c1792b4-2931-45a7-b808-72dce84c6428","Type":"ContainerStarted","Data":"8084ecd96af862423894a80465e88b1bb1d08c489d6f88dcf1ea157a30f518c9"} Jan 26 12:00:01 crc kubenswrapper[4867]: I0126 12:00:01.751756 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ironic-conductor-0_1a985fff-3d59-40fa-9cae-fd0f2cc9de70/ramdisk-logs/0.log" Jan 26 12:00:01 crc kubenswrapper[4867]: I0126 12:00:01.769114 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ironic-conductor-0_1a985fff-3d59-40fa-9cae-fd0f2cc9de70/pxe-init/0.log" Jan 26 12:00:01 crc kubenswrapper[4867]: I0126 12:00:01.776603 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ironic-conductor-0_1a985fff-3d59-40fa-9cae-fd0f2cc9de70/ironic-conductor/0.log" Jan 26 12:00:02 crc kubenswrapper[4867]: I0126 12:00:02.034813 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ironic-db-sync-h7r88_3de6837e-5965-48ce-9967-2d259829ad4a/init/0.log" Jan 26 12:00:02 crc kubenswrapper[4867]: I0126 12:00:02.057489 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ironic-conductor-0_1a985fff-3d59-40fa-9cae-fd0f2cc9de70/pxe-init/0.log" Jan 26 12:00:02 crc kubenswrapper[4867]: I0126 12:00:02.237849 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ironic-db-sync-h7r88_3de6837e-5965-48ce-9967-2d259829ad4a/ironic-db-sync/0.log" Jan 26 12:00:02 crc kubenswrapper[4867]: I0126 12:00:02.251406 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ironic-db-sync-h7r88_3de6837e-5965-48ce-9967-2d259829ad4a/init/0.log" Jan 26 12:00:02 crc kubenswrapper[4867]: I0126 12:00:02.275469 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ironic-conductor-0_1a985fff-3d59-40fa-9cae-fd0f2cc9de70/pxe-init/0.log" Jan 26 12:00:02 crc kubenswrapper[4867]: I0126 12:00:02.305152 4867 log.go:25] 
"Finished parsing log file" path="/var/log/pods/openstack_ironic-inspector-0_6e49ec18-452c-47df-a0c9-ea52cdced830/ironic-python-agent-init/0.log" Jan 26 12:00:02 crc kubenswrapper[4867]: I0126 12:00:02.499795 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ironic-inspector-0_6e49ec18-452c-47df-a0c9-ea52cdced830/inspector-pxe-init/0.log" Jan 26 12:00:02 crc kubenswrapper[4867]: I0126 12:00:02.516776 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ironic-inspector-0_6e49ec18-452c-47df-a0c9-ea52cdced830/ironic-python-agent-init/0.log" Jan 26 12:00:02 crc kubenswrapper[4867]: I0126 12:00:02.545000 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ironic-inspector-0_6e49ec18-452c-47df-a0c9-ea52cdced830/inspector-pxe-init/0.log" Jan 26 12:00:02 crc kubenswrapper[4867]: I0126 12:00:02.674604 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ironic-inspector-0_6e49ec18-452c-47df-a0c9-ea52cdced830/inspector-httpboot/0.log" Jan 26 12:00:02 crc kubenswrapper[4867]: I0126 12:00:02.703494 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ironic-inspector-0_6e49ec18-452c-47df-a0c9-ea52cdced830/inspector-pxe-init/0.log" Jan 26 12:00:02 crc kubenswrapper[4867]: I0126 12:00:02.757203 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ironic-inspector-0_6e49ec18-452c-47df-a0c9-ea52cdced830/ironic-python-agent-init/0.log" Jan 26 12:00:02 crc kubenswrapper[4867]: I0126 12:00:02.759916 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ironic-inspector-0_6e49ec18-452c-47df-a0c9-ea52cdced830/ironic-inspector/1.log" Jan 26 12:00:02 crc kubenswrapper[4867]: I0126 12:00:02.782742 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ironic-inspector-0_6e49ec18-452c-47df-a0c9-ea52cdced830/ironic-inspector/2.log" Jan 26 12:00:02 crc kubenswrapper[4867]: I0126 12:00:02.934572 4867 log.go:25] "Finished 
parsing log file" path="/var/log/pods/openstack_ironic-inspector-0_6e49ec18-452c-47df-a0c9-ea52cdced830/ironic-inspector-httpd/0.log" Jan 26 12:00:02 crc kubenswrapper[4867]: I0126 12:00:02.953347 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ironic-inspector-0_6e49ec18-452c-47df-a0c9-ea52cdced830/ironic-inspector-httpd/1.log" Jan 26 12:00:02 crc kubenswrapper[4867]: I0126 12:00:02.977528 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ironic-inspector-0_6e49ec18-452c-47df-a0c9-ea52cdced830/ramdisk-logs/0.log" Jan 26 12:00:03 crc kubenswrapper[4867]: I0126 12:00:03.000270 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ironic-inspector-db-sync-256sm_586082ca-8462-421f-940d-25a9e1a9e945/ironic-inspector-db-sync/0.log" Jan 26 12:00:03 crc kubenswrapper[4867]: I0126 12:00:03.086901 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29490480-m46rr" Jan 26 12:00:03 crc kubenswrapper[4867]: I0126 12:00:03.201925 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ironic-neutron-agent-795fb7c76b-9ndwh_a2167905-2856-4125-81fd-a2430fe558f9/ironic-neutron-agent/4.log" Jan 26 12:00:03 crc kubenswrapper[4867]: I0126 12:00:03.206924 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ironic-neutron-agent-795fb7c76b-9ndwh_a2167905-2856-4125-81fd-a2430fe558f9/ironic-neutron-agent/3.log" Jan 26 12:00:03 crc kubenswrapper[4867]: I0126 12:00:03.210830 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vfn5r\" (UniqueName: \"kubernetes.io/projected/6c1792b4-2931-45a7-b808-72dce84c6428-kube-api-access-vfn5r\") pod \"6c1792b4-2931-45a7-b808-72dce84c6428\" (UID: \"6c1792b4-2931-45a7-b808-72dce84c6428\") " Jan 26 12:00:03 crc kubenswrapper[4867]: I0126 12:00:03.210872 4867 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6c1792b4-2931-45a7-b808-72dce84c6428-config-volume\") pod \"6c1792b4-2931-45a7-b808-72dce84c6428\" (UID: \"6c1792b4-2931-45a7-b808-72dce84c6428\") " Jan 26 12:00:03 crc kubenswrapper[4867]: I0126 12:00:03.210963 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/6c1792b4-2931-45a7-b808-72dce84c6428-secret-volume\") pod \"6c1792b4-2931-45a7-b808-72dce84c6428\" (UID: \"6c1792b4-2931-45a7-b808-72dce84c6428\") " Jan 26 12:00:03 crc kubenswrapper[4867]: I0126 12:00:03.211716 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6c1792b4-2931-45a7-b808-72dce84c6428-config-volume" (OuterVolumeSpecName: "config-volume") pod "6c1792b4-2931-45a7-b808-72dce84c6428" (UID: "6c1792b4-2931-45a7-b808-72dce84c6428"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 12:00:03 crc kubenswrapper[4867]: I0126 12:00:03.217357 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6c1792b4-2931-45a7-b808-72dce84c6428-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "6c1792b4-2931-45a7-b808-72dce84c6428" (UID: "6c1792b4-2931-45a7-b808-72dce84c6428"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 12:00:03 crc kubenswrapper[4867]: I0126 12:00:03.217437 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6c1792b4-2931-45a7-b808-72dce84c6428-kube-api-access-vfn5r" (OuterVolumeSpecName: "kube-api-access-vfn5r") pod "6c1792b4-2931-45a7-b808-72dce84c6428" (UID: "6c1792b4-2931-45a7-b808-72dce84c6428"). InnerVolumeSpecName "kube-api-access-vfn5r". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 12:00:03 crc kubenswrapper[4867]: I0126 12:00:03.312741 4867 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/6c1792b4-2931-45a7-b808-72dce84c6428-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 26 12:00:03 crc kubenswrapper[4867]: I0126 12:00:03.312774 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vfn5r\" (UniqueName: \"kubernetes.io/projected/6c1792b4-2931-45a7-b808-72dce84c6428-kube-api-access-vfn5r\") on node \"crc\" DevicePath \"\"" Jan 26 12:00:03 crc kubenswrapper[4867]: I0126 12:00:03.312784 4867 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6c1792b4-2931-45a7-b808-72dce84c6428-config-volume\") on node \"crc\" DevicePath \"\"" Jan 26 12:00:03 crc kubenswrapper[4867]: I0126 12:00:03.421689 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-6f94776d6f-8b6q4_6fa27242-a46c-4987-9e2f-1f9d48b370e7/keystone-api/0.log" Jan 26 12:00:03 crc kubenswrapper[4867]: I0126 12:00:03.461553 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_kube-state-metrics-0_cd1d027e-98b3-4c45-981e-a60ad4cb8748/kube-state-metrics/0.log" Jan 26 12:00:03 crc kubenswrapper[4867]: I0126 12:00:03.712077 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-647b685f9-49zj6_ebfa4a8c-62e9-489e-a7f6-d3f2ce2d05d9/neutron-httpd/0.log" Jan 26 12:00:03 crc kubenswrapper[4867]: I0126 12:00:03.738650 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29490480-m46rr" event={"ID":"6c1792b4-2931-45a7-b808-72dce84c6428","Type":"ContainerDied","Data":"8084ecd96af862423894a80465e88b1bb1d08c489d6f88dcf1ea157a30f518c9"} Jan 26 12:00:03 crc kubenswrapper[4867]: I0126 12:00:03.738690 4867 pod_container_deletor.go:80] "Container not found in pod's 
containers" containerID="8084ecd96af862423894a80465e88b1bb1d08c489d6f88dcf1ea157a30f518c9" Jan 26 12:00:03 crc kubenswrapper[4867]: I0126 12:00:03.738863 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29490480-m46rr" Jan 26 12:00:03 crc kubenswrapper[4867]: I0126 12:00:03.822423 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-647b685f9-49zj6_ebfa4a8c-62e9-489e-a7f6-d3f2ce2d05d9/neutron-api/0.log" Jan 26 12:00:04 crc kubenswrapper[4867]: I0126 12:00:04.032320 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_738787a7-6f5f-48f1-8c43-ce02e88eb732/nova-api-log/0.log" Jan 26 12:00:04 crc kubenswrapper[4867]: I0126 12:00:04.160545 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29490435-gd8xn"] Jan 26 12:00:04 crc kubenswrapper[4867]: I0126 12:00:04.180514 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29490435-gd8xn"] Jan 26 12:00:04 crc kubenswrapper[4867]: I0126 12:00:04.246130 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell0-conductor-0_8ad13a23-f9ee-40f2-aa88-3940ced23279/nova-cell0-conductor-conductor/0.log" Jan 26 12:00:04 crc kubenswrapper[4867]: I0126 12:00:04.254576 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_738787a7-6f5f-48f1-8c43-ce02e88eb732/nova-api-api/0.log" Jan 26 12:00:04 crc kubenswrapper[4867]: I0126 12:00:04.412148 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-conductor-0_20519d4e-b9eb-43b2-b2fb-ac40a9bea288/nova-cell1-conductor-conductor/0.log" Jan 26 12:00:04 crc kubenswrapper[4867]: I0126 12:00:04.585913 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="64c6d7e3-5fb6-4242-b616-2628ca519c8e" 
path="/var/lib/kubelet/pods/64c6d7e3-5fb6-4242-b616-2628ca519c8e/volumes" Jan 26 12:00:04 crc kubenswrapper[4867]: I0126 12:00:04.596139 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-novncproxy-0_dc6a54c4-4229-4157-a5a0-a2089d6a7131/nova-cell1-novncproxy-novncproxy/0.log" Jan 26 12:00:04 crc kubenswrapper[4867]: I0126 12:00:04.755110 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_5a86e7e7-648c-4e32-8f5d-3fd90c24f1b7/nova-metadata-log/0.log" Jan 26 12:00:05 crc kubenswrapper[4867]: I0126 12:00:05.035030 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-scheduler-0_b092f3d9-f7f5-49d0-98d6-f0e7aff2d64a/nova-scheduler-scheduler/0.log" Jan 26 12:00:05 crc kubenswrapper[4867]: I0126 12:00:05.107571 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_fd3b4566-15b8-4c50-bc5e-76c5a6907311/mysql-bootstrap/0.log" Jan 26 12:00:05 crc kubenswrapper[4867]: I0126 12:00:05.328205 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_fd3b4566-15b8-4c50-bc5e-76c5a6907311/mysql-bootstrap/0.log" Jan 26 12:00:05 crc kubenswrapper[4867]: I0126 12:00:05.340532 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_fd3b4566-15b8-4c50-bc5e-76c5a6907311/galera/0.log" Jan 26 12:00:05 crc kubenswrapper[4867]: I0126 12:00:05.365342 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_5a86e7e7-648c-4e32-8f5d-3fd90c24f1b7/nova-metadata-metadata/0.log" Jan 26 12:00:05 crc kubenswrapper[4867]: I0126 12:00:05.500140 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_9305cd67-bbb5-45e9-ab35-6a34a717dff8/mysql-bootstrap/0.log" Jan 26 12:00:05 crc kubenswrapper[4867]: I0126 12:00:05.828429 4867 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_openstack-galera-0_9305cd67-bbb5-45e9-ab35-6a34a717dff8/mysql-bootstrap/0.log" Jan 26 12:00:05 crc kubenswrapper[4867]: I0126 12:00:05.838298 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_9305cd67-bbb5-45e9-ab35-6a34a717dff8/galera/0.log" Jan 26 12:00:05 crc kubenswrapper[4867]: I0126 12:00:05.840870 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstackclient_0dba3b09-195d-416a-b4af-7f252c8abd0d/openstackclient/0.log" Jan 26 12:00:06 crc kubenswrapper[4867]: I0126 12:00:06.047607 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-metrics-wsrcd_515623f1-c4bb-4522-ab0d-00138e1d0d0d/openstack-network-exporter/0.log" Jan 26 12:00:06 crc kubenswrapper[4867]: I0126 12:00:06.053231 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-hbpxr_db65f713-855b-4ca7-b989-ebde989474ce/ovn-controller/0.log" Jan 26 12:00:06 crc kubenswrapper[4867]: I0126 12:00:06.273145 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-4f5h4_211a1bec-4387-4bbf-a034-56dd9396676d/ovsdb-server-init/0.log" Jan 26 12:00:06 crc kubenswrapper[4867]: I0126 12:00:06.440090 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-4f5h4_211a1bec-4387-4bbf-a034-56dd9396676d/ovsdb-server/0.log" Jan 26 12:00:06 crc kubenswrapper[4867]: I0126 12:00:06.458015 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-4f5h4_211a1bec-4387-4bbf-a034-56dd9396676d/ovs-vswitchd/0.log" Jan 26 12:00:06 crc kubenswrapper[4867]: I0126 12:00:06.488463 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-4f5h4_211a1bec-4387-4bbf-a034-56dd9396676d/ovsdb-server-init/0.log" Jan 26 12:00:06 crc kubenswrapper[4867]: I0126 12:00:06.792654 4867 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_ovn-northd-0_2b5a7e41-130f-46be-8c94-a5ecaf39bb2c/openstack-network-exporter/0.log" Jan 26 12:00:06 crc kubenswrapper[4867]: I0126 12:00:06.852134 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_2b5a7e41-130f-46be-8c94-a5ecaf39bb2c/ovn-northd/0.log" Jan 26 12:00:06 crc kubenswrapper[4867]: I0126 12:00:06.900677 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_28f25dc5-093b-4b0a-b1fa-290241e9bccc/openstack-network-exporter/0.log" Jan 26 12:00:06 crc kubenswrapper[4867]: I0126 12:00:06.983429 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_28f25dc5-093b-4b0a-b1fa-290241e9bccc/ovsdbserver-nb/0.log" Jan 26 12:00:07 crc kubenswrapper[4867]: I0126 12:00:07.064338 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_24fccd97-ac62-4d86-971f-59e4fc780888/openstack-network-exporter/0.log" Jan 26 12:00:07 crc kubenswrapper[4867]: I0126 12:00:07.101355 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_24fccd97-ac62-4d86-971f-59e4fc780888/ovsdbserver-sb/0.log" Jan 26 12:00:07 crc kubenswrapper[4867]: I0126 12:00:07.346756 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-547bc4f4d-xs5kd_3fe54576-9f68-4335-9449-16f7af831e94/placement-api/0.log" Jan 26 12:00:07 crc kubenswrapper[4867]: I0126 12:00:07.381986 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-547bc4f4d-xs5kd_3fe54576-9f68-4335-9449-16f7af831e94/placement-log/0.log" Jan 26 12:00:07 crc kubenswrapper[4867]: I0126 12:00:07.477626 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_abd304f6-b024-40c9-86cb-94c9e9620ec0/setup-container/0.log" Jan 26 12:00:07 crc kubenswrapper[4867]: I0126 12:00:07.716766 4867 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_rabbitmq-cell1-server-0_abd304f6-b024-40c9-86cb-94c9e9620ec0/rabbitmq/0.log" Jan 26 12:00:07 crc kubenswrapper[4867]: I0126 12:00:07.731125 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_abd304f6-b024-40c9-86cb-94c9e9620ec0/setup-container/0.log" Jan 26 12:00:07 crc kubenswrapper[4867]: I0126 12:00:07.768781 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_d0d380ac-2d87-4632-a7e3-d201296043f4/setup-container/0.log" Jan 26 12:00:07 crc kubenswrapper[4867]: I0126 12:00:07.995885 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_d0d380ac-2d87-4632-a7e3-d201296043f4/rabbitmq/0.log" Jan 26 12:00:07 crc kubenswrapper[4867]: I0126 12:00:07.999203 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_d0d380ac-2d87-4632-a7e3-d201296043f4/setup-container/0.log" Jan 26 12:00:08 crc kubenswrapper[4867]: I0126 12:00:08.149216 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-5668f68b6c-7674j_39829bfc-df9a-4123-a069-f99e3032615d/proxy-httpd/0.log" Jan 26 12:00:08 crc kubenswrapper[4867]: I0126 12:00:08.228151 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-5668f68b6c-7674j_39829bfc-df9a-4123-a069-f99e3032615d/proxy-server/0.log" Jan 26 12:00:08 crc kubenswrapper[4867]: I0126 12:00:08.286697 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-ring-rebalance-s8jqh_c491453c-4aa8-458a-8ee3-42475e7678f4/swift-ring-rebalance/0.log" Jan 26 12:00:08 crc kubenswrapper[4867]: I0126 12:00:08.428552 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_3f128154-6619-4556-be1b-73e44d4f7df1/account-auditor/0.log" Jan 26 12:00:08 crc kubenswrapper[4867]: I0126 12:00:08.464604 4867 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_swift-storage-0_3f128154-6619-4556-be1b-73e44d4f7df1/account-reaper/0.log" Jan 26 12:00:08 crc kubenswrapper[4867]: I0126 12:00:08.540569 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_3f128154-6619-4556-be1b-73e44d4f7df1/account-replicator/0.log" Jan 26 12:00:08 crc kubenswrapper[4867]: I0126 12:00:08.563909 4867 scope.go:117] "RemoveContainer" containerID="b468a5733b70f2daff8e0e41bb36084cdf82f55dbb0bac51d0d68f1ce3f30b64" Jan 26 12:00:08 crc kubenswrapper[4867]: I0126 12:00:08.657037 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_3f128154-6619-4556-be1b-73e44d4f7df1/account-server/0.log" Jan 26 12:00:08 crc kubenswrapper[4867]: I0126 12:00:08.691551 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_3f128154-6619-4556-be1b-73e44d4f7df1/container-auditor/0.log" Jan 26 12:00:08 crc kubenswrapper[4867]: I0126 12:00:08.759664 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_3f128154-6619-4556-be1b-73e44d4f7df1/container-replicator/0.log" Jan 26 12:00:08 crc kubenswrapper[4867]: I0126 12:00:08.798139 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_3f128154-6619-4556-be1b-73e44d4f7df1/container-server/0.log" Jan 26 12:00:08 crc kubenswrapper[4867]: I0126 12:00:08.865382 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_3f128154-6619-4556-be1b-73e44d4f7df1/container-updater/0.log" Jan 26 12:00:08 crc kubenswrapper[4867]: I0126 12:00:08.931834 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_3f128154-6619-4556-be1b-73e44d4f7df1/object-auditor/0.log" Jan 26 12:00:09 crc kubenswrapper[4867]: I0126 12:00:09.005305 4867 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_swift-storage-0_3f128154-6619-4556-be1b-73e44d4f7df1/object-replicator/0.log" Jan 26 12:00:09 crc kubenswrapper[4867]: I0126 12:00:09.086911 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_3f128154-6619-4556-be1b-73e44d4f7df1/object-expirer/0.log" Jan 26 12:00:09 crc kubenswrapper[4867]: I0126 12:00:09.112522 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_3f128154-6619-4556-be1b-73e44d4f7df1/object-server/0.log" Jan 26 12:00:09 crc kubenswrapper[4867]: I0126 12:00:09.181274 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_3f128154-6619-4556-be1b-73e44d4f7df1/object-updater/0.log" Jan 26 12:00:09 crc kubenswrapper[4867]: I0126 12:00:09.256995 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_3f128154-6619-4556-be1b-73e44d4f7df1/rsync/0.log" Jan 26 12:00:09 crc kubenswrapper[4867]: I0126 12:00:09.286630 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_3f128154-6619-4556-be1b-73e44d4f7df1/swift-recon-cron/0.log" Jan 26 12:00:09 crc kubenswrapper[4867]: I0126 12:00:09.805927 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-g6cth" event={"ID":"115cad9f-057f-4e63-b408-8fa7a358a191","Type":"ContainerStarted","Data":"c8c591e2190a01878a2643916d15100018088284afda1447941615892cd3e504"} Jan 26 12:00:15 crc kubenswrapper[4867]: I0126 12:00:15.123919 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_memcached-0_eb361900-eda0-4cb4-8838-4267b465353b/memcached/0.log" Jan 26 12:00:33 crc kubenswrapper[4867]: I0126 12:00:33.400680 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_72df59a55f851b4970af82627fba49d1b5f2202043ac4017c1bc725d0cktqsd_81541a17-1078-4ebe-b702-4d95a4ae8771/util/0.log" Jan 26 12:00:33 crc 
kubenswrapper[4867]: I0126 12:00:33.664035 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_72df59a55f851b4970af82627fba49d1b5f2202043ac4017c1bc725d0cktqsd_81541a17-1078-4ebe-b702-4d95a4ae8771/util/0.log" Jan 26 12:00:33 crc kubenswrapper[4867]: I0126 12:00:33.681007 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_72df59a55f851b4970af82627fba49d1b5f2202043ac4017c1bc725d0cktqsd_81541a17-1078-4ebe-b702-4d95a4ae8771/pull/0.log" Jan 26 12:00:33 crc kubenswrapper[4867]: I0126 12:00:33.712864 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_72df59a55f851b4970af82627fba49d1b5f2202043ac4017c1bc725d0cktqsd_81541a17-1078-4ebe-b702-4d95a4ae8771/pull/0.log" Jan 26 12:00:34 crc kubenswrapper[4867]: I0126 12:00:34.050487 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_72df59a55f851b4970af82627fba49d1b5f2202043ac4017c1bc725d0cktqsd_81541a17-1078-4ebe-b702-4d95a4ae8771/pull/0.log" Jan 26 12:00:34 crc kubenswrapper[4867]: I0126 12:00:34.086988 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_72df59a55f851b4970af82627fba49d1b5f2202043ac4017c1bc725d0cktqsd_81541a17-1078-4ebe-b702-4d95a4ae8771/util/0.log" Jan 26 12:00:34 crc kubenswrapper[4867]: I0126 12:00:34.143556 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_72df59a55f851b4970af82627fba49d1b5f2202043ac4017c1bc725d0cktqsd_81541a17-1078-4ebe-b702-4d95a4ae8771/extract/0.log" Jan 26 12:00:34 crc kubenswrapper[4867]: I0126 12:00:34.310243 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-7f86f8796f-rgg4g_34c3c36b-d905-4349-8909-bd15951aca68/manager/0.log" Jan 26 12:00:34 crc kubenswrapper[4867]: I0126 12:00:34.399833 4867 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-7478f7dbf9-ccp9p_10ae2757-3e84-4ad1-8459-fca684db2964/manager/0.log" Jan 26 12:00:34 crc kubenswrapper[4867]: I0126 12:00:34.535180 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-b45d7bf98-8w8hc_b1c6af74-51a5-45bb-afed-9b8b19a5c7df/manager/0.log" Jan 26 12:00:34 crc kubenswrapper[4867]: I0126 12:00:34.684853 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-78fdd796fd-gthnl_4f33548d-3a14-41f4-8447-feb86b7cf366/manager/0.log" Jan 26 12:00:34 crc kubenswrapper[4867]: I0126 12:00:34.765943 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-594c8c9d5d-gh4fm_073c6f18-4275-4233-8308-39307e2cc0c7/manager/0.log" Jan 26 12:00:34 crc kubenswrapper[4867]: I0126 12:00:34.933908 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-77d5c5b54f-pgqvv_5402225a-cbc7-4b7c-8036-9b8159baee31/manager/0.log" Jan 26 12:00:35 crc kubenswrapper[4867]: I0126 12:00:35.225401 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-598d88d885-fjpln_242c7502-97f2-4ac9-96ba-17b04f96a5b5/manager/0.log" Jan 26 12:00:35 crc kubenswrapper[4867]: I0126 12:00:35.259673 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-758868c854-chnbm_1dce245d-cfd7-440a-9797-2e8c05641673/manager/0.log" Jan 26 12:00:35 crc kubenswrapper[4867]: I0126 12:00:35.379178 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-b8b6d4659-tzb4g_9da13f82-2fca-4922-8b27-b11d702897ff/manager/0.log" Jan 26 12:00:35 crc kubenswrapper[4867]: I0126 12:00:35.460069 4867 log.go:25] "Finished parsing 
log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-78c6999f6f-5s6fg_3eb62ea0-8291-49ec-aa8d-cb40ba93ecc3/manager/0.log" Jan 26 12:00:35 crc kubenswrapper[4867]: I0126 12:00:35.601927 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-6b9fb5fdcb-khq8w_2034ae77-372d-473a-b038-83ee4c3720c0/manager/0.log" Jan 26 12:00:35 crc kubenswrapper[4867]: I0126 12:00:35.727185 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-78d58447c5-wz989_c9a978c7-9efb-43dc-830c-31020be6121a/manager/0.log" Jan 26 12:00:35 crc kubenswrapper[4867]: I0126 12:00:35.930275 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-7bdb645866-v4pfk_de2f9a68-7384-47b5-a16d-da28e04440de/manager/0.log" Jan 26 12:00:35 crc kubenswrapper[4867]: I0126 12:00:35.961860 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-5f4cd88d46-z7djp_99737677-080c-4f1a-aa91-e5162fe5f25d/manager/0.log" Jan 26 12:00:36 crc kubenswrapper[4867]: I0126 12:00:36.130716 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-6b68b8b8549zcg2_b2b3db26-bd1e-4178-ad15-3fb849d16a6c/manager/0.log" Jan 26 12:00:36 crc kubenswrapper[4867]: I0126 12:00:36.303059 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-init-74894dff96-wh5tx_3392fcb6-70d9-46f0-954b-81e2cee79a72/operator/0.log" Jan 26 12:00:36 crc kubenswrapper[4867]: I0126 12:00:36.533328 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-index-8swqj_c26c3c2d-f71f-4cef-ab83-6f69da85606a/registry-server/0.log" Jan 26 12:00:36 crc kubenswrapper[4867]: I0126 12:00:36.816563 4867 log.go:25] 
"Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-79d5ccc684-jjlnx_829c6c7e-cc19-4f6d-a350-dea6f26f3436/manager/0.log" Jan 26 12:00:36 crc kubenswrapper[4867]: I0126 12:00:36.838757 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-6f75f45d54-rsv5q_bb8ed5d8-1a97-4cc9-bf29-99b29c6a1975/manager/0.log" Jan 26 12:00:37 crc kubenswrapper[4867]: I0126 12:00:37.085348 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-668c99d594-lcn9l_ccccb13a-d387-4515-83c6-ea24a070a12e/operator/0.log" Jan 26 12:00:37 crc kubenswrapper[4867]: I0126 12:00:37.214870 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-7d65646bb4-6hkx8_dc30069e-52ed-46a5-9dc9-4558c856149e/manager/0.log" Jan 26 12:00:37 crc kubenswrapper[4867]: I0126 12:00:37.318155 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-547cbdb99f-r7pf7_ee79b4ff-ed5f-4660-9d36-2fd0c1840f84/manager/0.log" Jan 26 12:00:37 crc kubenswrapper[4867]: I0126 12:00:37.507423 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-85cd9769bb-c7klk_10f19670-4fbf-42ee-b54c-5317af0b0c00/manager/0.log" Jan 26 12:00:37 crc kubenswrapper[4867]: I0126 12:00:37.560133 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-69797bbcbd-n6zwx_4009a85d-3728-420e-b7db-70f8b41587ff/manager/0.log" Jan 26 12:00:37 crc kubenswrapper[4867]: I0126 12:00:37.661040 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-564965969-df52v_799c2d45-a054-4971-a87e-ad3b620cb2c5/manager/0.log" Jan 26 12:00:38 crc kubenswrapper[4867]: I0126 12:00:38.847527 4867 
scope.go:117] "RemoveContainer" containerID="172d104e20afa961a17964f863eab3270fbfc0738a332d7f23c11d134a0ffdbe" Jan 26 12:00:58 crc kubenswrapper[4867]: I0126 12:00:58.416003 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-78cbb6b69f-6vjzt_702e97d5-258a-4ec8-bc8f-cc700c16f813/control-plane-machine-set-operator/0.log" Jan 26 12:00:58 crc kubenswrapper[4867]: I0126 12:00:58.579741 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-pb5rg_b207fdfd-306c-4494-8c1f-560dd155cd7a/kube-rbac-proxy/0.log" Jan 26 12:00:58 crc kubenswrapper[4867]: I0126 12:00:58.652868 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-pb5rg_b207fdfd-306c-4494-8c1f-560dd155cd7a/machine-api-operator/0.log" Jan 26 12:00:58 crc kubenswrapper[4867]: I0126 12:00:58.865994 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-g2p8j"] Jan 26 12:00:58 crc kubenswrapper[4867]: E0126 12:00:58.866574 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6c1792b4-2931-45a7-b808-72dce84c6428" containerName="collect-profiles" Jan 26 12:00:58 crc kubenswrapper[4867]: I0126 12:00:58.866600 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="6c1792b4-2931-45a7-b808-72dce84c6428" containerName="collect-profiles" Jan 26 12:00:58 crc kubenswrapper[4867]: I0126 12:00:58.866868 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="6c1792b4-2931-45a7-b808-72dce84c6428" containerName="collect-profiles" Jan 26 12:00:58 crc kubenswrapper[4867]: I0126 12:00:58.868877 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-g2p8j" Jan 26 12:00:58 crc kubenswrapper[4867]: I0126 12:00:58.878105 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-g2p8j"] Jan 26 12:00:59 crc kubenswrapper[4867]: I0126 12:00:59.011118 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/28baae48-3e74-4b8a-83c4-c0ef13bac140-catalog-content\") pod \"redhat-operators-g2p8j\" (UID: \"28baae48-3e74-4b8a-83c4-c0ef13bac140\") " pod="openshift-marketplace/redhat-operators-g2p8j" Jan 26 12:00:59 crc kubenswrapper[4867]: I0126 12:00:59.011169 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/28baae48-3e74-4b8a-83c4-c0ef13bac140-utilities\") pod \"redhat-operators-g2p8j\" (UID: \"28baae48-3e74-4b8a-83c4-c0ef13bac140\") " pod="openshift-marketplace/redhat-operators-g2p8j" Jan 26 12:00:59 crc kubenswrapper[4867]: I0126 12:00:59.011461 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-42wn4\" (UniqueName: \"kubernetes.io/projected/28baae48-3e74-4b8a-83c4-c0ef13bac140-kube-api-access-42wn4\") pod \"redhat-operators-g2p8j\" (UID: \"28baae48-3e74-4b8a-83c4-c0ef13bac140\") " pod="openshift-marketplace/redhat-operators-g2p8j" Jan 26 12:00:59 crc kubenswrapper[4867]: I0126 12:00:59.113249 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-42wn4\" (UniqueName: \"kubernetes.io/projected/28baae48-3e74-4b8a-83c4-c0ef13bac140-kube-api-access-42wn4\") pod \"redhat-operators-g2p8j\" (UID: \"28baae48-3e74-4b8a-83c4-c0ef13bac140\") " pod="openshift-marketplace/redhat-operators-g2p8j" Jan 26 12:00:59 crc kubenswrapper[4867]: I0126 12:00:59.113465 4867 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/28baae48-3e74-4b8a-83c4-c0ef13bac140-catalog-content\") pod \"redhat-operators-g2p8j\" (UID: \"28baae48-3e74-4b8a-83c4-c0ef13bac140\") " pod="openshift-marketplace/redhat-operators-g2p8j" Jan 26 12:00:59 crc kubenswrapper[4867]: I0126 12:00:59.113495 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/28baae48-3e74-4b8a-83c4-c0ef13bac140-utilities\") pod \"redhat-operators-g2p8j\" (UID: \"28baae48-3e74-4b8a-83c4-c0ef13bac140\") " pod="openshift-marketplace/redhat-operators-g2p8j" Jan 26 12:00:59 crc kubenswrapper[4867]: I0126 12:00:59.114020 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/28baae48-3e74-4b8a-83c4-c0ef13bac140-utilities\") pod \"redhat-operators-g2p8j\" (UID: \"28baae48-3e74-4b8a-83c4-c0ef13bac140\") " pod="openshift-marketplace/redhat-operators-g2p8j" Jan 26 12:00:59 crc kubenswrapper[4867]: I0126 12:00:59.114133 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/28baae48-3e74-4b8a-83c4-c0ef13bac140-catalog-content\") pod \"redhat-operators-g2p8j\" (UID: \"28baae48-3e74-4b8a-83c4-c0ef13bac140\") " pod="openshift-marketplace/redhat-operators-g2p8j" Jan 26 12:00:59 crc kubenswrapper[4867]: I0126 12:00:59.139111 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-42wn4\" (UniqueName: \"kubernetes.io/projected/28baae48-3e74-4b8a-83c4-c0ef13bac140-kube-api-access-42wn4\") pod \"redhat-operators-g2p8j\" (UID: \"28baae48-3e74-4b8a-83c4-c0ef13bac140\") " pod="openshift-marketplace/redhat-operators-g2p8j" Jan 26 12:00:59 crc kubenswrapper[4867]: I0126 12:00:59.201864 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-g2p8j" Jan 26 12:00:59 crc kubenswrapper[4867]: I0126 12:00:59.677538 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-g2p8j"] Jan 26 12:01:00 crc kubenswrapper[4867]: I0126 12:01:00.154785 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-cron-29490481-rxbkf"] Jan 26 12:01:00 crc kubenswrapper[4867]: I0126 12:01:00.156579 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29490481-rxbkf" Jan 26 12:01:00 crc kubenswrapper[4867]: I0126 12:01:00.190026 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29490481-rxbkf"] Jan 26 12:01:00 crc kubenswrapper[4867]: I0126 12:01:00.262184 4867 generic.go:334] "Generic (PLEG): container finished" podID="28baae48-3e74-4b8a-83c4-c0ef13bac140" containerID="f0892825de7ea99bd61bd6ba4bcd7f31d4534ee5f59e3caf86acd6a31f05668a" exitCode=0 Jan 26 12:01:00 crc kubenswrapper[4867]: I0126 12:01:00.262239 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-g2p8j" event={"ID":"28baae48-3e74-4b8a-83c4-c0ef13bac140","Type":"ContainerDied","Data":"f0892825de7ea99bd61bd6ba4bcd7f31d4534ee5f59e3caf86acd6a31f05668a"} Jan 26 12:01:00 crc kubenswrapper[4867]: I0126 12:01:00.262265 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-g2p8j" event={"ID":"28baae48-3e74-4b8a-83c4-c0ef13bac140","Type":"ContainerStarted","Data":"43bfd2d24c4d06283bd25e6328a48217642d6c8229f0654a0d5c7984fbf2f893"} Jan 26 12:01:00 crc kubenswrapper[4867]: I0126 12:01:00.264034 4867 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 26 12:01:00 crc kubenswrapper[4867]: I0126 12:01:00.347671 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" 
(UniqueName: \"kubernetes.io/secret/5a269552-b711-4f3f-85c7-3e603612cd21-fernet-keys\") pod \"keystone-cron-29490481-rxbkf\" (UID: \"5a269552-b711-4f3f-85c7-3e603612cd21\") " pod="openstack/keystone-cron-29490481-rxbkf" Jan 26 12:01:00 crc kubenswrapper[4867]: I0126 12:01:00.347747 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5a269552-b711-4f3f-85c7-3e603612cd21-combined-ca-bundle\") pod \"keystone-cron-29490481-rxbkf\" (UID: \"5a269552-b711-4f3f-85c7-3e603612cd21\") " pod="openstack/keystone-cron-29490481-rxbkf" Jan 26 12:01:00 crc kubenswrapper[4867]: I0126 12:01:00.347780 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lpjzk\" (UniqueName: \"kubernetes.io/projected/5a269552-b711-4f3f-85c7-3e603612cd21-kube-api-access-lpjzk\") pod \"keystone-cron-29490481-rxbkf\" (UID: \"5a269552-b711-4f3f-85c7-3e603612cd21\") " pod="openstack/keystone-cron-29490481-rxbkf" Jan 26 12:01:00 crc kubenswrapper[4867]: I0126 12:01:00.347853 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5a269552-b711-4f3f-85c7-3e603612cd21-config-data\") pod \"keystone-cron-29490481-rxbkf\" (UID: \"5a269552-b711-4f3f-85c7-3e603612cd21\") " pod="openstack/keystone-cron-29490481-rxbkf" Jan 26 12:01:00 crc kubenswrapper[4867]: I0126 12:01:00.449796 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/5a269552-b711-4f3f-85c7-3e603612cd21-fernet-keys\") pod \"keystone-cron-29490481-rxbkf\" (UID: \"5a269552-b711-4f3f-85c7-3e603612cd21\") " pod="openstack/keystone-cron-29490481-rxbkf" Jan 26 12:01:00 crc kubenswrapper[4867]: I0126 12:01:00.449936 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5a269552-b711-4f3f-85c7-3e603612cd21-combined-ca-bundle\") pod \"keystone-cron-29490481-rxbkf\" (UID: \"5a269552-b711-4f3f-85c7-3e603612cd21\") " pod="openstack/keystone-cron-29490481-rxbkf" Jan 26 12:01:00 crc kubenswrapper[4867]: I0126 12:01:00.450004 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lpjzk\" (UniqueName: \"kubernetes.io/projected/5a269552-b711-4f3f-85c7-3e603612cd21-kube-api-access-lpjzk\") pod \"keystone-cron-29490481-rxbkf\" (UID: \"5a269552-b711-4f3f-85c7-3e603612cd21\") " pod="openstack/keystone-cron-29490481-rxbkf" Jan 26 12:01:00 crc kubenswrapper[4867]: I0126 12:01:00.451590 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5a269552-b711-4f3f-85c7-3e603612cd21-config-data\") pod \"keystone-cron-29490481-rxbkf\" (UID: \"5a269552-b711-4f3f-85c7-3e603612cd21\") " pod="openstack/keystone-cron-29490481-rxbkf" Jan 26 12:01:00 crc kubenswrapper[4867]: I0126 12:01:00.458842 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/5a269552-b711-4f3f-85c7-3e603612cd21-fernet-keys\") pod \"keystone-cron-29490481-rxbkf\" (UID: \"5a269552-b711-4f3f-85c7-3e603612cd21\") " pod="openstack/keystone-cron-29490481-rxbkf" Jan 26 12:01:00 crc kubenswrapper[4867]: I0126 12:01:00.459770 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5a269552-b711-4f3f-85c7-3e603612cd21-config-data\") pod \"keystone-cron-29490481-rxbkf\" (UID: \"5a269552-b711-4f3f-85c7-3e603612cd21\") " pod="openstack/keystone-cron-29490481-rxbkf" Jan 26 12:01:00 crc kubenswrapper[4867]: I0126 12:01:00.467138 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/5a269552-b711-4f3f-85c7-3e603612cd21-combined-ca-bundle\") pod \"keystone-cron-29490481-rxbkf\" (UID: \"5a269552-b711-4f3f-85c7-3e603612cd21\") " pod="openstack/keystone-cron-29490481-rxbkf" Jan 26 12:01:00 crc kubenswrapper[4867]: I0126 12:01:00.469004 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lpjzk\" (UniqueName: \"kubernetes.io/projected/5a269552-b711-4f3f-85c7-3e603612cd21-kube-api-access-lpjzk\") pod \"keystone-cron-29490481-rxbkf\" (UID: \"5a269552-b711-4f3f-85c7-3e603612cd21\") " pod="openstack/keystone-cron-29490481-rxbkf" Jan 26 12:01:00 crc kubenswrapper[4867]: I0126 12:01:00.497300 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29490481-rxbkf" Jan 26 12:01:00 crc kubenswrapper[4867]: I0126 12:01:00.961657 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29490481-rxbkf"] Jan 26 12:01:00 crc kubenswrapper[4867]: W0126 12:01:00.972240 4867 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5a269552_b711_4f3f_85c7_3e603612cd21.slice/crio-f37ab6f76105208ac01aa0e438e667736a9ed350bd9a6f93e14b718fd32c4eaa WatchSource:0}: Error finding container f37ab6f76105208ac01aa0e438e667736a9ed350bd9a6f93e14b718fd32c4eaa: Status 404 returned error can't find the container with id f37ab6f76105208ac01aa0e438e667736a9ed350bd9a6f93e14b718fd32c4eaa Jan 26 12:01:01 crc kubenswrapper[4867]: I0126 12:01:01.271717 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29490481-rxbkf" event={"ID":"5a269552-b711-4f3f-85c7-3e603612cd21","Type":"ContainerStarted","Data":"a541b9e91a0932ef7c89f41b858f206b73de50e0c137d043ca5a8dc35d26a4fb"} Jan 26 12:01:01 crc kubenswrapper[4867]: I0126 12:01:01.272021 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29490481-rxbkf" 
event={"ID":"5a269552-b711-4f3f-85c7-3e603612cd21","Type":"ContainerStarted","Data":"f37ab6f76105208ac01aa0e438e667736a9ed350bd9a6f93e14b718fd32c4eaa"} Jan 26 12:01:01 crc kubenswrapper[4867]: I0126 12:01:01.290452 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-cron-29490481-rxbkf" podStartSLOduration=1.290428793 podStartE2EDuration="1.290428793s" podCreationTimestamp="2026-01-26 12:01:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 12:01:01.285545641 +0000 UTC m=+2610.984120551" watchObservedRunningTime="2026-01-26 12:01:01.290428793 +0000 UTC m=+2610.989003703" Jan 26 12:01:03 crc kubenswrapper[4867]: I0126 12:01:03.294042 4867 generic.go:334] "Generic (PLEG): container finished" podID="5a269552-b711-4f3f-85c7-3e603612cd21" containerID="a541b9e91a0932ef7c89f41b858f206b73de50e0c137d043ca5a8dc35d26a4fb" exitCode=0 Jan 26 12:01:03 crc kubenswrapper[4867]: I0126 12:01:03.294696 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29490481-rxbkf" event={"ID":"5a269552-b711-4f3f-85c7-3e603612cd21","Type":"ContainerDied","Data":"a541b9e91a0932ef7c89f41b858f206b73de50e0c137d043ca5a8dc35d26a4fb"} Jan 26 12:01:04 crc kubenswrapper[4867]: I0126 12:01:04.623862 4867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29490481-rxbkf" Jan 26 12:01:04 crc kubenswrapper[4867]: I0126 12:01:04.735829 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5a269552-b711-4f3f-85c7-3e603612cd21-config-data\") pod \"5a269552-b711-4f3f-85c7-3e603612cd21\" (UID: \"5a269552-b711-4f3f-85c7-3e603612cd21\") " Jan 26 12:01:04 crc kubenswrapper[4867]: I0126 12:01:04.736073 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lpjzk\" (UniqueName: \"kubernetes.io/projected/5a269552-b711-4f3f-85c7-3e603612cd21-kube-api-access-lpjzk\") pod \"5a269552-b711-4f3f-85c7-3e603612cd21\" (UID: \"5a269552-b711-4f3f-85c7-3e603612cd21\") " Jan 26 12:01:04 crc kubenswrapper[4867]: I0126 12:01:04.736114 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/5a269552-b711-4f3f-85c7-3e603612cd21-fernet-keys\") pod \"5a269552-b711-4f3f-85c7-3e603612cd21\" (UID: \"5a269552-b711-4f3f-85c7-3e603612cd21\") " Jan 26 12:01:04 crc kubenswrapper[4867]: I0126 12:01:04.736166 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5a269552-b711-4f3f-85c7-3e603612cd21-combined-ca-bundle\") pod \"5a269552-b711-4f3f-85c7-3e603612cd21\" (UID: \"5a269552-b711-4f3f-85c7-3e603612cd21\") " Jan 26 12:01:04 crc kubenswrapper[4867]: I0126 12:01:04.741036 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5a269552-b711-4f3f-85c7-3e603612cd21-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "5a269552-b711-4f3f-85c7-3e603612cd21" (UID: "5a269552-b711-4f3f-85c7-3e603612cd21"). InnerVolumeSpecName "fernet-keys". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 12:01:04 crc kubenswrapper[4867]: I0126 12:01:04.742355 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5a269552-b711-4f3f-85c7-3e603612cd21-kube-api-access-lpjzk" (OuterVolumeSpecName: "kube-api-access-lpjzk") pod "5a269552-b711-4f3f-85c7-3e603612cd21" (UID: "5a269552-b711-4f3f-85c7-3e603612cd21"). InnerVolumeSpecName "kube-api-access-lpjzk". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 12:01:04 crc kubenswrapper[4867]: I0126 12:01:04.778611 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5a269552-b711-4f3f-85c7-3e603612cd21-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "5a269552-b711-4f3f-85c7-3e603612cd21" (UID: "5a269552-b711-4f3f-85c7-3e603612cd21"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 12:01:04 crc kubenswrapper[4867]: I0126 12:01:04.814604 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5a269552-b711-4f3f-85c7-3e603612cd21-config-data" (OuterVolumeSpecName: "config-data") pod "5a269552-b711-4f3f-85c7-3e603612cd21" (UID: "5a269552-b711-4f3f-85c7-3e603612cd21"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 12:01:04 crc kubenswrapper[4867]: I0126 12:01:04.839097 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lpjzk\" (UniqueName: \"kubernetes.io/projected/5a269552-b711-4f3f-85c7-3e603612cd21-kube-api-access-lpjzk\") on node \"crc\" DevicePath \"\"" Jan 26 12:01:04 crc kubenswrapper[4867]: I0126 12:01:04.839130 4867 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/5a269552-b711-4f3f-85c7-3e603612cd21-fernet-keys\") on node \"crc\" DevicePath \"\"" Jan 26 12:01:04 crc kubenswrapper[4867]: I0126 12:01:04.839139 4867 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5a269552-b711-4f3f-85c7-3e603612cd21-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 12:01:04 crc kubenswrapper[4867]: I0126 12:01:04.839208 4867 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5a269552-b711-4f3f-85c7-3e603612cd21-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 12:01:05 crc kubenswrapper[4867]: I0126 12:01:05.315648 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29490481-rxbkf" event={"ID":"5a269552-b711-4f3f-85c7-3e603612cd21","Type":"ContainerDied","Data":"f37ab6f76105208ac01aa0e438e667736a9ed350bd9a6f93e14b718fd32c4eaa"} Jan 26 12:01:05 crc kubenswrapper[4867]: I0126 12:01:05.315943 4867 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f37ab6f76105208ac01aa0e438e667736a9ed350bd9a6f93e14b718fd32c4eaa" Jan 26 12:01:05 crc kubenswrapper[4867]: I0126 12:01:05.315697 4867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29490481-rxbkf" Jan 26 12:01:10 crc kubenswrapper[4867]: I0126 12:01:10.362081 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-g2p8j" event={"ID":"28baae48-3e74-4b8a-83c4-c0ef13bac140","Type":"ContainerStarted","Data":"c23bde29e3fb2dec410d1b17a63933555286c43dfc477eb6ad5762fffeb4d1c7"} Jan 26 12:01:13 crc kubenswrapper[4867]: I0126 12:01:13.161453 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-858654f9db-2k86r_a1f3bf88-009a-4dc4-9e17-c3ab0ae08c6a/cert-manager-controller/0.log" Jan 26 12:01:13 crc kubenswrapper[4867]: I0126 12:01:13.260283 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-cf98fcc89-tv8pv_5f8ac213-4e48-43fd-9cd3-47c1cf8102f2/cert-manager-cainjector/0.log" Jan 26 12:01:13 crc kubenswrapper[4867]: I0126 12:01:13.363050 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-687f57d79b-rptrs_71abfae8-23ae-4ab8-9840-8c34abcbac6a/cert-manager-webhook/0.log" Jan 26 12:01:16 crc kubenswrapper[4867]: I0126 12:01:16.450759 4867 generic.go:334] "Generic (PLEG): container finished" podID="28baae48-3e74-4b8a-83c4-c0ef13bac140" containerID="c23bde29e3fb2dec410d1b17a63933555286c43dfc477eb6ad5762fffeb4d1c7" exitCode=0 Jan 26 12:01:16 crc kubenswrapper[4867]: I0126 12:01:16.450831 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-g2p8j" event={"ID":"28baae48-3e74-4b8a-83c4-c0ef13bac140","Type":"ContainerDied","Data":"c23bde29e3fb2dec410d1b17a63933555286c43dfc477eb6ad5762fffeb4d1c7"} Jan 26 12:01:19 crc kubenswrapper[4867]: I0126 12:01:19.507533 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-g2p8j" 
event={"ID":"28baae48-3e74-4b8a-83c4-c0ef13bac140","Type":"ContainerStarted","Data":"c0b3705c353a46f47948b51c5c4f08fe382a835ad15347e0cff3e28528e49607"} Jan 26 12:01:19 crc kubenswrapper[4867]: I0126 12:01:19.536360 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-g2p8j" podStartSLOduration=3.1357605729999998 podStartE2EDuration="21.536339109s" podCreationTimestamp="2026-01-26 12:00:58 +0000 UTC" firstStartedPulling="2026-01-26 12:01:00.263837552 +0000 UTC m=+2609.962412462" lastFinishedPulling="2026-01-26 12:01:18.664416088 +0000 UTC m=+2628.362990998" observedRunningTime="2026-01-26 12:01:19.531351843 +0000 UTC m=+2629.229926753" watchObservedRunningTime="2026-01-26 12:01:19.536339109 +0000 UTC m=+2629.234914019" Jan 26 12:01:25 crc kubenswrapper[4867]: I0126 12:01:25.492052 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-console-plugin-7754f76f8b-9mhpj_5496960a-d548-45d1-b1af-46a2019c8258/nmstate-console-plugin/0.log" Jan 26 12:01:25 crc kubenswrapper[4867]: I0126 12:01:25.679528 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-54757c584b-9jgvk_2d4cf215-bd64-4e38-8e9b-ea2b90e36137/kube-rbac-proxy/0.log" Jan 26 12:01:25 crc kubenswrapper[4867]: I0126 12:01:25.769969 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-handler-jhqkb_b1ffa812-b614-4e1d-a243-bea92b55da60/nmstate-handler/0.log" Jan 26 12:01:25 crc kubenswrapper[4867]: I0126 12:01:25.802641 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-54757c584b-9jgvk_2d4cf215-bd64-4e38-8e9b-ea2b90e36137/nmstate-metrics/0.log" Jan 26 12:01:25 crc kubenswrapper[4867]: I0126 12:01:25.941500 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-operator-646758c888-wqlhb_5d46639d-9922-4557-a7f2-d40917695fef/nmstate-operator/0.log" Jan 26 
12:01:25 crc kubenswrapper[4867]: I0126 12:01:25.997371 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-webhook-8474b5b9d8-zttkf_72e3b4aa-81dd-4ae0-aa28-35c7092e98fd/nmstate-webhook/0.log" Jan 26 12:01:29 crc kubenswrapper[4867]: I0126 12:01:29.202919 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-g2p8j" Jan 26 12:01:29 crc kubenswrapper[4867]: I0126 12:01:29.203369 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-g2p8j" Jan 26 12:01:29 crc kubenswrapper[4867]: I0126 12:01:29.253590 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-g2p8j" Jan 26 12:01:29 crc kubenswrapper[4867]: I0126 12:01:29.636546 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-g2p8j" Jan 26 12:01:29 crc kubenswrapper[4867]: I0126 12:01:29.882557 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-g2p8j"] Jan 26 12:01:30 crc kubenswrapper[4867]: I0126 12:01:30.064524 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-p6pt2"] Jan 26 12:01:30 crc kubenswrapper[4867]: I0126 12:01:30.065004 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-p6pt2" podUID="15505f79-e3ef-4aa2-8f0d-6d6c4b097fc7" containerName="registry-server" containerID="cri-o://88c70dcfb6e0088d11b43d03d3fa417a867baf558acea4d3905c207387b2388a" gracePeriod=2 Jan 26 12:01:30 crc kubenswrapper[4867]: I0126 12:01:30.550830 4867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-p6pt2" Jan 26 12:01:30 crc kubenswrapper[4867]: I0126 12:01:30.627606 4867 generic.go:334] "Generic (PLEG): container finished" podID="15505f79-e3ef-4aa2-8f0d-6d6c4b097fc7" containerID="88c70dcfb6e0088d11b43d03d3fa417a867baf558acea4d3905c207387b2388a" exitCode=0 Jan 26 12:01:30 crc kubenswrapper[4867]: I0126 12:01:30.630994 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-p6pt2" event={"ID":"15505f79-e3ef-4aa2-8f0d-6d6c4b097fc7","Type":"ContainerDied","Data":"88c70dcfb6e0088d11b43d03d3fa417a867baf558acea4d3905c207387b2388a"} Jan 26 12:01:30 crc kubenswrapper[4867]: I0126 12:01:30.631084 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-p6pt2" event={"ID":"15505f79-e3ef-4aa2-8f0d-6d6c4b097fc7","Type":"ContainerDied","Data":"187ab09487aff3145821227d5d17ff42a149c1f0acf0c3b3a7dc2c16efd58aa4"} Jan 26 12:01:30 crc kubenswrapper[4867]: I0126 12:01:30.631057 4867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-p6pt2" Jan 26 12:01:30 crc kubenswrapper[4867]: I0126 12:01:30.631547 4867 scope.go:117] "RemoveContainer" containerID="88c70dcfb6e0088d11b43d03d3fa417a867baf558acea4d3905c207387b2388a" Jan 26 12:01:30 crc kubenswrapper[4867]: I0126 12:01:30.670357 4867 scope.go:117] "RemoveContainer" containerID="9c998fc8fd1d06e2fa75f7c6a87fc83e23ab4fa0331710416fd563f279c782e3" Jan 26 12:01:30 crc kubenswrapper[4867]: I0126 12:01:30.727992 4867 scope.go:117] "RemoveContainer" containerID="2c9c39fe51333692d436aa30b79e8859f1541b89c1f3426f3d998d781452be43" Jan 26 12:01:30 crc kubenswrapper[4867]: I0126 12:01:30.754827 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/15505f79-e3ef-4aa2-8f0d-6d6c4b097fc7-catalog-content\") pod \"15505f79-e3ef-4aa2-8f0d-6d6c4b097fc7\" (UID: \"15505f79-e3ef-4aa2-8f0d-6d6c4b097fc7\") " Jan 26 12:01:30 crc kubenswrapper[4867]: I0126 12:01:30.754929 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/15505f79-e3ef-4aa2-8f0d-6d6c4b097fc7-utilities\") pod \"15505f79-e3ef-4aa2-8f0d-6d6c4b097fc7\" (UID: \"15505f79-e3ef-4aa2-8f0d-6d6c4b097fc7\") " Jan 26 12:01:30 crc kubenswrapper[4867]: I0126 12:01:30.755004 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mlhlx\" (UniqueName: \"kubernetes.io/projected/15505f79-e3ef-4aa2-8f0d-6d6c4b097fc7-kube-api-access-mlhlx\") pod \"15505f79-e3ef-4aa2-8f0d-6d6c4b097fc7\" (UID: \"15505f79-e3ef-4aa2-8f0d-6d6c4b097fc7\") " Jan 26 12:01:30 crc kubenswrapper[4867]: I0126 12:01:30.759426 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/15505f79-e3ef-4aa2-8f0d-6d6c4b097fc7-utilities" (OuterVolumeSpecName: "utilities") pod "15505f79-e3ef-4aa2-8f0d-6d6c4b097fc7" (UID: 
"15505f79-e3ef-4aa2-8f0d-6d6c4b097fc7"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 12:01:30 crc kubenswrapper[4867]: I0126 12:01:30.768916 4867 scope.go:117] "RemoveContainer" containerID="88c70dcfb6e0088d11b43d03d3fa417a867baf558acea4d3905c207387b2388a" Jan 26 12:01:30 crc kubenswrapper[4867]: I0126 12:01:30.774513 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/15505f79-e3ef-4aa2-8f0d-6d6c4b097fc7-kube-api-access-mlhlx" (OuterVolumeSpecName: "kube-api-access-mlhlx") pod "15505f79-e3ef-4aa2-8f0d-6d6c4b097fc7" (UID: "15505f79-e3ef-4aa2-8f0d-6d6c4b097fc7"). InnerVolumeSpecName "kube-api-access-mlhlx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 12:01:30 crc kubenswrapper[4867]: E0126 12:01:30.778387 4867 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"88c70dcfb6e0088d11b43d03d3fa417a867baf558acea4d3905c207387b2388a\": container with ID starting with 88c70dcfb6e0088d11b43d03d3fa417a867baf558acea4d3905c207387b2388a not found: ID does not exist" containerID="88c70dcfb6e0088d11b43d03d3fa417a867baf558acea4d3905c207387b2388a" Jan 26 12:01:30 crc kubenswrapper[4867]: I0126 12:01:30.778437 4867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"88c70dcfb6e0088d11b43d03d3fa417a867baf558acea4d3905c207387b2388a"} err="failed to get container status \"88c70dcfb6e0088d11b43d03d3fa417a867baf558acea4d3905c207387b2388a\": rpc error: code = NotFound desc = could not find container \"88c70dcfb6e0088d11b43d03d3fa417a867baf558acea4d3905c207387b2388a\": container with ID starting with 88c70dcfb6e0088d11b43d03d3fa417a867baf558acea4d3905c207387b2388a not found: ID does not exist" Jan 26 12:01:30 crc kubenswrapper[4867]: I0126 12:01:30.778463 4867 scope.go:117] "RemoveContainer" 
containerID="9c998fc8fd1d06e2fa75f7c6a87fc83e23ab4fa0331710416fd563f279c782e3" Jan 26 12:01:30 crc kubenswrapper[4867]: E0126 12:01:30.780995 4867 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9c998fc8fd1d06e2fa75f7c6a87fc83e23ab4fa0331710416fd563f279c782e3\": container with ID starting with 9c998fc8fd1d06e2fa75f7c6a87fc83e23ab4fa0331710416fd563f279c782e3 not found: ID does not exist" containerID="9c998fc8fd1d06e2fa75f7c6a87fc83e23ab4fa0331710416fd563f279c782e3" Jan 26 12:01:30 crc kubenswrapper[4867]: I0126 12:01:30.781058 4867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9c998fc8fd1d06e2fa75f7c6a87fc83e23ab4fa0331710416fd563f279c782e3"} err="failed to get container status \"9c998fc8fd1d06e2fa75f7c6a87fc83e23ab4fa0331710416fd563f279c782e3\": rpc error: code = NotFound desc = could not find container \"9c998fc8fd1d06e2fa75f7c6a87fc83e23ab4fa0331710416fd563f279c782e3\": container with ID starting with 9c998fc8fd1d06e2fa75f7c6a87fc83e23ab4fa0331710416fd563f279c782e3 not found: ID does not exist" Jan 26 12:01:30 crc kubenswrapper[4867]: I0126 12:01:30.781093 4867 scope.go:117] "RemoveContainer" containerID="2c9c39fe51333692d436aa30b79e8859f1541b89c1f3426f3d998d781452be43" Jan 26 12:01:30 crc kubenswrapper[4867]: E0126 12:01:30.781646 4867 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2c9c39fe51333692d436aa30b79e8859f1541b89c1f3426f3d998d781452be43\": container with ID starting with 2c9c39fe51333692d436aa30b79e8859f1541b89c1f3426f3d998d781452be43 not found: ID does not exist" containerID="2c9c39fe51333692d436aa30b79e8859f1541b89c1f3426f3d998d781452be43" Jan 26 12:01:30 crc kubenswrapper[4867]: I0126 12:01:30.781704 4867 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"2c9c39fe51333692d436aa30b79e8859f1541b89c1f3426f3d998d781452be43"} err="failed to get container status \"2c9c39fe51333692d436aa30b79e8859f1541b89c1f3426f3d998d781452be43\": rpc error: code = NotFound desc = could not find container \"2c9c39fe51333692d436aa30b79e8859f1541b89c1f3426f3d998d781452be43\": container with ID starting with 2c9c39fe51333692d436aa30b79e8859f1541b89c1f3426f3d998d781452be43 not found: ID does not exist" Jan 26 12:01:30 crc kubenswrapper[4867]: I0126 12:01:30.857252 4867 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/15505f79-e3ef-4aa2-8f0d-6d6c4b097fc7-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 12:01:30 crc kubenswrapper[4867]: I0126 12:01:30.857294 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mlhlx\" (UniqueName: \"kubernetes.io/projected/15505f79-e3ef-4aa2-8f0d-6d6c4b097fc7-kube-api-access-mlhlx\") on node \"crc\" DevicePath \"\"" Jan 26 12:01:30 crc kubenswrapper[4867]: I0126 12:01:30.861068 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/15505f79-e3ef-4aa2-8f0d-6d6c4b097fc7-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "15505f79-e3ef-4aa2-8f0d-6d6c4b097fc7" (UID: "15505f79-e3ef-4aa2-8f0d-6d6c4b097fc7"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 12:01:30 crc kubenswrapper[4867]: I0126 12:01:30.959377 4867 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/15505f79-e3ef-4aa2-8f0d-6d6c4b097fc7-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 12:01:30 crc kubenswrapper[4867]: I0126 12:01:30.962954 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-p6pt2"] Jan 26 12:01:30 crc kubenswrapper[4867]: I0126 12:01:30.970145 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-p6pt2"] Jan 26 12:01:32 crc kubenswrapper[4867]: I0126 12:01:32.575760 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="15505f79-e3ef-4aa2-8f0d-6d6c4b097fc7" path="/var/lib/kubelet/pods/15505f79-e3ef-4aa2-8f0d-6d6c4b097fc7/volumes" Jan 26 12:01:52 crc kubenswrapper[4867]: I0126 12:01:52.418014 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6968d8fdc4-496nf_6e82409c-e6fc-4a6b-964f-95fee3ed959d/kube-rbac-proxy/0.log" Jan 26 12:01:52 crc kubenswrapper[4867]: I0126 12:01:52.501810 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6968d8fdc4-496nf_6e82409c-e6fc-4a6b-964f-95fee3ed959d/controller/0.log" Jan 26 12:01:52 crc kubenswrapper[4867]: I0126 12:01:52.617186 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-fdvhb_a5badbe6-91c6-424f-b422-df4fe4761e26/cp-frr-files/0.log" Jan 26 12:01:52 crc kubenswrapper[4867]: I0126 12:01:52.836518 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-fdvhb_a5badbe6-91c6-424f-b422-df4fe4761e26/cp-reloader/0.log" Jan 26 12:01:52 crc kubenswrapper[4867]: I0126 12:01:52.858848 4867 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_frr-k8s-fdvhb_a5badbe6-91c6-424f-b422-df4fe4761e26/cp-frr-files/0.log" Jan 26 12:01:52 crc kubenswrapper[4867]: I0126 12:01:52.870250 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-fdvhb_a5badbe6-91c6-424f-b422-df4fe4761e26/cp-metrics/0.log" Jan 26 12:01:52 crc kubenswrapper[4867]: I0126 12:01:52.875553 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-fdvhb_a5badbe6-91c6-424f-b422-df4fe4761e26/cp-reloader/0.log" Jan 26 12:01:53 crc kubenswrapper[4867]: I0126 12:01:53.072151 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-fdvhb_a5badbe6-91c6-424f-b422-df4fe4761e26/cp-frr-files/0.log" Jan 26 12:01:53 crc kubenswrapper[4867]: I0126 12:01:53.087894 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-fdvhb_a5badbe6-91c6-424f-b422-df4fe4761e26/cp-metrics/0.log" Jan 26 12:01:53 crc kubenswrapper[4867]: I0126 12:01:53.109097 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-fdvhb_a5badbe6-91c6-424f-b422-df4fe4761e26/cp-reloader/0.log" Jan 26 12:01:53 crc kubenswrapper[4867]: I0126 12:01:53.129777 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-fdvhb_a5badbe6-91c6-424f-b422-df4fe4761e26/cp-metrics/0.log" Jan 26 12:01:53 crc kubenswrapper[4867]: I0126 12:01:53.277038 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-fdvhb_a5badbe6-91c6-424f-b422-df4fe4761e26/cp-frr-files/0.log" Jan 26 12:01:53 crc kubenswrapper[4867]: I0126 12:01:53.301454 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-fdvhb_a5badbe6-91c6-424f-b422-df4fe4761e26/controller/0.log" Jan 26 12:01:53 crc kubenswrapper[4867]: I0126 12:01:53.302138 4867 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_frr-k8s-fdvhb_a5badbe6-91c6-424f-b422-df4fe4761e26/cp-reloader/0.log" Jan 26 12:01:53 crc kubenswrapper[4867]: I0126 12:01:53.303280 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-fdvhb_a5badbe6-91c6-424f-b422-df4fe4761e26/cp-metrics/0.log" Jan 26 12:01:53 crc kubenswrapper[4867]: I0126 12:01:53.484982 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-fdvhb_a5badbe6-91c6-424f-b422-df4fe4761e26/frr-metrics/0.log" Jan 26 12:01:53 crc kubenswrapper[4867]: I0126 12:01:53.496435 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-fdvhb_a5badbe6-91c6-424f-b422-df4fe4761e26/kube-rbac-proxy-frr/0.log" Jan 26 12:01:53 crc kubenswrapper[4867]: I0126 12:01:53.539616 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-fdvhb_a5badbe6-91c6-424f-b422-df4fe4761e26/kube-rbac-proxy/0.log" Jan 26 12:01:53 crc kubenswrapper[4867]: I0126 12:01:53.720044 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-fdvhb_a5badbe6-91c6-424f-b422-df4fe4761e26/reloader/0.log" Jan 26 12:01:53 crc kubenswrapper[4867]: I0126 12:01:53.772430 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-webhook-server-7df86c4f6c-mtgxn_7d39a9a1-98f9-4404-a415-867570383af9/frr-k8s-webhook-server/0.log" Jan 26 12:01:54 crc kubenswrapper[4867]: I0126 12:01:54.002005 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-b6879bdfc-xwrhn_e6c18bce-ada3-4e21-8a80-fa9bc4fa01f4/manager/0.log" Jan 26 12:01:54 crc kubenswrapper[4867]: I0126 12:01:54.198900 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-5cbc548b4-c9cg5_0898e985-06ad-4cde-b358-75c0e395d72d/webhook-server/0.log" Jan 26 12:01:54 crc kubenswrapper[4867]: I0126 12:01:54.244909 4867 log.go:25] 
"Finished parsing log file" path="/var/log/pods/metallb-system_speaker-xzzx4_29fc757d-2542-48c5-bea3-05ff023baa05/kube-rbac-proxy/0.log" Jan 26 12:01:54 crc kubenswrapper[4867]: I0126 12:01:54.373144 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-fdvhb_a5badbe6-91c6-424f-b422-df4fe4761e26/frr/0.log" Jan 26 12:01:54 crc kubenswrapper[4867]: I0126 12:01:54.677163 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-xzzx4_29fc757d-2542-48c5-bea3-05ff023baa05/speaker/0.log" Jan 26 12:02:06 crc kubenswrapper[4867]: I0126 12:02:06.673238 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcccbgd_5037ef99-c48d-4c78-a3bd-d767d51ab43f/util/0.log" Jan 26 12:02:06 crc kubenswrapper[4867]: I0126 12:02:06.877208 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcccbgd_5037ef99-c48d-4c78-a3bd-d767d51ab43f/pull/0.log" Jan 26 12:02:06 crc kubenswrapper[4867]: I0126 12:02:06.904240 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcccbgd_5037ef99-c48d-4c78-a3bd-d767d51ab43f/util/0.log" Jan 26 12:02:06 crc kubenswrapper[4867]: I0126 12:02:06.959648 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcccbgd_5037ef99-c48d-4c78-a3bd-d767d51ab43f/pull/0.log" Jan 26 12:02:07 crc kubenswrapper[4867]: I0126 12:02:07.144618 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcccbgd_5037ef99-c48d-4c78-a3bd-d767d51ab43f/util/0.log" Jan 26 12:02:07 crc kubenswrapper[4867]: I0126 12:02:07.154024 4867 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcccbgd_5037ef99-c48d-4c78-a3bd-d767d51ab43f/pull/0.log" Jan 26 12:02:07 crc kubenswrapper[4867]: I0126 12:02:07.169976 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcccbgd_5037ef99-c48d-4c78-a3bd-d767d51ab43f/extract/0.log" Jan 26 12:02:07 crc kubenswrapper[4867]: I0126 12:02:07.326535 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713t8lrz_4d062b30-7ca4-4191-89d4-21c153fbf3dc/util/0.log" Jan 26 12:02:07 crc kubenswrapper[4867]: I0126 12:02:07.484545 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713t8lrz_4d062b30-7ca4-4191-89d4-21c153fbf3dc/pull/0.log" Jan 26 12:02:07 crc kubenswrapper[4867]: I0126 12:02:07.511800 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713t8lrz_4d062b30-7ca4-4191-89d4-21c153fbf3dc/util/0.log" Jan 26 12:02:07 crc kubenswrapper[4867]: I0126 12:02:07.553770 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713t8lrz_4d062b30-7ca4-4191-89d4-21c153fbf3dc/pull/0.log" Jan 26 12:02:07 crc kubenswrapper[4867]: I0126 12:02:07.725600 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713t8lrz_4d062b30-7ca4-4191-89d4-21c153fbf3dc/extract/0.log" Jan 26 12:02:07 crc kubenswrapper[4867]: I0126 12:02:07.751850 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713t8lrz_4d062b30-7ca4-4191-89d4-21c153fbf3dc/util/0.log" Jan 
26 12:02:07 crc kubenswrapper[4867]: I0126 12:02:07.757499 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713t8lrz_4d062b30-7ca4-4191-89d4-21c153fbf3dc/pull/0.log" Jan 26 12:02:07 crc kubenswrapper[4867]: I0126 12:02:07.943941 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-jqxkw_f0d09d9b-e570-45a9-9511-c95b88f2ffd7/extract-utilities/0.log" Jan 26 12:02:08 crc kubenswrapper[4867]: I0126 12:02:08.101563 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-jqxkw_f0d09d9b-e570-45a9-9511-c95b88f2ffd7/extract-utilities/0.log" Jan 26 12:02:08 crc kubenswrapper[4867]: I0126 12:02:08.133945 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-jqxkw_f0d09d9b-e570-45a9-9511-c95b88f2ffd7/extract-content/0.log" Jan 26 12:02:08 crc kubenswrapper[4867]: I0126 12:02:08.180649 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-jqxkw_f0d09d9b-e570-45a9-9511-c95b88f2ffd7/extract-content/0.log" Jan 26 12:02:08 crc kubenswrapper[4867]: I0126 12:02:08.349808 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-jqxkw_f0d09d9b-e570-45a9-9511-c95b88f2ffd7/extract-content/0.log" Jan 26 12:02:08 crc kubenswrapper[4867]: I0126 12:02:08.396388 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-jqxkw_f0d09d9b-e570-45a9-9511-c95b88f2ffd7/extract-utilities/0.log" Jan 26 12:02:08 crc kubenswrapper[4867]: I0126 12:02:08.662180 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-jqxkw_f0d09d9b-e570-45a9-9511-c95b88f2ffd7/registry-server/0.log" Jan 26 12:02:08 crc kubenswrapper[4867]: I0126 12:02:08.734892 4867 log.go:25] "Finished 
parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-2hms5_547f161f-485f-4a09-909f-df4f3990046f/extract-utilities/0.log" Jan 26 12:02:08 crc kubenswrapper[4867]: I0126 12:02:08.883618 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-2hms5_547f161f-485f-4a09-909f-df4f3990046f/extract-utilities/0.log" Jan 26 12:02:08 crc kubenswrapper[4867]: I0126 12:02:08.888440 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-2hms5_547f161f-485f-4a09-909f-df4f3990046f/extract-content/0.log" Jan 26 12:02:08 crc kubenswrapper[4867]: I0126 12:02:08.891455 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-2hms5_547f161f-485f-4a09-909f-df4f3990046f/extract-content/0.log" Jan 26 12:02:09 crc kubenswrapper[4867]: I0126 12:02:09.133370 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-2hms5_547f161f-485f-4a09-909f-df4f3990046f/extract-utilities/0.log" Jan 26 12:02:09 crc kubenswrapper[4867]: I0126 12:02:09.151472 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-2hms5_547f161f-485f-4a09-909f-df4f3990046f/extract-content/0.log" Jan 26 12:02:09 crc kubenswrapper[4867]: I0126 12:02:09.424610 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-f7s2h_d30c958f-102e-4d3f-a3e1-853ad02e7bfe/marketplace-operator/0.log" Jan 26 12:02:09 crc kubenswrapper[4867]: I0126 12:02:09.505692 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-m24tl_f59c5f80-bfa1-445a-a552-ef0908b15efd/extract-utilities/0.log" Jan 26 12:02:09 crc kubenswrapper[4867]: I0126 12:02:09.582181 4867 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_community-operators-2hms5_547f161f-485f-4a09-909f-df4f3990046f/registry-server/0.log" Jan 26 12:02:09 crc kubenswrapper[4867]: I0126 12:02:09.637469 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-m24tl_f59c5f80-bfa1-445a-a552-ef0908b15efd/extract-utilities/0.log" Jan 26 12:02:09 crc kubenswrapper[4867]: I0126 12:02:09.664795 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-m24tl_f59c5f80-bfa1-445a-a552-ef0908b15efd/extract-content/0.log" Jan 26 12:02:09 crc kubenswrapper[4867]: I0126 12:02:09.699478 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-m24tl_f59c5f80-bfa1-445a-a552-ef0908b15efd/extract-content/0.log" Jan 26 12:02:09 crc kubenswrapper[4867]: I0126 12:02:09.867006 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-m24tl_f59c5f80-bfa1-445a-a552-ef0908b15efd/extract-utilities/0.log" Jan 26 12:02:09 crc kubenswrapper[4867]: I0126 12:02:09.906085 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-m24tl_f59c5f80-bfa1-445a-a552-ef0908b15efd/extract-content/0.log" Jan 26 12:02:09 crc kubenswrapper[4867]: I0126 12:02:09.992341 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-m24tl_f59c5f80-bfa1-445a-a552-ef0908b15efd/registry-server/0.log" Jan 26 12:02:10 crc kubenswrapper[4867]: I0126 12:02:10.031109 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-g2p8j_28baae48-3e74-4b8a-83c4-c0ef13bac140/extract-utilities/0.log" Jan 26 12:02:10 crc kubenswrapper[4867]: I0126 12:02:10.247708 4867 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_redhat-operators-g2p8j_28baae48-3e74-4b8a-83c4-c0ef13bac140/extract-utilities/0.log" Jan 26 12:02:10 crc kubenswrapper[4867]: I0126 12:02:10.268885 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-g2p8j_28baae48-3e74-4b8a-83c4-c0ef13bac140/extract-content/0.log" Jan 26 12:02:10 crc kubenswrapper[4867]: I0126 12:02:10.277950 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-g2p8j_28baae48-3e74-4b8a-83c4-c0ef13bac140/extract-content/0.log" Jan 26 12:02:10 crc kubenswrapper[4867]: I0126 12:02:10.456913 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-g2p8j_28baae48-3e74-4b8a-83c4-c0ef13bac140/extract-utilities/0.log" Jan 26 12:02:10 crc kubenswrapper[4867]: I0126 12:02:10.500115 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-g2p8j_28baae48-3e74-4b8a-83c4-c0ef13bac140/extract-content/0.log" Jan 26 12:02:10 crc kubenswrapper[4867]: I0126 12:02:10.612384 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-g2p8j_28baae48-3e74-4b8a-83c4-c0ef13bac140/registry-server/0.log" Jan 26 12:02:36 crc kubenswrapper[4867]: I0126 12:02:36.294019 4867 patch_prober.go:28] interesting pod/machine-config-daemon-g6cth container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 12:02:36 crc kubenswrapper[4867]: I0126 12:02:36.294562 4867 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-g6cth" podUID="115cad9f-057f-4e63-b408-8fa7a358a191" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 
127.0.0.1:8798: connect: connection refused" Jan 26 12:03:06 crc kubenswrapper[4867]: I0126 12:03:06.294022 4867 patch_prober.go:28] interesting pod/machine-config-daemon-g6cth container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 12:03:06 crc kubenswrapper[4867]: I0126 12:03:06.294566 4867 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-g6cth" podUID="115cad9f-057f-4e63-b408-8fa7a358a191" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 12:03:36 crc kubenswrapper[4867]: I0126 12:03:36.293760 4867 patch_prober.go:28] interesting pod/machine-config-daemon-g6cth container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 12:03:36 crc kubenswrapper[4867]: I0126 12:03:36.294544 4867 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-g6cth" podUID="115cad9f-057f-4e63-b408-8fa7a358a191" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 12:03:36 crc kubenswrapper[4867]: I0126 12:03:36.294643 4867 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-g6cth" Jan 26 12:03:36 crc kubenswrapper[4867]: I0126 12:03:36.295985 4867 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" 
containerStatusID={"Type":"cri-o","ID":"c8c591e2190a01878a2643916d15100018088284afda1447941615892cd3e504"} pod="openshift-machine-config-operator/machine-config-daemon-g6cth" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 26 12:03:36 crc kubenswrapper[4867]: I0126 12:03:36.296088 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-g6cth" podUID="115cad9f-057f-4e63-b408-8fa7a358a191" containerName="machine-config-daemon" containerID="cri-o://c8c591e2190a01878a2643916d15100018088284afda1447941615892cd3e504" gracePeriod=600 Jan 26 12:03:36 crc kubenswrapper[4867]: I0126 12:03:36.725337 4867 generic.go:334] "Generic (PLEG): container finished" podID="115cad9f-057f-4e63-b408-8fa7a358a191" containerID="c8c591e2190a01878a2643916d15100018088284afda1447941615892cd3e504" exitCode=0 Jan 26 12:03:36 crc kubenswrapper[4867]: I0126 12:03:36.725719 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-g6cth" event={"ID":"115cad9f-057f-4e63-b408-8fa7a358a191","Type":"ContainerDied","Data":"c8c591e2190a01878a2643916d15100018088284afda1447941615892cd3e504"} Jan 26 12:03:36 crc kubenswrapper[4867]: I0126 12:03:36.725751 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-g6cth" event={"ID":"115cad9f-057f-4e63-b408-8fa7a358a191","Type":"ContainerStarted","Data":"9a3dd360898695daa1d67ee606fd3b5716a25eb9942eeaf1be5f42d084a063a9"} Jan 26 12:03:36 crc kubenswrapper[4867]: I0126 12:03:36.725789 4867 scope.go:117] "RemoveContainer" containerID="b468a5733b70f2daff8e0e41bb36084cdf82f55dbb0bac51d0d68f1ce3f30b64" Jan 26 12:03:45 crc kubenswrapper[4867]: I0126 12:03:45.814178 4867 generic.go:334] "Generic (PLEG): container finished" podID="bd819386-7af9-4fe3-b59d-b70bfa3cfac3" 
containerID="54a9b403e8d6ad973cefd5f1e52119536ccc6f6d10a7f3fed630f7f8c5ab21da" exitCode=0 Jan 26 12:03:45 crc kubenswrapper[4867]: I0126 12:03:45.814271 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-drwwh/must-gather-cv25j" event={"ID":"bd819386-7af9-4fe3-b59d-b70bfa3cfac3","Type":"ContainerDied","Data":"54a9b403e8d6ad973cefd5f1e52119536ccc6f6d10a7f3fed630f7f8c5ab21da"} Jan 26 12:03:45 crc kubenswrapper[4867]: I0126 12:03:45.815490 4867 scope.go:117] "RemoveContainer" containerID="54a9b403e8d6ad973cefd5f1e52119536ccc6f6d10a7f3fed630f7f8c5ab21da" Jan 26 12:03:46 crc kubenswrapper[4867]: I0126 12:03:46.538007 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-drwwh_must-gather-cv25j_bd819386-7af9-4fe3-b59d-b70bfa3cfac3/gather/0.log" Jan 26 12:03:56 crc kubenswrapper[4867]: I0126 12:03:56.165391 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-drwwh/must-gather-cv25j"] Jan 26 12:03:56 crc kubenswrapper[4867]: I0126 12:03:56.166478 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-must-gather-drwwh/must-gather-cv25j" podUID="bd819386-7af9-4fe3-b59d-b70bfa3cfac3" containerName="copy" containerID="cri-o://89e282fb0686d2b2c388feece24ce78e41b706774624017edbe6c54080e0cdaf" gracePeriod=2 Jan 26 12:03:56 crc kubenswrapper[4867]: I0126 12:03:56.178949 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-drwwh/must-gather-cv25j"] Jan 26 12:03:56 crc kubenswrapper[4867]: I0126 12:03:56.582151 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-drwwh_must-gather-cv25j_bd819386-7af9-4fe3-b59d-b70bfa3cfac3/copy/0.log" Jan 26 12:03:56 crc kubenswrapper[4867]: I0126 12:03:56.582974 4867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-drwwh/must-gather-cv25j" Jan 26 12:03:56 crc kubenswrapper[4867]: I0126 12:03:56.691207 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zkr69\" (UniqueName: \"kubernetes.io/projected/bd819386-7af9-4fe3-b59d-b70bfa3cfac3-kube-api-access-zkr69\") pod \"bd819386-7af9-4fe3-b59d-b70bfa3cfac3\" (UID: \"bd819386-7af9-4fe3-b59d-b70bfa3cfac3\") " Jan 26 12:03:56 crc kubenswrapper[4867]: I0126 12:03:56.691484 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/bd819386-7af9-4fe3-b59d-b70bfa3cfac3-must-gather-output\") pod \"bd819386-7af9-4fe3-b59d-b70bfa3cfac3\" (UID: \"bd819386-7af9-4fe3-b59d-b70bfa3cfac3\") " Jan 26 12:03:56 crc kubenswrapper[4867]: I0126 12:03:56.698103 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd819386-7af9-4fe3-b59d-b70bfa3cfac3-kube-api-access-zkr69" (OuterVolumeSpecName: "kube-api-access-zkr69") pod "bd819386-7af9-4fe3-b59d-b70bfa3cfac3" (UID: "bd819386-7af9-4fe3-b59d-b70bfa3cfac3"). InnerVolumeSpecName "kube-api-access-zkr69". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 12:03:56 crc kubenswrapper[4867]: I0126 12:03:56.793182 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zkr69\" (UniqueName: \"kubernetes.io/projected/bd819386-7af9-4fe3-b59d-b70bfa3cfac3-kube-api-access-zkr69\") on node \"crc\" DevicePath \"\"" Jan 26 12:03:56 crc kubenswrapper[4867]: I0126 12:03:56.828605 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bd819386-7af9-4fe3-b59d-b70bfa3cfac3-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "bd819386-7af9-4fe3-b59d-b70bfa3cfac3" (UID: "bd819386-7af9-4fe3-b59d-b70bfa3cfac3"). InnerVolumeSpecName "must-gather-output". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 12:03:56 crc kubenswrapper[4867]: I0126 12:03:56.896269 4867 reconciler_common.go:293] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/bd819386-7af9-4fe3-b59d-b70bfa3cfac3-must-gather-output\") on node \"crc\" DevicePath \"\"" Jan 26 12:03:56 crc kubenswrapper[4867]: I0126 12:03:56.912262 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-drwwh_must-gather-cv25j_bd819386-7af9-4fe3-b59d-b70bfa3cfac3/copy/0.log" Jan 26 12:03:56 crc kubenswrapper[4867]: I0126 12:03:56.912760 4867 generic.go:334] "Generic (PLEG): container finished" podID="bd819386-7af9-4fe3-b59d-b70bfa3cfac3" containerID="89e282fb0686d2b2c388feece24ce78e41b706774624017edbe6c54080e0cdaf" exitCode=143 Jan 26 12:03:56 crc kubenswrapper[4867]: I0126 12:03:56.912821 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-drwwh/must-gather-cv25j" Jan 26 12:03:56 crc kubenswrapper[4867]: I0126 12:03:56.912840 4867 scope.go:117] "RemoveContainer" containerID="89e282fb0686d2b2c388feece24ce78e41b706774624017edbe6c54080e0cdaf" Jan 26 12:03:56 crc kubenswrapper[4867]: I0126 12:03:56.938967 4867 scope.go:117] "RemoveContainer" containerID="54a9b403e8d6ad973cefd5f1e52119536ccc6f6d10a7f3fed630f7f8c5ab21da" Jan 26 12:03:57 crc kubenswrapper[4867]: I0126 12:03:57.028890 4867 scope.go:117] "RemoveContainer" containerID="89e282fb0686d2b2c388feece24ce78e41b706774624017edbe6c54080e0cdaf" Jan 26 12:03:57 crc kubenswrapper[4867]: E0126 12:03:57.029496 4867 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"89e282fb0686d2b2c388feece24ce78e41b706774624017edbe6c54080e0cdaf\": container with ID starting with 89e282fb0686d2b2c388feece24ce78e41b706774624017edbe6c54080e0cdaf not found: ID does not exist" 
containerID="89e282fb0686d2b2c388feece24ce78e41b706774624017edbe6c54080e0cdaf" Jan 26 12:03:57 crc kubenswrapper[4867]: I0126 12:03:57.029545 4867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"89e282fb0686d2b2c388feece24ce78e41b706774624017edbe6c54080e0cdaf"} err="failed to get container status \"89e282fb0686d2b2c388feece24ce78e41b706774624017edbe6c54080e0cdaf\": rpc error: code = NotFound desc = could not find container \"89e282fb0686d2b2c388feece24ce78e41b706774624017edbe6c54080e0cdaf\": container with ID starting with 89e282fb0686d2b2c388feece24ce78e41b706774624017edbe6c54080e0cdaf not found: ID does not exist" Jan 26 12:03:57 crc kubenswrapper[4867]: I0126 12:03:57.029575 4867 scope.go:117] "RemoveContainer" containerID="54a9b403e8d6ad973cefd5f1e52119536ccc6f6d10a7f3fed630f7f8c5ab21da" Jan 26 12:03:57 crc kubenswrapper[4867]: E0126 12:03:57.029946 4867 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"54a9b403e8d6ad973cefd5f1e52119536ccc6f6d10a7f3fed630f7f8c5ab21da\": container with ID starting with 54a9b403e8d6ad973cefd5f1e52119536ccc6f6d10a7f3fed630f7f8c5ab21da not found: ID does not exist" containerID="54a9b403e8d6ad973cefd5f1e52119536ccc6f6d10a7f3fed630f7f8c5ab21da" Jan 26 12:03:57 crc kubenswrapper[4867]: I0126 12:03:57.030000 4867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"54a9b403e8d6ad973cefd5f1e52119536ccc6f6d10a7f3fed630f7f8c5ab21da"} err="failed to get container status \"54a9b403e8d6ad973cefd5f1e52119536ccc6f6d10a7f3fed630f7f8c5ab21da\": rpc error: code = NotFound desc = could not find container \"54a9b403e8d6ad973cefd5f1e52119536ccc6f6d10a7f3fed630f7f8c5ab21da\": container with ID starting with 54a9b403e8d6ad973cefd5f1e52119536ccc6f6d10a7f3fed630f7f8c5ab21da not found: ID does not exist" Jan 26 12:03:58 crc kubenswrapper[4867]: I0126 12:03:58.580748 4867 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd819386-7af9-4fe3-b59d-b70bfa3cfac3" path="/var/lib/kubelet/pods/bd819386-7af9-4fe3-b59d-b70bfa3cfac3/volumes" Jan 26 12:04:24 crc kubenswrapper[4867]: I0126 12:04:24.523060 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-8d5pj"] Jan 26 12:04:24 crc kubenswrapper[4867]: E0126 12:04:24.524086 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="15505f79-e3ef-4aa2-8f0d-6d6c4b097fc7" containerName="extract-utilities" Jan 26 12:04:24 crc kubenswrapper[4867]: I0126 12:04:24.524102 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="15505f79-e3ef-4aa2-8f0d-6d6c4b097fc7" containerName="extract-utilities" Jan 26 12:04:24 crc kubenswrapper[4867]: E0126 12:04:24.524127 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bd819386-7af9-4fe3-b59d-b70bfa3cfac3" containerName="gather" Jan 26 12:04:24 crc kubenswrapper[4867]: I0126 12:04:24.524135 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="bd819386-7af9-4fe3-b59d-b70bfa3cfac3" containerName="gather" Jan 26 12:04:24 crc kubenswrapper[4867]: E0126 12:04:24.524148 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5a269552-b711-4f3f-85c7-3e603612cd21" containerName="keystone-cron" Jan 26 12:04:24 crc kubenswrapper[4867]: I0126 12:04:24.524156 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="5a269552-b711-4f3f-85c7-3e603612cd21" containerName="keystone-cron" Jan 26 12:04:24 crc kubenswrapper[4867]: E0126 12:04:24.524179 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="15505f79-e3ef-4aa2-8f0d-6d6c4b097fc7" containerName="extract-content" Jan 26 12:04:24 crc kubenswrapper[4867]: I0126 12:04:24.524187 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="15505f79-e3ef-4aa2-8f0d-6d6c4b097fc7" containerName="extract-content" Jan 26 12:04:24 crc kubenswrapper[4867]: E0126 12:04:24.524199 4867 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="15505f79-e3ef-4aa2-8f0d-6d6c4b097fc7" containerName="registry-server" Jan 26 12:04:24 crc kubenswrapper[4867]: I0126 12:04:24.524206 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="15505f79-e3ef-4aa2-8f0d-6d6c4b097fc7" containerName="registry-server" Jan 26 12:04:24 crc kubenswrapper[4867]: E0126 12:04:24.524241 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bd819386-7af9-4fe3-b59d-b70bfa3cfac3" containerName="copy" Jan 26 12:04:24 crc kubenswrapper[4867]: I0126 12:04:24.524249 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="bd819386-7af9-4fe3-b59d-b70bfa3cfac3" containerName="copy" Jan 26 12:04:24 crc kubenswrapper[4867]: I0126 12:04:24.524471 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="15505f79-e3ef-4aa2-8f0d-6d6c4b097fc7" containerName="registry-server" Jan 26 12:04:24 crc kubenswrapper[4867]: I0126 12:04:24.524488 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="bd819386-7af9-4fe3-b59d-b70bfa3cfac3" containerName="copy" Jan 26 12:04:24 crc kubenswrapper[4867]: I0126 12:04:24.524497 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="bd819386-7af9-4fe3-b59d-b70bfa3cfac3" containerName="gather" Jan 26 12:04:24 crc kubenswrapper[4867]: I0126 12:04:24.524512 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="5a269552-b711-4f3f-85c7-3e603612cd21" containerName="keystone-cron" Jan 26 12:04:24 crc kubenswrapper[4867]: I0126 12:04:24.526105 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8d5pj" Jan 26 12:04:24 crc kubenswrapper[4867]: I0126 12:04:24.581441 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-8d5pj"] Jan 26 12:04:24 crc kubenswrapper[4867]: I0126 12:04:24.662122 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9d985058-99d0-46b5-9f80-7dfd388a8cf2-catalog-content\") pod \"redhat-marketplace-8d5pj\" (UID: \"9d985058-99d0-46b5-9f80-7dfd388a8cf2\") " pod="openshift-marketplace/redhat-marketplace-8d5pj" Jan 26 12:04:24 crc kubenswrapper[4867]: I0126 12:04:24.662588 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9d985058-99d0-46b5-9f80-7dfd388a8cf2-utilities\") pod \"redhat-marketplace-8d5pj\" (UID: \"9d985058-99d0-46b5-9f80-7dfd388a8cf2\") " pod="openshift-marketplace/redhat-marketplace-8d5pj" Jan 26 12:04:24 crc kubenswrapper[4867]: I0126 12:04:24.664085 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vwlt5\" (UniqueName: \"kubernetes.io/projected/9d985058-99d0-46b5-9f80-7dfd388a8cf2-kube-api-access-vwlt5\") pod \"redhat-marketplace-8d5pj\" (UID: \"9d985058-99d0-46b5-9f80-7dfd388a8cf2\") " pod="openshift-marketplace/redhat-marketplace-8d5pj" Jan 26 12:04:24 crc kubenswrapper[4867]: I0126 12:04:24.766238 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vwlt5\" (UniqueName: \"kubernetes.io/projected/9d985058-99d0-46b5-9f80-7dfd388a8cf2-kube-api-access-vwlt5\") pod \"redhat-marketplace-8d5pj\" (UID: \"9d985058-99d0-46b5-9f80-7dfd388a8cf2\") " pod="openshift-marketplace/redhat-marketplace-8d5pj" Jan 26 12:04:24 crc kubenswrapper[4867]: I0126 12:04:24.766349 4867 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9d985058-99d0-46b5-9f80-7dfd388a8cf2-catalog-content\") pod \"redhat-marketplace-8d5pj\" (UID: \"9d985058-99d0-46b5-9f80-7dfd388a8cf2\") " pod="openshift-marketplace/redhat-marketplace-8d5pj" Jan 26 12:04:24 crc kubenswrapper[4867]: I0126 12:04:24.766369 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9d985058-99d0-46b5-9f80-7dfd388a8cf2-utilities\") pod \"redhat-marketplace-8d5pj\" (UID: \"9d985058-99d0-46b5-9f80-7dfd388a8cf2\") " pod="openshift-marketplace/redhat-marketplace-8d5pj" Jan 26 12:04:24 crc kubenswrapper[4867]: I0126 12:04:24.766919 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9d985058-99d0-46b5-9f80-7dfd388a8cf2-utilities\") pod \"redhat-marketplace-8d5pj\" (UID: \"9d985058-99d0-46b5-9f80-7dfd388a8cf2\") " pod="openshift-marketplace/redhat-marketplace-8d5pj" Jan 26 12:04:24 crc kubenswrapper[4867]: I0126 12:04:24.767186 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9d985058-99d0-46b5-9f80-7dfd388a8cf2-catalog-content\") pod \"redhat-marketplace-8d5pj\" (UID: \"9d985058-99d0-46b5-9f80-7dfd388a8cf2\") " pod="openshift-marketplace/redhat-marketplace-8d5pj" Jan 26 12:04:24 crc kubenswrapper[4867]: I0126 12:04:24.790116 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vwlt5\" (UniqueName: \"kubernetes.io/projected/9d985058-99d0-46b5-9f80-7dfd388a8cf2-kube-api-access-vwlt5\") pod \"redhat-marketplace-8d5pj\" (UID: \"9d985058-99d0-46b5-9f80-7dfd388a8cf2\") " pod="openshift-marketplace/redhat-marketplace-8d5pj" Jan 26 12:04:24 crc kubenswrapper[4867]: I0126 12:04:24.888991 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8d5pj" Jan 26 12:04:25 crc kubenswrapper[4867]: I0126 12:04:25.447718 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-8d5pj"] Jan 26 12:04:26 crc kubenswrapper[4867]: I0126 12:04:26.166020 4867 generic.go:334] "Generic (PLEG): container finished" podID="9d985058-99d0-46b5-9f80-7dfd388a8cf2" containerID="d53d76d6e6fdab84fcbec0a72960a921b7537484715cd5b716a2be2f2424fad6" exitCode=0 Jan 26 12:04:26 crc kubenswrapper[4867]: I0126 12:04:26.166414 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8d5pj" event={"ID":"9d985058-99d0-46b5-9f80-7dfd388a8cf2","Type":"ContainerDied","Data":"d53d76d6e6fdab84fcbec0a72960a921b7537484715cd5b716a2be2f2424fad6"} Jan 26 12:04:26 crc kubenswrapper[4867]: I0126 12:04:26.166450 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8d5pj" event={"ID":"9d985058-99d0-46b5-9f80-7dfd388a8cf2","Type":"ContainerStarted","Data":"98a7a96559760020e57d2cc8dbf45a7e381373ca3a3d5b9918b8368dd01f9395"} Jan 26 12:04:28 crc kubenswrapper[4867]: I0126 12:04:28.183320 4867 generic.go:334] "Generic (PLEG): container finished" podID="9d985058-99d0-46b5-9f80-7dfd388a8cf2" containerID="2cda11f21aea6eed5a193f7088c7df66ad628f91951c6a28b794e93be44e77c6" exitCode=0 Jan 26 12:04:28 crc kubenswrapper[4867]: I0126 12:04:28.183497 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8d5pj" event={"ID":"9d985058-99d0-46b5-9f80-7dfd388a8cf2","Type":"ContainerDied","Data":"2cda11f21aea6eed5a193f7088c7df66ad628f91951c6a28b794e93be44e77c6"} Jan 26 12:04:29 crc kubenswrapper[4867]: I0126 12:04:29.193292 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8d5pj" 
event={"ID":"9d985058-99d0-46b5-9f80-7dfd388a8cf2","Type":"ContainerStarted","Data":"eda1b8294bdab0978eef1a48dcdbafbd2848fe16e3a519d03ae60b9f87178657"} Jan 26 12:04:29 crc kubenswrapper[4867]: I0126 12:04:29.211417 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-8d5pj" podStartSLOduration=2.760810454 podStartE2EDuration="5.211401501s" podCreationTimestamp="2026-01-26 12:04:24 +0000 UTC" firstStartedPulling="2026-01-26 12:04:26.167935765 +0000 UTC m=+2815.866510675" lastFinishedPulling="2026-01-26 12:04:28.618526812 +0000 UTC m=+2818.317101722" observedRunningTime="2026-01-26 12:04:29.211266408 +0000 UTC m=+2818.909841328" watchObservedRunningTime="2026-01-26 12:04:29.211401501 +0000 UTC m=+2818.909976411" Jan 26 12:04:34 crc kubenswrapper[4867]: I0126 12:04:34.889262 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-8d5pj" Jan 26 12:04:34 crc kubenswrapper[4867]: I0126 12:04:34.890267 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-8d5pj" Jan 26 12:04:34 crc kubenswrapper[4867]: I0126 12:04:34.954508 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-8d5pj" Jan 26 12:04:35 crc kubenswrapper[4867]: I0126 12:04:35.289247 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-8d5pj" Jan 26 12:04:35 crc kubenswrapper[4867]: I0126 12:04:35.423779 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-8d5pj"] Jan 26 12:04:37 crc kubenswrapper[4867]: I0126 12:04:37.262459 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-8d5pj" podUID="9d985058-99d0-46b5-9f80-7dfd388a8cf2" containerName="registry-server" 
containerID="cri-o://eda1b8294bdab0978eef1a48dcdbafbd2848fe16e3a519d03ae60b9f87178657" gracePeriod=2 Jan 26 12:04:37 crc kubenswrapper[4867]: I0126 12:04:37.708023 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8d5pj" Jan 26 12:04:37 crc kubenswrapper[4867]: I0126 12:04:37.850704 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9d985058-99d0-46b5-9f80-7dfd388a8cf2-catalog-content\") pod \"9d985058-99d0-46b5-9f80-7dfd388a8cf2\" (UID: \"9d985058-99d0-46b5-9f80-7dfd388a8cf2\") " Jan 26 12:04:37 crc kubenswrapper[4867]: I0126 12:04:37.850747 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vwlt5\" (UniqueName: \"kubernetes.io/projected/9d985058-99d0-46b5-9f80-7dfd388a8cf2-kube-api-access-vwlt5\") pod \"9d985058-99d0-46b5-9f80-7dfd388a8cf2\" (UID: \"9d985058-99d0-46b5-9f80-7dfd388a8cf2\") " Jan 26 12:04:37 crc kubenswrapper[4867]: I0126 12:04:37.850804 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9d985058-99d0-46b5-9f80-7dfd388a8cf2-utilities\") pod \"9d985058-99d0-46b5-9f80-7dfd388a8cf2\" (UID: \"9d985058-99d0-46b5-9f80-7dfd388a8cf2\") " Jan 26 12:04:37 crc kubenswrapper[4867]: I0126 12:04:37.851781 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9d985058-99d0-46b5-9f80-7dfd388a8cf2-utilities" (OuterVolumeSpecName: "utilities") pod "9d985058-99d0-46b5-9f80-7dfd388a8cf2" (UID: "9d985058-99d0-46b5-9f80-7dfd388a8cf2"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 12:04:37 crc kubenswrapper[4867]: I0126 12:04:37.856388 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d985058-99d0-46b5-9f80-7dfd388a8cf2-kube-api-access-vwlt5" (OuterVolumeSpecName: "kube-api-access-vwlt5") pod "9d985058-99d0-46b5-9f80-7dfd388a8cf2" (UID: "9d985058-99d0-46b5-9f80-7dfd388a8cf2"). InnerVolumeSpecName "kube-api-access-vwlt5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 12:04:37 crc kubenswrapper[4867]: I0126 12:04:37.873202 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9d985058-99d0-46b5-9f80-7dfd388a8cf2-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "9d985058-99d0-46b5-9f80-7dfd388a8cf2" (UID: "9d985058-99d0-46b5-9f80-7dfd388a8cf2"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 12:04:37 crc kubenswrapper[4867]: I0126 12:04:37.952553 4867 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9d985058-99d0-46b5-9f80-7dfd388a8cf2-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 12:04:37 crc kubenswrapper[4867]: I0126 12:04:37.952591 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vwlt5\" (UniqueName: \"kubernetes.io/projected/9d985058-99d0-46b5-9f80-7dfd388a8cf2-kube-api-access-vwlt5\") on node \"crc\" DevicePath \"\"" Jan 26 12:04:37 crc kubenswrapper[4867]: I0126 12:04:37.952603 4867 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9d985058-99d0-46b5-9f80-7dfd388a8cf2-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 12:04:38 crc kubenswrapper[4867]: I0126 12:04:38.272257 4867 generic.go:334] "Generic (PLEG): container finished" podID="9d985058-99d0-46b5-9f80-7dfd388a8cf2" 
containerID="eda1b8294bdab0978eef1a48dcdbafbd2848fe16e3a519d03ae60b9f87178657" exitCode=0 Jan 26 12:04:38 crc kubenswrapper[4867]: I0126 12:04:38.272326 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8d5pj" Jan 26 12:04:38 crc kubenswrapper[4867]: I0126 12:04:38.272349 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8d5pj" event={"ID":"9d985058-99d0-46b5-9f80-7dfd388a8cf2","Type":"ContainerDied","Data":"eda1b8294bdab0978eef1a48dcdbafbd2848fe16e3a519d03ae60b9f87178657"} Jan 26 12:04:38 crc kubenswrapper[4867]: I0126 12:04:38.272749 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8d5pj" event={"ID":"9d985058-99d0-46b5-9f80-7dfd388a8cf2","Type":"ContainerDied","Data":"98a7a96559760020e57d2cc8dbf45a7e381373ca3a3d5b9918b8368dd01f9395"} Jan 26 12:04:38 crc kubenswrapper[4867]: I0126 12:04:38.272776 4867 scope.go:117] "RemoveContainer" containerID="eda1b8294bdab0978eef1a48dcdbafbd2848fe16e3a519d03ae60b9f87178657" Jan 26 12:04:38 crc kubenswrapper[4867]: I0126 12:04:38.296563 4867 scope.go:117] "RemoveContainer" containerID="2cda11f21aea6eed5a193f7088c7df66ad628f91951c6a28b794e93be44e77c6" Jan 26 12:04:38 crc kubenswrapper[4867]: I0126 12:04:38.317154 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-8d5pj"] Jan 26 12:04:38 crc kubenswrapper[4867]: I0126 12:04:38.327002 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-8d5pj"] Jan 26 12:04:38 crc kubenswrapper[4867]: I0126 12:04:38.335770 4867 scope.go:117] "RemoveContainer" containerID="d53d76d6e6fdab84fcbec0a72960a921b7537484715cd5b716a2be2f2424fad6" Jan 26 12:04:38 crc kubenswrapper[4867]: I0126 12:04:38.369383 4867 scope.go:117] "RemoveContainer" containerID="eda1b8294bdab0978eef1a48dcdbafbd2848fe16e3a519d03ae60b9f87178657" Jan 26 
12:04:38 crc kubenswrapper[4867]: E0126 12:04:38.369947 4867 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"eda1b8294bdab0978eef1a48dcdbafbd2848fe16e3a519d03ae60b9f87178657\": container with ID starting with eda1b8294bdab0978eef1a48dcdbafbd2848fe16e3a519d03ae60b9f87178657 not found: ID does not exist" containerID="eda1b8294bdab0978eef1a48dcdbafbd2848fe16e3a519d03ae60b9f87178657" Jan 26 12:04:38 crc kubenswrapper[4867]: I0126 12:04:38.369983 4867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"eda1b8294bdab0978eef1a48dcdbafbd2848fe16e3a519d03ae60b9f87178657"} err="failed to get container status \"eda1b8294bdab0978eef1a48dcdbafbd2848fe16e3a519d03ae60b9f87178657\": rpc error: code = NotFound desc = could not find container \"eda1b8294bdab0978eef1a48dcdbafbd2848fe16e3a519d03ae60b9f87178657\": container with ID starting with eda1b8294bdab0978eef1a48dcdbafbd2848fe16e3a519d03ae60b9f87178657 not found: ID does not exist" Jan 26 12:04:38 crc kubenswrapper[4867]: I0126 12:04:38.370004 4867 scope.go:117] "RemoveContainer" containerID="2cda11f21aea6eed5a193f7088c7df66ad628f91951c6a28b794e93be44e77c6" Jan 26 12:04:38 crc kubenswrapper[4867]: E0126 12:04:38.371417 4867 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2cda11f21aea6eed5a193f7088c7df66ad628f91951c6a28b794e93be44e77c6\": container with ID starting with 2cda11f21aea6eed5a193f7088c7df66ad628f91951c6a28b794e93be44e77c6 not found: ID does not exist" containerID="2cda11f21aea6eed5a193f7088c7df66ad628f91951c6a28b794e93be44e77c6" Jan 26 12:04:38 crc kubenswrapper[4867]: I0126 12:04:38.371460 4867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2cda11f21aea6eed5a193f7088c7df66ad628f91951c6a28b794e93be44e77c6"} err="failed to get container status 
\"2cda11f21aea6eed5a193f7088c7df66ad628f91951c6a28b794e93be44e77c6\": rpc error: code = NotFound desc = could not find container \"2cda11f21aea6eed5a193f7088c7df66ad628f91951c6a28b794e93be44e77c6\": container with ID starting with 2cda11f21aea6eed5a193f7088c7df66ad628f91951c6a28b794e93be44e77c6 not found: ID does not exist" Jan 26 12:04:38 crc kubenswrapper[4867]: I0126 12:04:38.371489 4867 scope.go:117] "RemoveContainer" containerID="d53d76d6e6fdab84fcbec0a72960a921b7537484715cd5b716a2be2f2424fad6" Jan 26 12:04:38 crc kubenswrapper[4867]: E0126 12:04:38.371887 4867 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d53d76d6e6fdab84fcbec0a72960a921b7537484715cd5b716a2be2f2424fad6\": container with ID starting with d53d76d6e6fdab84fcbec0a72960a921b7537484715cd5b716a2be2f2424fad6 not found: ID does not exist" containerID="d53d76d6e6fdab84fcbec0a72960a921b7537484715cd5b716a2be2f2424fad6" Jan 26 12:04:38 crc kubenswrapper[4867]: I0126 12:04:38.371956 4867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d53d76d6e6fdab84fcbec0a72960a921b7537484715cd5b716a2be2f2424fad6"} err="failed to get container status \"d53d76d6e6fdab84fcbec0a72960a921b7537484715cd5b716a2be2f2424fad6\": rpc error: code = NotFound desc = could not find container \"d53d76d6e6fdab84fcbec0a72960a921b7537484715cd5b716a2be2f2424fad6\": container with ID starting with d53d76d6e6fdab84fcbec0a72960a921b7537484715cd5b716a2be2f2424fad6 not found: ID does not exist" Jan 26 12:04:38 crc kubenswrapper[4867]: I0126 12:04:38.574869 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d985058-99d0-46b5-9f80-7dfd388a8cf2" path="/var/lib/kubelet/pods/9d985058-99d0-46b5-9f80-7dfd388a8cf2/volumes" Jan 26 12:05:04 crc kubenswrapper[4867]: I0126 12:05:04.910511 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-nln79"] Jan 26 12:05:04 
crc kubenswrapper[4867]: E0126 12:05:04.911589 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9d985058-99d0-46b5-9f80-7dfd388a8cf2" containerName="extract-content" Jan 26 12:05:04 crc kubenswrapper[4867]: I0126 12:05:04.911606 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="9d985058-99d0-46b5-9f80-7dfd388a8cf2" containerName="extract-content" Jan 26 12:05:04 crc kubenswrapper[4867]: E0126 12:05:04.911624 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9d985058-99d0-46b5-9f80-7dfd388a8cf2" containerName="extract-utilities" Jan 26 12:05:04 crc kubenswrapper[4867]: I0126 12:05:04.911632 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="9d985058-99d0-46b5-9f80-7dfd388a8cf2" containerName="extract-utilities" Jan 26 12:05:04 crc kubenswrapper[4867]: E0126 12:05:04.911660 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9d985058-99d0-46b5-9f80-7dfd388a8cf2" containerName="registry-server" Jan 26 12:05:04 crc kubenswrapper[4867]: I0126 12:05:04.911669 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="9d985058-99d0-46b5-9f80-7dfd388a8cf2" containerName="registry-server" Jan 26 12:05:04 crc kubenswrapper[4867]: I0126 12:05:04.911907 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="9d985058-99d0-46b5-9f80-7dfd388a8cf2" containerName="registry-server" Jan 26 12:05:04 crc kubenswrapper[4867]: I0126 12:05:04.913687 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-nln79" Jan 26 12:05:04 crc kubenswrapper[4867]: I0126 12:05:04.933377 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-nln79"] Jan 26 12:05:05 crc kubenswrapper[4867]: I0126 12:05:05.036465 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e9e0e2a0-c0d9-4b1d-9cb3-cfcc9778232a-catalog-content\") pod \"community-operators-nln79\" (UID: \"e9e0e2a0-c0d9-4b1d-9cb3-cfcc9778232a\") " pod="openshift-marketplace/community-operators-nln79" Jan 26 12:05:05 crc kubenswrapper[4867]: I0126 12:05:05.036970 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4dwl4\" (UniqueName: \"kubernetes.io/projected/e9e0e2a0-c0d9-4b1d-9cb3-cfcc9778232a-kube-api-access-4dwl4\") pod \"community-operators-nln79\" (UID: \"e9e0e2a0-c0d9-4b1d-9cb3-cfcc9778232a\") " pod="openshift-marketplace/community-operators-nln79" Jan 26 12:05:05 crc kubenswrapper[4867]: I0126 12:05:05.037061 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e9e0e2a0-c0d9-4b1d-9cb3-cfcc9778232a-utilities\") pod \"community-operators-nln79\" (UID: \"e9e0e2a0-c0d9-4b1d-9cb3-cfcc9778232a\") " pod="openshift-marketplace/community-operators-nln79" Jan 26 12:05:05 crc kubenswrapper[4867]: I0126 12:05:05.102496 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-hk4v8"] Jan 26 12:05:05 crc kubenswrapper[4867]: I0126 12:05:05.104526 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-hk4v8" Jan 26 12:05:05 crc kubenswrapper[4867]: I0126 12:05:05.118900 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-hk4v8"] Jan 26 12:05:05 crc kubenswrapper[4867]: I0126 12:05:05.138697 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e9e0e2a0-c0d9-4b1d-9cb3-cfcc9778232a-utilities\") pod \"community-operators-nln79\" (UID: \"e9e0e2a0-c0d9-4b1d-9cb3-cfcc9778232a\") " pod="openshift-marketplace/community-operators-nln79" Jan 26 12:05:05 crc kubenswrapper[4867]: I0126 12:05:05.138779 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e9e0e2a0-c0d9-4b1d-9cb3-cfcc9778232a-catalog-content\") pod \"community-operators-nln79\" (UID: \"e9e0e2a0-c0d9-4b1d-9cb3-cfcc9778232a\") " pod="openshift-marketplace/community-operators-nln79" Jan 26 12:05:05 crc kubenswrapper[4867]: I0126 12:05:05.138914 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4dwl4\" (UniqueName: \"kubernetes.io/projected/e9e0e2a0-c0d9-4b1d-9cb3-cfcc9778232a-kube-api-access-4dwl4\") pod \"community-operators-nln79\" (UID: \"e9e0e2a0-c0d9-4b1d-9cb3-cfcc9778232a\") " pod="openshift-marketplace/community-operators-nln79" Jan 26 12:05:05 crc kubenswrapper[4867]: I0126 12:05:05.139340 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e9e0e2a0-c0d9-4b1d-9cb3-cfcc9778232a-catalog-content\") pod \"community-operators-nln79\" (UID: \"e9e0e2a0-c0d9-4b1d-9cb3-cfcc9778232a\") " pod="openshift-marketplace/community-operators-nln79" Jan 26 12:05:05 crc kubenswrapper[4867]: I0126 12:05:05.139433 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/e9e0e2a0-c0d9-4b1d-9cb3-cfcc9778232a-utilities\") pod \"community-operators-nln79\" (UID: \"e9e0e2a0-c0d9-4b1d-9cb3-cfcc9778232a\") " pod="openshift-marketplace/community-operators-nln79" Jan 26 12:05:05 crc kubenswrapper[4867]: I0126 12:05:05.169321 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4dwl4\" (UniqueName: \"kubernetes.io/projected/e9e0e2a0-c0d9-4b1d-9cb3-cfcc9778232a-kube-api-access-4dwl4\") pod \"community-operators-nln79\" (UID: \"e9e0e2a0-c0d9-4b1d-9cb3-cfcc9778232a\") " pod="openshift-marketplace/community-operators-nln79" Jan 26 12:05:05 crc kubenswrapper[4867]: I0126 12:05:05.240732 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nsqgc\" (UniqueName: \"kubernetes.io/projected/42b52b06-e34e-4a25-b45f-d0f8ad36c257-kube-api-access-nsqgc\") pod \"certified-operators-hk4v8\" (UID: \"42b52b06-e34e-4a25-b45f-d0f8ad36c257\") " pod="openshift-marketplace/certified-operators-hk4v8" Jan 26 12:05:05 crc kubenswrapper[4867]: I0126 12:05:05.240793 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/42b52b06-e34e-4a25-b45f-d0f8ad36c257-catalog-content\") pod \"certified-operators-hk4v8\" (UID: \"42b52b06-e34e-4a25-b45f-d0f8ad36c257\") " pod="openshift-marketplace/certified-operators-hk4v8" Jan 26 12:05:05 crc kubenswrapper[4867]: I0126 12:05:05.241567 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-nln79" Jan 26 12:05:05 crc kubenswrapper[4867]: I0126 12:05:05.241648 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/42b52b06-e34e-4a25-b45f-d0f8ad36c257-utilities\") pod \"certified-operators-hk4v8\" (UID: \"42b52b06-e34e-4a25-b45f-d0f8ad36c257\") " pod="openshift-marketplace/certified-operators-hk4v8" Jan 26 12:05:05 crc kubenswrapper[4867]: I0126 12:05:05.343811 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nsqgc\" (UniqueName: \"kubernetes.io/projected/42b52b06-e34e-4a25-b45f-d0f8ad36c257-kube-api-access-nsqgc\") pod \"certified-operators-hk4v8\" (UID: \"42b52b06-e34e-4a25-b45f-d0f8ad36c257\") " pod="openshift-marketplace/certified-operators-hk4v8" Jan 26 12:05:05 crc kubenswrapper[4867]: I0126 12:05:05.343864 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/42b52b06-e34e-4a25-b45f-d0f8ad36c257-catalog-content\") pod \"certified-operators-hk4v8\" (UID: \"42b52b06-e34e-4a25-b45f-d0f8ad36c257\") " pod="openshift-marketplace/certified-operators-hk4v8" Jan 26 12:05:05 crc kubenswrapper[4867]: I0126 12:05:05.343891 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/42b52b06-e34e-4a25-b45f-d0f8ad36c257-utilities\") pod \"certified-operators-hk4v8\" (UID: \"42b52b06-e34e-4a25-b45f-d0f8ad36c257\") " pod="openshift-marketplace/certified-operators-hk4v8" Jan 26 12:05:05 crc kubenswrapper[4867]: I0126 12:05:05.344365 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/42b52b06-e34e-4a25-b45f-d0f8ad36c257-utilities\") pod \"certified-operators-hk4v8\" (UID: \"42b52b06-e34e-4a25-b45f-d0f8ad36c257\") " 
pod="openshift-marketplace/certified-operators-hk4v8" Jan 26 12:05:05 crc kubenswrapper[4867]: I0126 12:05:05.344437 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/42b52b06-e34e-4a25-b45f-d0f8ad36c257-catalog-content\") pod \"certified-operators-hk4v8\" (UID: \"42b52b06-e34e-4a25-b45f-d0f8ad36c257\") " pod="openshift-marketplace/certified-operators-hk4v8" Jan 26 12:05:05 crc kubenswrapper[4867]: I0126 12:05:05.364325 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nsqgc\" (UniqueName: \"kubernetes.io/projected/42b52b06-e34e-4a25-b45f-d0f8ad36c257-kube-api-access-nsqgc\") pod \"certified-operators-hk4v8\" (UID: \"42b52b06-e34e-4a25-b45f-d0f8ad36c257\") " pod="openshift-marketplace/certified-operators-hk4v8" Jan 26 12:05:05 crc kubenswrapper[4867]: I0126 12:05:05.422329 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-hk4v8" Jan 26 12:05:05 crc kubenswrapper[4867]: I0126 12:05:05.862520 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-nln79"] Jan 26 12:05:05 crc kubenswrapper[4867]: I0126 12:05:05.991747 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-hk4v8"] Jan 26 12:05:05 crc kubenswrapper[4867]: W0126 12:05:05.993070 4867 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod42b52b06_e34e_4a25_b45f_d0f8ad36c257.slice/crio-d3a832f97fd20f5e316eb0e4ee7adc69f5f54cb0e46cedf89a65baed59a10fbd WatchSource:0}: Error finding container d3a832f97fd20f5e316eb0e4ee7adc69f5f54cb0e46cedf89a65baed59a10fbd: Status 404 returned error can't find the container with id d3a832f97fd20f5e316eb0e4ee7adc69f5f54cb0e46cedf89a65baed59a10fbd Jan 26 12:05:06 crc kubenswrapper[4867]: I0126 12:05:06.518362 4867 
generic.go:334] "Generic (PLEG): container finished" podID="e9e0e2a0-c0d9-4b1d-9cb3-cfcc9778232a" containerID="63fff923f0e006246219d05ae5bca4c1d3fd9ec4e8f9130eafde1fe7ec58acea" exitCode=0 Jan 26 12:05:06 crc kubenswrapper[4867]: I0126 12:05:06.518457 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nln79" event={"ID":"e9e0e2a0-c0d9-4b1d-9cb3-cfcc9778232a","Type":"ContainerDied","Data":"63fff923f0e006246219d05ae5bca4c1d3fd9ec4e8f9130eafde1fe7ec58acea"} Jan 26 12:05:06 crc kubenswrapper[4867]: I0126 12:05:06.518491 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nln79" event={"ID":"e9e0e2a0-c0d9-4b1d-9cb3-cfcc9778232a","Type":"ContainerStarted","Data":"df1d831a5b9ecb60de4785eecf3dde7669909be25feffb645dfbcc0a4b093514"} Jan 26 12:05:06 crc kubenswrapper[4867]: I0126 12:05:06.519908 4867 generic.go:334] "Generic (PLEG): container finished" podID="42b52b06-e34e-4a25-b45f-d0f8ad36c257" containerID="24ab105534763f0b2ad3b7855656314eefc31fcd870a537bde9f66b1738d91a0" exitCode=0 Jan 26 12:05:06 crc kubenswrapper[4867]: I0126 12:05:06.519930 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hk4v8" event={"ID":"42b52b06-e34e-4a25-b45f-d0f8ad36c257","Type":"ContainerDied","Data":"24ab105534763f0b2ad3b7855656314eefc31fcd870a537bde9f66b1738d91a0"} Jan 26 12:05:06 crc kubenswrapper[4867]: I0126 12:05:06.519944 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hk4v8" event={"ID":"42b52b06-e34e-4a25-b45f-d0f8ad36c257","Type":"ContainerStarted","Data":"d3a832f97fd20f5e316eb0e4ee7adc69f5f54cb0e46cedf89a65baed59a10fbd"} Jan 26 12:05:08 crc kubenswrapper[4867]: I0126 12:05:08.537515 4867 generic.go:334] "Generic (PLEG): container finished" podID="e9e0e2a0-c0d9-4b1d-9cb3-cfcc9778232a" containerID="a2baf4159d509ec081aed0e2b956e6dd8ae53be74b81c7eb277010b9190b6f82" exitCode=0 Jan 26 
12:05:08 crc kubenswrapper[4867]: I0126 12:05:08.537565 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nln79" event={"ID":"e9e0e2a0-c0d9-4b1d-9cb3-cfcc9778232a","Type":"ContainerDied","Data":"a2baf4159d509ec081aed0e2b956e6dd8ae53be74b81c7eb277010b9190b6f82"} Jan 26 12:05:08 crc kubenswrapper[4867]: I0126 12:05:08.542739 4867 generic.go:334] "Generic (PLEG): container finished" podID="42b52b06-e34e-4a25-b45f-d0f8ad36c257" containerID="72662f0a8ea194fcd51f12bc1af3b9d3d0662f85cfeb80782af6bc78bae433ae" exitCode=0 Jan 26 12:05:08 crc kubenswrapper[4867]: I0126 12:05:08.542790 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hk4v8" event={"ID":"42b52b06-e34e-4a25-b45f-d0f8ad36c257","Type":"ContainerDied","Data":"72662f0a8ea194fcd51f12bc1af3b9d3d0662f85cfeb80782af6bc78bae433ae"} Jan 26 12:05:10 crc kubenswrapper[4867]: I0126 12:05:10.561115 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nln79" event={"ID":"e9e0e2a0-c0d9-4b1d-9cb3-cfcc9778232a","Type":"ContainerStarted","Data":"bc855d17d5eabc30745b19a16df3db01c0360784366e637769df79077f5dc1cc"} Jan 26 12:05:10 crc kubenswrapper[4867]: I0126 12:05:10.563459 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hk4v8" event={"ID":"42b52b06-e34e-4a25-b45f-d0f8ad36c257","Type":"ContainerStarted","Data":"01ba927993639f94ad79655fce174115a78069d61f0be0f71d9f3848ea2f7734"} Jan 26 12:05:10 crc kubenswrapper[4867]: I0126 12:05:10.590272 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-nln79" podStartSLOduration=3.502345801 podStartE2EDuration="6.590251998s" podCreationTimestamp="2026-01-26 12:05:04 +0000 UTC" firstStartedPulling="2026-01-26 12:05:06.522900583 +0000 UTC m=+2856.221475493" lastFinishedPulling="2026-01-26 12:05:09.61080678 +0000 UTC 
m=+2859.309381690" observedRunningTime="2026-01-26 12:05:10.578981911 +0000 UTC m=+2860.277556821" watchObservedRunningTime="2026-01-26 12:05:10.590251998 +0000 UTC m=+2860.288826908" Jan 26 12:05:10 crc kubenswrapper[4867]: I0126 12:05:10.619112 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-hk4v8" podStartSLOduration=2.542222057 podStartE2EDuration="5.619092744s" podCreationTimestamp="2026-01-26 12:05:05 +0000 UTC" firstStartedPulling="2026-01-26 12:05:06.522159093 +0000 UTC m=+2856.220734003" lastFinishedPulling="2026-01-26 12:05:09.59902978 +0000 UTC m=+2859.297604690" observedRunningTime="2026-01-26 12:05:10.613618804 +0000 UTC m=+2860.312193714" watchObservedRunningTime="2026-01-26 12:05:10.619092744 +0000 UTC m=+2860.317667654" Jan 26 12:05:15 crc kubenswrapper[4867]: I0126 12:05:15.242590 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-nln79" Jan 26 12:05:15 crc kubenswrapper[4867]: I0126 12:05:15.243502 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-nln79" Jan 26 12:05:15 crc kubenswrapper[4867]: I0126 12:05:15.315160 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-nln79" Jan 26 12:05:15 crc kubenswrapper[4867]: I0126 12:05:15.423172 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-hk4v8" Jan 26 12:05:15 crc kubenswrapper[4867]: I0126 12:05:15.423324 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-hk4v8" Jan 26 12:05:15 crc kubenswrapper[4867]: I0126 12:05:15.500449 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-hk4v8" Jan 26 12:05:15 crc kubenswrapper[4867]: 
I0126 12:05:15.653449 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-nln79" Jan 26 12:05:15 crc kubenswrapper[4867]: I0126 12:05:15.656956 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-hk4v8" Jan 26 12:05:16 crc kubenswrapper[4867]: I0126 12:05:16.891933 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-hk4v8"] Jan 26 12:05:17 crc kubenswrapper[4867]: I0126 12:05:17.497307 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-nln79"] Jan 26 12:05:17 crc kubenswrapper[4867]: I0126 12:05:17.626623 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-hk4v8" podUID="42b52b06-e34e-4a25-b45f-d0f8ad36c257" containerName="registry-server" containerID="cri-o://01ba927993639f94ad79655fce174115a78069d61f0be0f71d9f3848ea2f7734" gracePeriod=2 Jan 26 12:05:17 crc kubenswrapper[4867]: I0126 12:05:17.626818 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-nln79" podUID="e9e0e2a0-c0d9-4b1d-9cb3-cfcc9778232a" containerName="registry-server" containerID="cri-o://bc855d17d5eabc30745b19a16df3db01c0360784366e637769df79077f5dc1cc" gracePeriod=2 Jan 26 12:05:19 crc kubenswrapper[4867]: I0126 12:05:19.645822 4867 generic.go:334] "Generic (PLEG): container finished" podID="42b52b06-e34e-4a25-b45f-d0f8ad36c257" containerID="01ba927993639f94ad79655fce174115a78069d61f0be0f71d9f3848ea2f7734" exitCode=0 Jan 26 12:05:19 crc kubenswrapper[4867]: I0126 12:05:19.646115 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hk4v8" 
event={"ID":"42b52b06-e34e-4a25-b45f-d0f8ad36c257","Type":"ContainerDied","Data":"01ba927993639f94ad79655fce174115a78069d61f0be0f71d9f3848ea2f7734"} Jan 26 12:05:19 crc kubenswrapper[4867]: I0126 12:05:19.650772 4867 generic.go:334] "Generic (PLEG): container finished" podID="e9e0e2a0-c0d9-4b1d-9cb3-cfcc9778232a" containerID="bc855d17d5eabc30745b19a16df3db01c0360784366e637769df79077f5dc1cc" exitCode=0 Jan 26 12:05:19 crc kubenswrapper[4867]: I0126 12:05:19.650818 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nln79" event={"ID":"e9e0e2a0-c0d9-4b1d-9cb3-cfcc9778232a","Type":"ContainerDied","Data":"bc855d17d5eabc30745b19a16df3db01c0360784366e637769df79077f5dc1cc"} Jan 26 12:05:20 crc kubenswrapper[4867]: I0126 12:05:20.202255 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-nln79" Jan 26 12:05:20 crc kubenswrapper[4867]: I0126 12:05:20.207295 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-hk4v8" Jan 26 12:05:20 crc kubenswrapper[4867]: I0126 12:05:20.311370 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e9e0e2a0-c0d9-4b1d-9cb3-cfcc9778232a-utilities\") pod \"e9e0e2a0-c0d9-4b1d-9cb3-cfcc9778232a\" (UID: \"e9e0e2a0-c0d9-4b1d-9cb3-cfcc9778232a\") " Jan 26 12:05:20 crc kubenswrapper[4867]: I0126 12:05:20.312351 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e9e0e2a0-c0d9-4b1d-9cb3-cfcc9778232a-utilities" (OuterVolumeSpecName: "utilities") pod "e9e0e2a0-c0d9-4b1d-9cb3-cfcc9778232a" (UID: "e9e0e2a0-c0d9-4b1d-9cb3-cfcc9778232a"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 12:05:20 crc kubenswrapper[4867]: I0126 12:05:20.312548 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nsqgc\" (UniqueName: \"kubernetes.io/projected/42b52b06-e34e-4a25-b45f-d0f8ad36c257-kube-api-access-nsqgc\") pod \"42b52b06-e34e-4a25-b45f-d0f8ad36c257\" (UID: \"42b52b06-e34e-4a25-b45f-d0f8ad36c257\") " Jan 26 12:05:20 crc kubenswrapper[4867]: I0126 12:05:20.313284 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e9e0e2a0-c0d9-4b1d-9cb3-cfcc9778232a-catalog-content\") pod \"e9e0e2a0-c0d9-4b1d-9cb3-cfcc9778232a\" (UID: \"e9e0e2a0-c0d9-4b1d-9cb3-cfcc9778232a\") " Jan 26 12:05:20 crc kubenswrapper[4867]: I0126 12:05:20.313369 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/42b52b06-e34e-4a25-b45f-d0f8ad36c257-utilities\") pod \"42b52b06-e34e-4a25-b45f-d0f8ad36c257\" (UID: \"42b52b06-e34e-4a25-b45f-d0f8ad36c257\") " Jan 26 12:05:20 crc kubenswrapper[4867]: I0126 12:05:20.313758 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4dwl4\" (UniqueName: \"kubernetes.io/projected/e9e0e2a0-c0d9-4b1d-9cb3-cfcc9778232a-kube-api-access-4dwl4\") pod \"e9e0e2a0-c0d9-4b1d-9cb3-cfcc9778232a\" (UID: \"e9e0e2a0-c0d9-4b1d-9cb3-cfcc9778232a\") " Jan 26 12:05:20 crc kubenswrapper[4867]: I0126 12:05:20.313784 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/42b52b06-e34e-4a25-b45f-d0f8ad36c257-catalog-content\") pod \"42b52b06-e34e-4a25-b45f-d0f8ad36c257\" (UID: \"42b52b06-e34e-4a25-b45f-d0f8ad36c257\") " Jan 26 12:05:20 crc kubenswrapper[4867]: I0126 12:05:20.314322 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/42b52b06-e34e-4a25-b45f-d0f8ad36c257-utilities" (OuterVolumeSpecName: "utilities") pod "42b52b06-e34e-4a25-b45f-d0f8ad36c257" (UID: "42b52b06-e34e-4a25-b45f-d0f8ad36c257"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 12:05:20 crc kubenswrapper[4867]: I0126 12:05:20.314675 4867 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e9e0e2a0-c0d9-4b1d-9cb3-cfcc9778232a-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 12:05:20 crc kubenswrapper[4867]: I0126 12:05:20.314699 4867 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/42b52b06-e34e-4a25-b45f-d0f8ad36c257-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 12:05:20 crc kubenswrapper[4867]: I0126 12:05:20.318147 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/42b52b06-e34e-4a25-b45f-d0f8ad36c257-kube-api-access-nsqgc" (OuterVolumeSpecName: "kube-api-access-nsqgc") pod "42b52b06-e34e-4a25-b45f-d0f8ad36c257" (UID: "42b52b06-e34e-4a25-b45f-d0f8ad36c257"). InnerVolumeSpecName "kube-api-access-nsqgc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 12:05:20 crc kubenswrapper[4867]: I0126 12:05:20.322336 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e9e0e2a0-c0d9-4b1d-9cb3-cfcc9778232a-kube-api-access-4dwl4" (OuterVolumeSpecName: "kube-api-access-4dwl4") pod "e9e0e2a0-c0d9-4b1d-9cb3-cfcc9778232a" (UID: "e9e0e2a0-c0d9-4b1d-9cb3-cfcc9778232a"). InnerVolumeSpecName "kube-api-access-4dwl4". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 12:05:20 crc kubenswrapper[4867]: I0126 12:05:20.366168 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/42b52b06-e34e-4a25-b45f-d0f8ad36c257-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "42b52b06-e34e-4a25-b45f-d0f8ad36c257" (UID: "42b52b06-e34e-4a25-b45f-d0f8ad36c257"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 12:05:20 crc kubenswrapper[4867]: I0126 12:05:20.369117 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e9e0e2a0-c0d9-4b1d-9cb3-cfcc9778232a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e9e0e2a0-c0d9-4b1d-9cb3-cfcc9778232a" (UID: "e9e0e2a0-c0d9-4b1d-9cb3-cfcc9778232a"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 12:05:20 crc kubenswrapper[4867]: I0126 12:05:20.416796 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nsqgc\" (UniqueName: \"kubernetes.io/projected/42b52b06-e34e-4a25-b45f-d0f8ad36c257-kube-api-access-nsqgc\") on node \"crc\" DevicePath \"\"" Jan 26 12:05:20 crc kubenswrapper[4867]: I0126 12:05:20.416833 4867 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e9e0e2a0-c0d9-4b1d-9cb3-cfcc9778232a-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 12:05:20 crc kubenswrapper[4867]: I0126 12:05:20.416843 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4dwl4\" (UniqueName: \"kubernetes.io/projected/e9e0e2a0-c0d9-4b1d-9cb3-cfcc9778232a-kube-api-access-4dwl4\") on node \"crc\" DevicePath \"\"" Jan 26 12:05:20 crc kubenswrapper[4867]: I0126 12:05:20.416851 4867 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/42b52b06-e34e-4a25-b45f-d0f8ad36c257-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 12:05:20 crc kubenswrapper[4867]: I0126 12:05:20.659546 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hk4v8" event={"ID":"42b52b06-e34e-4a25-b45f-d0f8ad36c257","Type":"ContainerDied","Data":"d3a832f97fd20f5e316eb0e4ee7adc69f5f54cb0e46cedf89a65baed59a10fbd"} Jan 26 12:05:20 crc kubenswrapper[4867]: I0126 12:05:20.659597 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-hk4v8" Jan 26 12:05:20 crc kubenswrapper[4867]: I0126 12:05:20.659648 4867 scope.go:117] "RemoveContainer" containerID="01ba927993639f94ad79655fce174115a78069d61f0be0f71d9f3848ea2f7734" Jan 26 12:05:20 crc kubenswrapper[4867]: I0126 12:05:20.664466 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nln79" event={"ID":"e9e0e2a0-c0d9-4b1d-9cb3-cfcc9778232a","Type":"ContainerDied","Data":"df1d831a5b9ecb60de4785eecf3dde7669909be25feffb645dfbcc0a4b093514"} Jan 26 12:05:20 crc kubenswrapper[4867]: I0126 12:05:20.664513 4867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-nln79" Jan 26 12:05:20 crc kubenswrapper[4867]: I0126 12:05:20.691288 4867 scope.go:117] "RemoveContainer" containerID="72662f0a8ea194fcd51f12bc1af3b9d3d0662f85cfeb80782af6bc78bae433ae" Jan 26 12:05:20 crc kubenswrapper[4867]: I0126 12:05:20.691825 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-hk4v8"] Jan 26 12:05:20 crc kubenswrapper[4867]: I0126 12:05:20.705873 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-hk4v8"] Jan 26 12:05:20 crc kubenswrapper[4867]: I0126 12:05:20.711424 4867 scope.go:117] "RemoveContainer" containerID="24ab105534763f0b2ad3b7855656314eefc31fcd870a537bde9f66b1738d91a0" Jan 26 12:05:20 crc kubenswrapper[4867]: I0126 12:05:20.714595 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-nln79"] Jan 26 12:05:20 crc kubenswrapper[4867]: I0126 12:05:20.725788 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-nln79"] Jan 26 12:05:20 crc kubenswrapper[4867]: I0126 12:05:20.733871 4867 scope.go:117] "RemoveContainer" containerID="bc855d17d5eabc30745b19a16df3db01c0360784366e637769df79077f5dc1cc" Jan 26 12:05:20 crc kubenswrapper[4867]: I0126 12:05:20.748881 4867 scope.go:117] "RemoveContainer" containerID="a2baf4159d509ec081aed0e2b956e6dd8ae53be74b81c7eb277010b9190b6f82" Jan 26 12:05:20 crc kubenswrapper[4867]: I0126 12:05:20.772039 4867 scope.go:117] "RemoveContainer" containerID="63fff923f0e006246219d05ae5bca4c1d3fd9ec4e8f9130eafde1fe7ec58acea" Jan 26 12:05:22 crc kubenswrapper[4867]: I0126 12:05:22.578209 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="42b52b06-e34e-4a25-b45f-d0f8ad36c257" path="/var/lib/kubelet/pods/42b52b06-e34e-4a25-b45f-d0f8ad36c257/volumes" Jan 26 12:05:22 crc kubenswrapper[4867]: I0126 12:05:22.579548 4867 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e9e0e2a0-c0d9-4b1d-9cb3-cfcc9778232a" path="/var/lib/kubelet/pods/e9e0e2a0-c0d9-4b1d-9cb3-cfcc9778232a/volumes" Jan 26 12:05:36 crc kubenswrapper[4867]: I0126 12:05:36.294021 4867 patch_prober.go:28] interesting pod/machine-config-daemon-g6cth container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 12:05:36 crc kubenswrapper[4867]: I0126 12:05:36.294626 4867 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-g6cth" podUID="115cad9f-057f-4e63-b408-8fa7a358a191" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 12:05:39 crc kubenswrapper[4867]: I0126 12:05:39.017994 4867 scope.go:117] "RemoveContainer" containerID="37c2ed4d39ed9dfe94a45a1614d065fb3c8d9d512e81e7c33a60e9971adb6153" Jan 26 12:05:39 crc kubenswrapper[4867]: I0126 12:05:39.059256 4867 scope.go:117] "RemoveContainer" containerID="7201e6f91582cacc2e9dd770e0f58b52a50e3986db8e1c8080473bad8917d9e0" Jan 26 12:06:06 crc kubenswrapper[4867]: I0126 12:06:06.294369 4867 patch_prober.go:28] interesting pod/machine-config-daemon-g6cth container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 12:06:06 crc kubenswrapper[4867]: I0126 12:06:06.295918 4867 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-g6cth" podUID="115cad9f-057f-4e63-b408-8fa7a358a191" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 
127.0.0.1:8798: connect: connection refused"